All active Hardware Engineer roles based in Norway.
About the Company
Armada delivers sovereign, industrial-grade AI infrastructure designed to operate where traditional cloud and centralized data center models fall short. Our platform includes Atlas, the control plane; Galleon, modular edge data centers powered by the Armada Edge Platform; and Leviathan, Armada’s AI Factory, which pairs with Bridge, our GPU-as-a-Service offering. Together, these capabilities enable enterprises and governments to deploy, operate, and scale AI workloads securely, efficiently, and close to where data is generated.
The ND8 region represents a strategically important market for Armada. Customers here are technically sophisticated, highly security-conscious, and often operate across multiple countries with strong expectations around data residency, sustainability, and operational excellence. Armada’s success in ND8 depends on senior engineers who can both execute deeply technical deployments and maintain long-term customer ownership.
The Senior Forward Deploy Engineer for ND8 is a hybrid role responsible for hands-on deployment execution and ongoing technical account ownership across the Nordic region. This role combines the execution depth of a Forward Deploy Engineer with the long-term ownership and customer leadership of a Technical Account Manager.
In the early phases of engagement, the Senior FDE leads platform bring-up, deployment execution, and initial workload onboarding. Once customers are live, the same engineer remains the primary technical owner, driving adoption, operational maturity, reliability, and expansion across multiple countries and sites.
This role reports to the Sr. Director of Customer Success & Delivery, EMEA & ANZ, and is intentionally designed to avoid fragmented ownership between deployment and post-sale success. Success is measured by deployment quality, speed to production, customer confidence, platform adoption, retention, and expansion.
Deployment execution and platform bring-up
Technical account ownership and customer success
Joint Success Plan ownership
Time-to-value acceleration and adoption
Operational excellence and reliability
Data residency, security, and regulatory execution
Field leadership and mentorship
Cross-functional collaboration
You're a Great Fit if You're
Equal Opportunity Statement
At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and the business needs at the time.
Unsolicited Resumes and Candidates
Armada does not accept unsolicited resumes or candidate submissions from external agencies or recruiters. All candidates must apply directly through our careers page. Any resumes submitted by agencies without a prior signed agreement will be considered unsolicited and Armada will not be obligated to pay any fees.
EnCharge AI is a leader in advanced AI hardware and software systems for edge-to-cloud computing. EnCharge’s robust and scalable next-generation in-memory computing technology provides orders-of-magnitude higher compute efficiency and density compared to today’s best-in-class solutions. The high-performance architecture is coupled with seamless software integration and will enable the immense potential of AI to be accessible in power, energy, and space constrained applications. EnCharge AI launched in 2022 and is led by veteran technologists with backgrounds in semiconductor design and AI systems.
About the Role
EnCharge AI is seeking a highly skilled and experienced AI Compiler Engineer to spearhead the effort to develop and optimize graph compilers tailored to cutting-edge AI and ML workloads. You will collaborate with hardware architects and AI researchers to enhance performance, optimize computation graphs, and enable efficient model deployment on EnCharge’s Inference Accelerators.
Responsibilities
Qualifications
EnCharge AI is an equal employment opportunity employer in the United States.
Why Entersekt
Founded over 16 years ago, with more recent investments from Accel-KKR, Entersekt is a leader in digital banking fraud prevention and payment security, including mobile authentication, mobile app security, and 3D Secure authentication for issuers, acquirers and payment networks. We offer highly scalable products with a track record of success across multiple continents.
Entersekt enables secure digital transactions for leading financial institutions globally. We exist to create a world where everyone can transact digitally without fear or compromise. Currently, we protect the digital transactions of over 210 million active users on our platform and hold 120 active patents that recognize innovation in digital security, payments, and user experience. Entersekt offers customers secure authentication and digital payments experiences that remove unnecessary friction. Entersekt has a diverse product portfolio and aggressive roadmap that positions the company well to sustain competitive advantage as it expands globally with emphasis on North America and European markets.
The Ideal Candidate
You thrive in an environment of organised chaos. You can hold ownership of long‑running infrastructure projects, while also dropping everything to troubleshoot urgent network, system or application issues.
You are comfortable following strict procedures where required, and applying pragmatic judgement and creative problem solving when needed. You approach gaps in your knowledge with an “I don’t know yet” mindset, and you enjoy sharing knowledge and teaching skilled colleagues new approaches.
The Role
As a Senior Systems Administrator, you will be part of a small, highly capable team responsible for building and maintaining Entersekt’s infrastructure, deploying applications, and managing the databases and services that support them.
Your primary focus will be on network and datacentre operations. As these activities typically require only part of your time, you will also take on a broad range of systems administration responsibilities and provide support to internal teams and customers. This is a senior, hands‑on individual contributor role in a security‑critical environment.
Responsibilities
Skills and Experience
Successful candidates for this role will generally possess the following qualifications and skills:
Personality Attributes
We place a lot of value on how we treat prospective employees and appreciate the time and effort that goes into job hunting. That is why we aim to keep the hiring process as quick and seamless as possible while ensuring the best possible fit for both you and the company.
Working at Entersekt is truly a dream. You get exposed to cutting-edge technology, colleagues who are leaders in their fields, and an awesome working environment that includes flexible hours, remote work, and plenty of growth opportunities.
Apply for this position via the "apply now" tab, or view our other roles on the Entersekt Careers page.
Entersekt is an Equal Opportunity Employer:
We are committed to building an inclusive and diverse workforce that reflects the global communities we serve. For all South Africa-based roles, preference will be given to candidates from historically disadvantaged groups, in accordance with local Employment Equity objectives.
Mission Brief
As a Product Assurance Engineer, Launch Operations, you will join our team on the beautiful Arctic island of Andøya to carry out test and launch campaigns of our Spectrum launch vehicle. You will act as a quality and compliance control point for ground equipment, processes, and procedures, while also working closely with our Quality department in Germany to ensure the quality of the flight hardware.
To do so, you will carry out in-process and final inspections, conduct document reviews, and follow up on non-conformance reports. You will follow established processes, but will also have the opportunity to introduce new frameworks tailored to this launch site.
Your Role in Our Space Mission:
Qualification Checklist
Bonus Skills
Benefits
Who we are
We are Isar Aerospace, and we are at the forefront of New Space, building a modern space business to enable faster, better, and cheaper access to space.
Our mission is to help democratise space and use it for good, in order to improve life on Earth now and for future generations.
We are a fast-growing company aiming to provide sustainable and environmentally friendly launch solutions for small and medium-sized satellites and constellations into Low Earth Orbit. The company is privately funded by world-leading technology investors with strong commitment and support, and our team is made up of driven and talented people with a real passion for space innovation.
We're making rockets in a way that hasn't been done before, disrupting a traditional industry. If you are up for the challenge, want to work on cutting-edge projects, and want to be part of a team changing the world for the better, come join us and launch your career!
Want to find out more about us?
Visit www.isaraerospace.com
Data Protection
We process your personal data for the purpose of managing the recruitment process and assessing your application. For detailed information on how your data is processed, including your rights, please refer to our Privacy Policy.
Disclaimer
Isar Aerospace SE is an equal-opportunity employer committed to fairness and inclusivity. We do not prioritize any specific religion, gender, nationality, or background. Due to security clearance requirements, affiliations with countries listed under § 13 para. 1 no. 17 SÜG may affect the application process. All qualified applicants are encouraged to apply.
About the Role
EnCharge AI is looking for an experienced AI Research Engineer to optimize deep learning models for deployment on edge AI platforms. You will work on model compression, quantization strategies, and efficient inference techniques to improve the performance of AI workloads.
Responsibilities
Research and develop quantization-aware training (QAT) and post-training quantization (PTQ) techniques for deep learning models.
Implement low-bit precision optimizations (e.g., INT8, BF16).
Design and optimize efficient inference algorithms for AI workloads, focusing on latency, memory footprint, and power efficiency.
Work with frameworks such as PyTorch, ONNX Runtime, and TVM to deploy optimized models.
Analyze accuracy trade-offs and develop calibration techniques to mitigate precision loss in quantized models.
Collaborate with hardware engineers to optimize model execution for edge devices and NPUs.
Contribute to research on knowledge distillation, sparsity, pruning, and model compression techniques.
Benchmark performance across different hardware and software stacks.
Stay updated with the latest advancements in AI efficiency, model compression, and hardware acceleration.
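The quantization responsibilities above can be made concrete with a minimal, framework-free sketch of post-training affine INT8 quantization: calibrate a scale and zero-point from the observed value range, quantize, then dequantize to measure the round-trip error. All names and numbers here are illustrative only; they are not EnCharge APIs.

```python
# Sketch of post-training affine INT8 quantization (PTQ):
# calibration derives scale/zero-point from observed min/max.

def calibrate(values, qmin=-128, qmax=127):
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # range must include zero
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    # Map each float to the nearest representable integer, clamped to range.
    return [max(qmin, min(qmax, round(v / scale + zero_point))) for v in values]

def dequantize(q, scale, zero_point):
    return [(x - zero_point) * scale for x in q]

weights = [-1.2, -0.4, 0.0, 0.7, 2.5]          # illustrative tensor
s, z = calibrate(weights)
q = quantize(weights, s, z)
recon = dequantize(q, s, z)
max_err = max(abs(a - b) for a, b in zip(weights, recon))
# max_err stays within about one quantization step (the scale)
```

In real deployments the calibration statistics would come from representative activation data, and percentile or entropy-based range selection is often used instead of raw min/max to reduce the impact of outliers.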
Qualifications
Master’s or Ph.D. in Computer Science, Electrical Engineering, or a related field.
Strong expertise in deep learning, model optimization, and numerical precision analysis.
Hands-on experience with model quantization techniques (QAT, PTQ, mixed precision).
Proficiency in Python, C++, CUDA, or OpenCL for performance optimization.
Experience with AI frameworks: PyTorch, TensorFlow, ONNX Runtime, TVM, TensorRT, or OpenVINO.
Understanding of low-level hardware acceleration (e.g., SIMD, AVX, Tensor Cores, VNNI).
Familiarity with compiler optimizations for ML workloads (e.g., XLA, MLIR, LLVM).
About the Role
EnCharge AI is seeking an AI Runtime Engineer to develop and optimize the execution stack for our next-generation AI accelerator. In this role, you will work on low-latency, high-performance runtime software that enables efficient execution of deep learning models on specialized hardware. You will collaborate with hardware, compiler, and AI framework teams to deliver optimized AI inference and training performance across cloud and edge environments.
Responsibilities
Qualifications
About the Role
EnCharge AI is seeking a highly skilled Device Driver Engineer to design and implement a high-performance driver stack for our cutting-edge AI accelerator hardware. In this role, you will work closely with hardware, firmware, and AI software teams to develop low-latency, high-bandwidth communication between the host system and the AI accelerator.
Responsibilities
Qualifications
About the Role
EnCharge AI is looking for an Embedded Software Engineer to develop the firmware for our Edge AI processors. The candidate must possess an excellent understanding of computer architecture and operating system concepts, including, but not limited to, memory management, virtualization, and the PCIe address space. The role includes designing and developing the core firmware for various parts of the SoC. The candidate must possess strong communication skills to interface with the Runtime, Architecture, and Hardware teams.
Responsibilities
Qualifications
About the Role
EnCharge AI is seeking an LLM Inference Deployment Engineer to optimize, deploy, and scale large language models (LLMs) for high-performance inference on its energy-efficient AI accelerators. You will work at the intersection of AI frameworks, model optimization, and runtime execution to ensure efficient model execution and low-latency AI inference.
Responsibilities
Qualifications