All active Hardware Engineer roles based in Vancouver.
Tenstorrent is leading the industry in cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Curious about how cutting-edge hardware actually comes to life? We're looking for someone who’s excited to dive into the core of next-gen systems and help make them real. In this role, you’ll validate high-speed interfaces, solve complex system-level puzzles, and collaborate across teams to shape the future of AI/ML computing. If firmware, hardware, and hands-on debugging sound like your kind of fun — let’s chat!
This role is hybrid and based in Vancouver, Canada.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
What We Need
What You Will Learn
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
Ready to apply?
Apply to Tenstorrent
Tenstorrent is seeking an AI Processor IP Product Engineer to be the technical bridge between our cutting-edge AI processor technology and customer success. You'll guide customers through the integration of our advanced AI processors, RISC-V CPUs, and chiplet solutions into their SoCs, ensuring optimal performance and accelerated time-to-market. If you thrive in customer-facing roles and want to shape the deployment of revolutionary AI hardware across the industry, join our team.
This role is hybrid, based out of Toronto or Vancouver, Canada.
Who You Are
What We Need
What You Will Learn
Compensation for all engineers at Tenstorrent ranges from $100k to $500k, including base and variable compensation targets. Experience, skills, education, background, and location all impact the actual offer made.
Ready to apply?
Apply to Tenstorrent
This role sits at the intersection of embedded systems, silicon validation, and advanced networking. You'll work with best-in-class IP from leading vendors and in-house designs, bringing up and validating these IPs in silicon, and building robust validation infrastructure that ensures performance, interoperability, and reliability at scale.
This role is hybrid, based out of Toronto, Canada; Vancouver, Canada; Santa Clara, California; or Austin, Texas.
Who You Are
What We Need
What You Will Learn
Ready to apply?
Apply to Tenstorrent
Astera Labs (NASDAQ: ALAB) provides rack-scale AI infrastructure through purpose-built connectivity solutions. By collaborating with hyperscalers and ecosystem partners, Astera Labs enables organizations to unlock the full potential of modern AI. Astera Labs’ Intelligent Connectivity Platform integrates CXL®, Ethernet, NVLink, PCIe®, and UALink™ semiconductor-based technologies with the company’s COSMOS software suite to unify diverse components into cohesive, flexible systems that deliver end-to-end scale-up and scale-out connectivity. The company’s custom connectivity solutions business complements its standards-based portfolio, enabling customers to deploy tailored architectures to meet their unique infrastructure requirements. Discover more at www.asteralabs.com.
Role Overview
As a Principal Product Applications Engineer at Astera Labs, you will be at the forefront of enabling the world's leading cloud service providers and server/network OEMs to successfully deploy connectivity solutions. This high-impact role sits at the intersection of cutting-edge silicon, firmware, and customer success—where you'll tackle complex technical challenges that push the boundaries of AI infrastructure connectivity.
You will serve as a trusted technical advisor to strategic customers, guiding them through design-in, bring-up, and optimization of Astera Labs products. With a strong firmware orientation, you'll dive deep into embedded systems, debug intricate firmware-hardware interactions, and drive issues to resolution across the full stack. Your insights from the field will directly influence product roadmaps and engineering priorities, making you a critical bridge between customers and our R&D teams.
This is an opportunity to work on industry-defining connectivity products that power next-generation data centers and AI platforms—all within a fast-paced, hypergrowth environment where your contributions have immediate and visible impact.
Key Responsibilities
Basic Qualifications
Preferred Qualifications
The base salary range for this role is CAD 180,000 – CAD 205,000. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for a performance bonus, equity, and benefits.
We know that creativity and innovation happen more often when teams include diverse ideas, backgrounds, and experiences, and we actively encourage everyone with relevant experience to apply, including people of color, LGBTQ+ and non-binary people, veterans, parents, and individuals with disabilities.
Ready to apply?
Apply to Astera Labs
Shift4 (NYSE: FOUR) is boldly redefining commerce by simplifying complex payments ecosystems across the world. As the leader in commerce-enabling technology, Shift4 powers billions of transactions annually for hundreds of thousands of businesses in virtually every industry. For more information, visit www.shift4.com.
Software Engineer, Senior
Shift4 is expanding globally and we are looking for a key software developer to join our innovative Software Engineering team! This opportunity involves designing, developing and deploying Shift4’s Commerce Engine payment solution that delivers a modern and integrated payment processing experience at Retail, Restaurant, and Hotel merchants.
The focal point of the solution consists of well-designed and highly functional business apps deployed on custom payment hardware. We have an ambitious product roadmap for this platform and are looking to expand the team with the best and brightest to ensure we continue to drive the industry with our innovative solutions and change the basis of competition.
This role is onsite and can be based at either of the following Shift4 locations: Las Vegas, NV, USA, or Vancouver, BC, Canada.
Relocation assistance may be available.
Responsibilities
Qualifications
Preferred Skills:
We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity and/or expression, status as a veteran, and basis of disability or any other federal, state or local protected class.
Ready to apply?
Apply to Shift4
At Astera Labs, we design cutting-edge connectivity solutions that power the world’s largest cloud service providers, server, and networking OEMs. As a Platform Engineer, you’ll play a pivotal role in building and supporting the platforms that enable these products to thrive in real-world deployments.
You’ll create and maintain tools that make our products seamless to integrate, highly debuggable, and delightful to use for both customers and internal engineering teams. Your work will span software, systems, and silicon – developing automation, diving into labs with state-of-the-art equipment, and collaborating closely with customers to solve the toughest technical challenges. Beyond execution, you’ll influence product direction by driving innovation, providing feedback to design teams, and streamlining validation and deployment processes.
This is a high-impact, hands-on role at the intersection of hardware and software, where curiosity, creativity, and technical depth come together to define the next generation of data center infrastructure.
Basic Qualifications
Required Experience
Preferred Experience
What You’ll Gain
The base pay range for this role is between $140,000 and $175,000. Your actual base pay will depend on variables that may include your background, qualifications, and experience.
Ready to apply?
Apply to Astera Labs
Role Overview
Astera Labs is seeking System Validation Engineers across multiple levels to lead post-silicon bring-up and system validation for high-performance PCIe and CXL memory expansion products used in AI and cloud data centers. You will design and execute validation plans, automate data-centric test flows, drive root-cause investigations across silicon, firmware, hardware, and systems, and work directly with customers to validate real-world performance and interoperability.
This role is based in our Vancouver office, which is a strategic growth hub for Astera Labs' validation team. You'll have the opportunity to be a foundational member of this expanding site while collaborating closely with our core team in San Jose. This is a unique chance to help shape the team's culture, processes, and technical direction as we scale our validation capabilities to meet surging demand for AI infrastructure connectivity.
What Success Looks Like:
Key Responsibilities
Basic Qualifications
Preferred Qualifications
The base salary range for this role is CAD $125,000 – $290,000, depending on experience, level, and business need. This role may be eligible for a discretionary bonus, incentives, and benefits.
Ready to apply?
Apply to Astera Labs
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, transforming key workloads with ultra-high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
Cerebras builds wafer-scale AI processors—single chips delivering tens of PB/s of memory bandwidth and a dataflow architecture that accelerates at a granularity no multi-device system can match. The Advanced Technology Group (ATG) is Cerebras’ pathfinding organization. We work ahead of product to explore new architectures, demonstrate breakthrough performance on scientific and AI workloads, and shape the technical roadmap for future Cerebras hardware and software. Our work regularly appears at top-tier venues (Supercomputing, SIAM, IEEE, and NeurIPS) and directly influences the design of next-generation wafer-scale systems.
We are seeking R&D Engineers to join Cerebras' Advanced Technology Group. You will design and implement workloads that establish new performance benchmarks on wafer-scale hardware, leveraging architectural features that no traditional platform offers. The scope ranges from large-scale scientific simulations to emerging AI/ML models, and the work sits at the intersection of algorithm design, compiler co-optimization, and hardware architecture. You will collaborate closely with Cerebras’ ASIC, compiler, kernel, and AI teams as well as external partners at universities and national laboratories.
We are hiring across several focus areas. Exceptional depth in one or more of the following is a strong signal:
We are hiring for multiple positions across experience levels. If this work resonates, we encourage you to apply.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
Ready to apply?
Apply to Cerebras Systems
We are seeking Compiler Engineers to join a small team of specialists working on our emerging Tungsten language compiler. Tungsten is Cerebras’ dataflow programming language, purpose-built for wafer-scale hardware. You will work on the Tungsten compiler from language design through code generation, building the toolchain that translates high-level intent into efficient execution across hundreds of thousands of cores with a memory and interconnect model unlike anything in conventional computing.
This is not incremental work on an existing backend. The architecture is new, the programming model is new, and the compiler is where those two things meet. You will collaborate closely with Cerebras’ ASIC, kernel, and AI teams, and your design decisions will directly shape both the language and the hardware it targets. Beyond the compiler itself, the broader toolchain—runtime, debugger, simulator—is still being built, and we are equally interested in engineers who want to own those pieces of the developer experience on novel hardware.
Ready to apply?
Apply to Cerebras Systems
Synthesis Health
Who We Are
We’re a mission- and values-driven company with tremendous dedication to our customers. Our 100% remote team works toward a common goal: to revolutionize healthcare through innovation, collaboration, and commitment to our core values and behaviors.
About the Opportunity
We are looking for a Staff Platform Engineer to serve as the architect of our edge.
In this critical role, you will own the boundary between the world and our platform: the "Customer Edge", the mission-critical software gateways that run inside hospital networks and act as the secure bridge for DICOM and HL7 data into our cloud platform.
You will solve unique distributed systems challenges: How do you buffer terabytes of imaging data on-premise when the hospital's internet goes down? How do you safely auto-update thousands of remote agents behind strict enterprise firewalls? How do you enforce fairness and rate limiting to prevent one tenant from degrading the experience for everyone else?
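The fairness and rate-limiting question above is commonly answered with per-tenant token buckets: each tenant draws from its own budget, so a single noisy tenant cannot starve the others. A minimal illustrative sketch follows; the class name and parameters are hypothetical and not Synthesis's actual design:

```python
import time


class TenantRateLimiter:
    """Per-tenant token bucket. Each tenant's bucket refills at a fixed
    rate and caps at a burst size, isolating tenants from one another."""

    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec   # tokens added per second
        self.burst = burst         # bucket capacity (max burst)
        self.buckets = {}          # tenant_id -> (tokens, last_refill_time)

    def allow(self, tenant_id: str, cost: float = 1.0, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(tenant_id, (self.burst, now))
        # Refill in proportion to elapsed time, capped at burst capacity.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= cost:
            self.buckets[tenant_id] = (tokens - cost, now)
            return True
        self.buckets[tenant_id] = (tokens, now)   # over budget: reject
        return False
```

A request that would exceed one tenant's budget is rejected without touching any other tenant's bucket, which is the fairness property the role description alludes to.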
This is a hands-on leadership role. You will write high-performance code, define the architectural patterns for edge ingestion, and mentor engineers on the nuances of network reliability and secure tunneling.
Key Responsibilities
Customer Gateway Architecture & High Availability (HA)
Cloud Ingestion & Throughput Protection
Network & Connectivity Security
What We’re Looking For
Preferred Qualifications
Why You Should Join Us
We are a values-driven company. Our values:
The behaviors we look for:
Compensation and Benefits
The typical salary range for this position is CAD $120,000 – $150,000. However, Synthesis participates in location-based hiring, and salary ranges can be adjusted based on the candidate's residence.
Other benefits include, but are not limited to: medical, dental, and vision coverage; a “use as needed” vacation policy; and participation in our employee option program.
Ready to apply?
Apply to Synthesis Health