All active Hardware Engineer roles based in Toronto.
At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.
At Cloudflare, we’re not looking for people who wait for a polished roadmap; we’re looking for the builders who see the cracks in the Internet that everyone else has simply learned to live with. We value candidates who have the instinct to spot a "normalized" problem and the AI-native curiosity to create a solution using the latest tools. Our culture is built on iteration, leveraging AI to ship faster today and make it better tomorrow, while ensuring that every improvement, no matter how small, is shared across the team to lift everyone up. If you’re the type of person who values curiosity over bureaucracy and believes that AI is a partner in solving tough problems to keep the Internet moving forward, you’ll fit right in.
Available Locations: Austin, Seattle, London, Lisbon, Washington DC, Toronto
Emerging Technologies & Incubation (ETI) is where new and bold products are built and released within Cloudflare. Rather than being constrained by the structures which make Cloudflare a massively successful business, we are able to leverage them to deliver entirely new tools and products to our customers. Cloudflare’s edge and network make it possible to solve problems at massive scale and efficiency which would be impossible for almost any other organization.
ETI’s Storage Infrastructure team is responsible for the core storage layer that underpins many of ETI's stateful services. Our scope ranges from managing the physical hardware to operating the distributed databases and storage systems built upon it. We run this infrastructure globally across Cloudflare's network, which presents unique and complex engineering puzzles: efficiently expanding storage capacity, optimizing rebuild operations, and coordinating operations across failure domains to uphold durability. While other service teams focus on product development, our mission is to ensure the underlying storage is reliable, performant, and scalable.
You’ll be joining a highly motivated team that is building the next generation of distributed storage services.
In this role, you will help build and operate the next generation of globally distributed storage systems. You will own your code from inception to release, delivering solutions at all layers of the stack. On any given day, you might write a design document for a new provisioning system, model failure domain dependencies across edge locations, benchmark new storage hardware, build standardized observability and runbooks for distributed database clusters, or automate operational toil through purpose-built tooling and intelligent automation. You can expect to interact with a variety of languages and technologies including Rust, Go, Saltstack, and Terraform.
Equity
This role is eligible to participate in Cloudflare’s equity plan.
Benefits
Cloudflare offers a complete package of benefits and programs to support you and your family. Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future, and make life a little easier and fun! Below is a description of our benefits for employees in the United States; benefits may vary for employees based outside the U.S.
Health & Welfare Benefits
Financial Benefits
Time Off
What Makes Cloudflare Special?
We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.
Project Galileo: Since 2014, we've equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, at no cost, using technology already relied on by Cloudflare’s enterprise customers.
Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we've provided services to more than 425 local government election websites in 33 states.
1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. It is available publicly for everyone to use and is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.
Sound like something you’d like to be a part of? We’d love to hear from you!
Please note that applicants who progress to the offer stage of the interview process may be asked to attend an in-person interview within one of the Cloudflare Offices or Cloudflare Hubs. More details about this will be available at that stage of the interview process.
This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.
Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.
Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.
Ready to apply?
Apply to Cloudflare
Stripe is a financial infrastructure platform for businesses. Millions of companies - from the world’s largest enterprises to the most ambitious startups - use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone's reach while doing the most important work of your career.
The Corporate Technology (CorpTech) Services team is a strategic support partner to all Stripes, in office and remote. With a global team, we ensure smooth new-hire onboarding, account off-boarding, and the operation of critical business systems.
We’re looking for a Tier 1 Support Engineer to join the AMER CorpTech Services team to provide in-person, thoughtful, and individualized support for all Stripes. Stripe is looking for individuals who can work in a fast-paced environment and work autonomously to deliver team-oriented results.
You’ll be responsible for providing technical assistance and support related to computer systems, hardware, and software: responding to queries, running diagnostic programs, isolating problems, and determining and implementing solutions. The role also requires in-person support, including setting up desks and managing and auditing peripherals.
You have the ability to take initiative on tickets and contribute to project design and implementation. You are skilled at writing, updating and maintaining technical documentation and sending directed communications. You are comfortable working as an individual contributor on a global team that is driving towards a common goal. You will best succeed in this role by leading on things you are passionate about while supporting others in their passion.
You love problem solving and collaborating with others to provide world class support. Being the best fit for this position means you are both humble and confident. You strive towards excellence but understand your limitations and don’t hesitate to ask for help when needed.
We’re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.
Ready to apply?
Apply to Stripe
About Windscribe
Windscribe is a leading cyber security and privacy company launched in April 2016 and now serving more than 70 million users. We believe that the internet was created so that people across the globe could have access to any type of information, no matter where they are. Our mission is to transform the internet with easy-to-use yet powerful privacy and security tools that allow anyone to circumvent censorship, access geographically restricted content, and minimize their exposure to marketers, criminals, and surveillance dragnets. Our well-received applications have been featured in Lifehacker, TechRadar, and CNET.
Headquartered in Toronto, Canada, Windscribe operates two products:
We have infrastructure all over the world and our goal is preserving uncensored Internet access and online privacy for all. Right now we are looking for a Site Reliability Engineer to help us tame DNS.
We’re looking for your version of “seen it all (mostly)”. You may not have deep knowledge in all parts of the stack but you know the lay of the land. You’re quick to learn, but already possess a solid understanding of all things DNS.
Thank you for considering this opportunity. Funded.club Senior Recruiters partner exclusively with Startups and are in direct communication with hiring managers and founding team members.
Funded.club uses AI-assisted tools as part of our candidate sourcing and screening process. All applications are reviewed by a human recruiter, who makes all decisions about which candidates to progress. If your application seems like a good fit for the position, a real member of our team will contact you soon!
Ready to apply?
Apply to Funded.club
About IonQ:
IonQ, Inc. [NYSE: IONQ] is the world’s leading quantum platform and merchant supplier, delivering integrated quantum solutions across computing, networking, sensing, and security. IonQ’s newest generation of quantum computers, the IonQ Tempo, is the latest in a line of cutting-edge systems that have been helping customers and partners, including Amazon Web Services and AstraZeneca, achieve 20x performance results and accelerate innovation in drug discovery, materials science, financial modeling, logistics, cybersecurity, and defense. In 2025, the company achieved 99.99% two-qubit gate fidelity, setting a world record in quantum computing performance.
Headquartered in College Park, Maryland, IonQ has operations in California, Colorado, Massachusetts, Tennessee, Washington, Italy, South Korea, Sweden, Switzerland, Canada, and the United Kingdom. Our quantum computing services are available through all major cloud providers, while we also meet the needs of networking and sensing customers across land, sea, air, and space. IonQ is making quantum platforms more accessible and impactful than ever before.
Location: This role can be based at our office in Bothell, WA (US) or in Oxford, England (UK).
Travel: Twice per year
Job ID: 1453
The Role
As a Staff Software Engineer for Developer Tools, you will be responsible for delivering the next generation of quantum compiler tools and features, with a focus on integration into quantum software applications, overall developer experience, documentation, and developer tooling. Your work will involve setting high-level technical standards, mentoring senior engineers and scientists, and providing deep expertise to solve the most challenging problems in quantum compilation, optimization, and hardware interfaces. You will drive the adoption and enhancement of critical, user-facing compiler infrastructure, ensuring our tools are aligned with cutting-edge quantum research (e.g., advanced error mitigation, novel compiler optimization techniques). You will work across organizational boundaries (Compiler, QEC, Applications, and Engineering) to support scientific breakthroughs and define the technical strategy for packaging, documentation, and release processes for major components of IonQ’s developer tooling ecosystem.
The Developer Tools team builds the critical software layer for IonQ’s quantum software tools. In this Staff role, you will be a key driver in shaping the future of this ecosystem, and bridging the gap between critical parts of our engineering and computing organizations. Your impact will be measured by the successful delivery of critical, cross-functional projects and the elevation of technical execution across the entire team.
As a Staff Software Engineer, your influence extends beyond a single project; you define the technical bedrock upon which IonQ's developer ecosystem is built. The compiler is the critical interface where quantum algorithms are realized on hardware, and your architectural decisions will dictate the performance ceiling and accessibility of our systems. You will lead the charge in making advanced quantum techniques, from sophisticated hardware-aware compilation to state-of-the-art error mitigation, seamlessly available to every developer. This role offers the unique opportunity to leverage your deep experience to solve fundamental technical challenges, mentor the next generation of quantum software leaders, and accelerate the global adoption of quantum computing.
Responsibilities
Requirements:
Preferred Qualifications:
The approximate base salary range for this position is $167,808 - $219,704 (USD). The total compensation package includes base, bonus, equity, and a range of benefit options found on our career site.
Compensation will vary based on individual factors such as education, qualifications, and experience of the final candidate(s), specific office location, and calibration against relevant market data and internal team equity. Posted base salary figures are subject to change as new market data becomes available. Our benefits include comprehensive medical, dental, and vision plans, matching 401K, unlimited PTO and paid holidays, parental/adoption leave, legal insurance, and a home technology stipend. Details of participation in these benefit plans will be provided when a candidate receives an offer of employment.
At IonQ, we believe in fair treatment, access, opportunity, and advancement for all while striving to identify and eliminate barriers. We empower employees to thrive by fostering a culture of autonomy, productivity, and respect. We are dedicated to creating an environment where individuals can feel welcomed, respected, supported, and valued.
We are committed to equity and justice. We welcome different voices and viewpoints and do not discriminate on the basis of race, religion, ancestry, physical and/or mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, transgender status, age, sexual orientation, military or veteran status, or any other basis protected by law. We are proud to be an Equal Employment Opportunity employer.
US Technical Jobs. The position you are applying for will require access to technology that is subject to U.S. export control and government contract restrictions. Employment with IonQ is contingent on either verifying “U.S. Person” (e.g., U.S. citizen, U.S. national, U.S. permanent resident, or lawfully admitted into the U.S. as a refugee or granted asylum) status for export controls and government contracts work, obtaining any necessary license, and/or confirming the availability of a license exception under U.S. export controls. Please note that in the absence of confirming you are a U.S. Person for export control and government contracts work purposes, IonQ may choose not to apply for a license or decline to use a license exception (if available) for you to access export-controlled technology that may require authorization, and similarly, you may not qualify for government contracts work that requires U.S. Persons, and IonQ may decline to proceed with your application on those bases alone. Accordingly, we will have some additional questions regarding your immigration status that will be used for export control and compliance purposes, and the answers will be reviewed by compliance personnel to ensure compliance with federal law.
US Non-Technical Jobs. Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum. Accordingly, we will have some additional questions regarding your immigration status that will be used for export control and compliance purposes, and the answers will be reviewed by compliance personnel to ensure compliance with federal law.
If you are interested in being a part of our team and mission, we encourage you to apply!
Ready to apply?
Apply to IonQ
Available Locations: Toronto, ON
Cloudflare’s Senior Forward Deployed Engineers (FDEs) operate at the intersection of product engineering and customer impact.
As an FDE, you will be embedded within one of Cloudflare’s most strategic global customers, working side-by-side with their engineering teams to build and deploy solutions using Cloudflare’s platform. Unlike traditional Solutions Architects or consultants, you will write production code, shape technical architecture, and directly influence how Cloudflare products are used at massive scale.
You will operate as a technical extension of both organizations - helping the customer ship faster while surfacing real-world product insights back to Cloudflare engineering.
This role is ideal for engineers who want to:
Compensation
Compensation may be adjusted depending on work location.
Ready to apply?
Apply to Cloudflare
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, transforming key workloads with ultra-high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
About The Role
The Inference Core Platform group is at the heart of Cerebras' mission to deliver the world’s fastest AI inference. Our team builds the foundational software and hardware infrastructure that powers low-latency, high-speed, high-throughput deployment on the Cerebras Wafer-Scale Engine (WSE). We are responsible for the full stack—from model compilation and scheduling down to custom hardware kernels and driver development.
The ML Performance Benchmarking team plays a pivotal role in shaping the performance and scalability of AI inference on one of the most advanced computing systems ever built. We drive the bring-up of core inference capabilities and deliver performance improvements at every stage of development – from early prototyping to production deployment.
We're looking for passionate engineers to join us in redefining the limits of AI inference. If you thrive on building systems that measure, analyze, and optimize performance at scale, this is your opportunity to make a transformative impact on the future of AI.
Scope of the team includes:
Skills & Qualifications
Preferred Skills & Qualifications
Location
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
Ready to apply?
Apply to Cerebras Systems
Tenstorrent is leading the industry in cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and we share a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
The IP Product Team is where deep tech meets big-picture strategy. We’re the bridge between engineering brilliance and real-world impact — defining how our core IP comes to life and gets delivered to customers.
We’re looking for an Intern who loves technology and wants to learn how world-class intellectual property (IP) gets transformed into world-changing products. You’ll collaborate with engineers, product managers, and cross-functional teams to help shape the future of Tenstorrent’s RISC-V CPU and AI IP portfolio.
This role is remote, based out of the U.S. or Canada.
Who You Are
What We Need
What You Will Learn
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E1, and E2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
Ready to apply?
Apply to Tenstorrent University Jobs
What You Will Learn
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
Ready to apply?
Apply to Tenstorrent University Jobs
Tenstorrent’s Metal Infra team is all about developer happiness—internal and external. As a Release Engineer Intern, you’ll help automate, maintain, and improve the systems behind our open-source Metal stack (tt-metal on GitHub). From CI pipelines to dashboards and build systems, your work will help teams ship faster and smarter.
This role is hybrid, based out of Toronto.
Who You Are
What We Need
What You Will Learn
Ready to apply?
Apply to Tenstorrent University Jobs
This role sits at the intersection of embedded systems, silicon validation, and advanced networking. You'll work with best-in-class IP from leading vendors and in-house designs, bringing up and validating these IPs in silicon, and building robust validation infrastructure that ensures performance, interoperability, and reliability at scale.
This role is hybrid, based out of Toronto, Canada; Vancouver, Canada; Santa Clara, California; or Austin, Texas.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
What We Need
What You Will Learn
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Ready to apply?
Apply to Tenstorrent
Tenstorrent is seeking a Signal Integrity Engineer to join our growing team. The ideal candidate will have extensive experience with high-speed interconnect design, breakout design, material trade-offs, and verification. A background in electrical engineering, electronics, or a related field is required. Must love all things high speed!
This role is hybrid, based out of Santa Clara, CA; Austin, TX; or Toronto, ON.
Who You Are
What We Need
What You Will Learn
Ready to apply?
Apply to Tenstorrent
Tenstorrent is building large-scale AI systems across internal clusters and customer deployments. This role sits at the intersection of site reliability, infrastructure operations, and customer engineering, ensuring our systems are reliable, observable, and production-ready.
This role is hybrid, based out of Toronto, ON; Austin, TX; or Santa Clara, CA.
Who You Are
What We Need
What You Will Learn
Ready to apply?
Apply to Tenstorrent
As a Software Engineer on the Metal Runtime team at Tenstorrent, you’ll work on the low-level software that powers our AI accelerators. You’ll design fast, efficient runtime systems that run close to the hardware, and define the host and device APIs that expose these capabilities to the rest of the software stack. We believe APIs are a core part of systems design: they encode hardware semantics, performance tradeoffs, and concurrency models, and they live longer than any single implementation.
If you enjoy pushing performance, working close to the metal, and designing abstractions that make complex systems usable without sacrificing control, this is your kind of role.
This role is hybrid, based out of Santa Clara, CA; Austin, TX; Toronto, ON.
Who You Are
What We Need
Nice to have:
What You Will Learn
Ready to apply?
Apply to Tenstorrent
As a Software Engineer on the Metal Runtime team at Tenstorrent, you’ll work on the low-level software that powers our AI accelerators. You’ll build and optimize high-performance runtime systems that execute directly on the hardware, focusing on scheduling, memory movement, and efficient execution across massively parallel processors. We believe runtime systems are a core part of performance: they determine how hardware resources are utilized, how data flows through the system, and how efficiently workloads are executed at scale.
If you enjoy pushing performance, working close to the metal, and solving complex systems challenges at the hardware/software boundary, this is your kind of role.
This role is hybrid, based out of Santa Clara, CA; Austin, TX; Toronto, ON.
Who You Are
What We Need
What You Will Learn
Ready to apply?
Apply to Tenstorrent
As our TT-Distributed Software Engineer, you will develop and optimize the distributed software systems that power the most efficient, highest-performing AI and HPC clusters. In this role, you'll work on distributed programming across multiple nodes, using systems programming, inter-node communication, and Tenstorrent’s scalable architectures to advance state-of-the-art distributed inference and training infrastructure.
This role is hybrid, based out of Santa Clara, CA; Austin, TX; or Toronto, ON.
Who You Are
What We Need
What You Will Learn
Ready to apply?
Apply to Tenstorrent
Tenstorrent is building the world’s fastest, most efficient AI compute clusters. TT-Fabric is the high-performance nervous system of this platform: the low-level networking layer that lets thousands of RISC-V and AI processors snap together into a single, massively parallel distributed supercomputer. If you love squeezing nanoseconds out of hot paths, designing protocols that move data at absurd scale, and turning messy hardware constraints into elegant distributed systems, this is an opportunity to shape the fabric that future AI models will run on.
This role is hybrid, based out of Santa Clara, CA; Austin, TX; or Toronto, ON.
Who You Are
What We Need
What You Will Learn
Ready to apply?
Apply to Tenstorrent
Join the team revolutionizing AI computing at Tenstorrent. You'll work on TT-Forge, our MLIR-based compiler that enables developers to run AI on all configurations of Tenstorrent hardware using an open-source, performant, and general-purpose compiler. You will be at the forefront of the AI hardware revolution, building compiler technologies that redefine what’s possible.
This role is hybrid, and can be based out of Santa Clara, CA; Austin, TX; or Toronto, ON.
Who You Are
What We Need
What You Will Learn
Ready to apply?
Apply to Tenstorrent
Tenstorrent is seeking an AI Processor IP Product Engineer to be the technical bridge between our cutting-edge AI processor technology and customer success. You'll guide customers through the integration of our advanced AI processors, RISC-V CPUs, and chiplet solutions into their SoCs, ensuring optimal performance and accelerated time-to-market. If you thrive in customer-facing roles and want to shape the deployment of revolutionary AI hardware across the industry, join our team.
This role is hybrid, based out of Toronto, Canada or Vancouver, Canada.
Who You Are
What We Need
What You Will Learn
Ready to apply?
Apply to Tenstorrent
Join the team revolutionizing AI computing at Tenstorrent. You'll work on TT-Forge, our MLIR-based compiler that enables developers to run AI on all configurations of Tenstorrent hardware using an open-source, performant, and general-purpose compiler. You will be at the forefront of the AI hardware revolution, building compiler technologies that redefine what’s possible.
This role is hybrid and based out of Toronto, ON.
Who You Are
What We Need
What You Will Learn
Ready to apply?
Apply to Tenstorrent
Tenstorrent is seeking an experienced Silicon Validation Engineer to validate and qualify our cutting-edge die-to-die (D2D) subsystem, AI, and Processor IP testchips for the rapidly growing chiplet ecosystem. You'll develop hardware infrastructure for validation platforms, perform comprehensive electrical characterization, and support customer silicon bring-up. If you’re passionate about hands-on silicon testing and want to help usher in the chiplet era of semiconductor products, join us.
This role is on-site, based out of Santa Clara, CA; or Toronto, Canada.
Who You Are
What We Need
What You Will Learn
Ready to apply?
Apply to Tenstorrent
At Tenstorrent, we build computers for AI and for the developers shaping its future.
Our high-performance RISC-V CPUs, modular chiplets, and scalable compute systems give developers full control at every layer of the stack, at any scale from single-node experimentation to data-center-scale deployment.
We believe in an open future. Our architecture and software are designed to be edited, forked, and owned. Our team of engineers, dreamers, and first-principle thinkers is redefining how hardware and software converge to accelerate innovation.
As part of a new organization focused on experience, we need engineers for our Developer Relations team who deeply understand developers’ trials, tribulations, and wins. You'll build, present, and contextualize the tools, demos, and interfaces developers need to navigate and fully utilize Tenstorrent hardware and software. You'll meet developers where they are, understand their needs, and partner with them to build an open future.
This role is remote, but you're welcome to work from one of our offices if you're nearby. We encourage candidates of all experience levels to apply. Your interview will determine the best fit, and offers will reflect that assessment.
Ready to apply?
Apply to Tenstorrent
Tenstorrent is seeking a senior High Speed Interconnect / Signal Integrity Engineer to design and validate high-bandwidth links for large-scale AI systems. You will define, model, and qualify interconnect solutions across copper and optical technologies for next-generation AI inference and training clusters.
This role is on-site in Santa Clara, CA, Austin, TX, or Toronto, Canada.
Ready to apply?
Apply to Tenstorrent
Tenstorrent’s AI Software Infrastructure team builds the platforms that power internal development, orchestrate workloads, and manage large-scale AI hardware across on-prem data centers. This team develops and productionizes infrastructure used both internally and externally on Tenstorrent systems.
This role is hybrid, based out of Toronto, ON; Santa Clara, CA; Austin, TX; Belgrade, Serbia; Warsaw, Poland; or Gdańsk, Poland.
Ready to apply?
Apply to Tenstorrent
We are seeking an IP Software Generalist to develop and optimize the software stack for our IP customers, enabling them to successfully integrate our AI and RISC-V hardware into their systems. In this role, you will build and support a versatile range of software components, from bare-metal firmware and drivers to system-level tools and APIs, ensuring a seamless customer experience. You will partner closely with hardware engineering, IP delivery, customer support, and product teams to deliver robust, scalable, and high-performance software solutions tailored to diverse customer architectures. Success in this role requires adaptability, strong system-level debugging skills, and the ability to bridge the gap between complex hardware IP and user-friendly software integration.
This role is hybrid, based out of Austin, TX; Toronto, ON; or Santa Clara, CA.
Ready to apply?
Apply to Tenstorrent
Tenstorrent is seeking a Layout Engineer with expertise in board layout for high-performance computing and AI hardware. This role requires direct experience with multi-layer boards, HDI vias, blind vias, back drilling, DFM, DFA, and routing/layout of interfaces such as GDDR, LPDDR, DDR, PCIe Gen 5 and 6, Ethernet (IEEE 802.3bj/cd/cf), and USB. The ideal candidate will work closely with cross-functional teams of systems, power, signal integrity, and mechanical engineers to ensure that our products meet the layout margins and cutting-edge specifications for mass-production boards for data centers, workstations, and consumer computing applications.
This role is hybrid, based out of Toronto, ON or Santa Clara, CA.
Ready to apply?
Apply to Tenstorrent
The Tensix team is building the high-performance compute fabric that powers Tenstorrent’s AI and ML workloads. As an AI Performance Architect, you will model, analyze, and optimize how real AI workloads run on the Tensix architecture, shaping future hardware features and ensuring every design decision delivers measurable performance gains. This role connects architecture, software, and RTL to push the limits of efficiency and scalability across next-generation AI systems.
This role is hybrid, based out of Toronto, ON; Austin, TX; or remote.
Ready to apply?
Apply to Tenstorrent
Our Tensix team is building custom AI compute cores, RISC-V CPUs, and chiplet-based architectures for datacenter, edge, and automotive AI. Design Verification Engineers on this team validate compute IP and subsystems and build scalable DV infrastructure to keep verification fast, automated, and production-grade.
This role is hybrid, based out of Toronto, ON or Austin, TX.
Ready to apply?
Apply to Tenstorrent
Tenstorrent is seeking a Senior Engineer, System-Level Design Verification to ensure our AI silicon delivers reliable, scalable performance in real workloads. You will validate full-system behavior, from connectivity to throughput, across large distributed compute platforms.
This role is hybrid, based out of Toronto, ON; Austin, TX; or Belgrade, Serbia.
Ready to apply?
Apply to Tenstorrent
Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
We are hiring a talented software engineer to help us build the next generation of photonic AI processors and interconnects. In this role, you will be responsible for developing and extending the device software and firmware stack for Photonic Compute and Photonic interconnect products. You will collaborate with other software teams and hardware systems teams to develop security, telemetry, virtualization, and remote administration functionality.
We offer competitive compensation. The base salary range for this role is determined based on location, experience, educational background, and market data.
Benefits eligibility may vary depending on your employment status and location. Lightmatter recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, sex, disability, age, veteran status, and other protected status as required by applicable law.
Export Control
Candidates must be able to comply with the federally mandated requirements of U.S. export control laws.
Ready to apply?
Apply to Lightmatter
Qsight is a high-growth division of Guidepoint focused on building data intelligence solutions for the healthcare sector. Qsight leverages proprietary datasets and rigorous analysis of alternative data sources to generate actionable insights for top-tier institutional investors, medical device manufacturers, and pharmaceutical companies. The Qsight team develops market intelligence products designed to be highly relevant, accurate, and scalable – delivering superior insights to a diverse, global client base.
We are seeking an experienced, motivated Operations Engineer to join our growing team. This is a broad, multi-hat role focused on SaaS/platform operations and tier-2 support for client-facing systems. You will own the administration and reliability of key tools, troubleshoot and resolve escalations with clear documentation, and build lightweight automation and reporting to reduce manual work as we scale.
You will partner closely with Customer Success, Product, and Engineering to proactively monitor, support, and improve critical systems. Through practical, creative problem-solving, you will strengthen reliability, accelerate time to resolution, and increase operational visibility. Day to day, you will triage and resolve client technical questions, manage vendor license administration and renewals, and produce reporting that informs operational decisions.
This role is hybrid in Toronto (with the option to be fully remote); candidates must be able to work US/CAN Eastern hours from 9 am to 6 pm.
The annual base salary range for this position is $95,000 to $135,000. Additionally, this position is eligible for an annual discretionary bonus based on performance.
You will also be eligible for the following benefits:
Guidepoint is a leading research enablement platform designed to advance understanding and empower our clients’ decision-making process. Powered by innovative technology, real-time data, and hard-to-source expertise, we help our clients to turn answers into action.
Backed by a network of nearly 1.75 million experts and Guidepoint’s 1,600 employees worldwide, we inform leading organizations’ research by delivering on-demand intelligence and research on request. With Guidepoint, companies and investors can better navigate the abundance of information available today, making it both more useful and more powerful.
At Guidepoint, our success relies on the diversity of our employees, advisors, and client base, which allows us to create connections that offer a wealth of perspectives. We are committed to upholding policies that contribute to an equitable and welcoming environment for our community, regardless of background, identity, or experience.
Base salary may vary depending on job-related knowledge, skills, and experience, as well as geographic location.
Ready to apply?
Apply to Guidepoint
Proto is accelerating the world's transition to an open economy with products that increase access and independence for everyone. We're building Bitkey, a simple and safe self-custody bitcoin wallet that will put customers in control, as well as hardware and software that will help decentralize bitcoin mining and enable new and innovative use cases for bitcoin mining. We're developing these products in the open - you can read more about them at bitkey.build and mining.build. Within Proto, our Bitcoin Products team delivers the product and go-to-market strategy, software, firmware, and custom silicon needed to make Bitkey and our ambitious mining initiatives a reality. Come build the future of money with us!
We are looking for an ASIC Validation Engineer to help bridge the gap between custom mining silicon design and real-world operation. In this highly hands-on, lab-focused role, you'll work closely with ASIC designers and system engineers to debug issues, build test infrastructure, and generate the data needed to enable rapid design iteration and system-level development.
Block takes a market-based approach to pay, and pay may vary depending on your location. Canada locations are categorized into one of two zones based on a cost of labor index for that geographic area. The successful candidate’s starting pay will be determined based on job-related skills, experience, qualifications, work location, and market conditions. These ranges may be modified in the future.
Application Guidelines
Candidates may submit up to 9 active applications within a 60-day period. Reapplications to the same role are accepted 90 days after a previous application has been reviewed.
Use of AI in Our Hiring Process
We may use automated AI tools to evaluate job applications for efficiency and consistency. These tools comply with local regulations, including bias audits, and we handle all personal data in accordance with state and local privacy laws.
Contact us here with hiring practice or data usage questions.
Every benefit we offer is designed with one goal: empowering you to do the best work of your career while building the life you want. Remote work, medical insurance, flexible time off, retirement savings plans, and modern family planning are just some of our offerings. Check out our other benefits at Block.
Block, Inc. (NYSE: XYZ) builds technology to increase access to the global economy. Each of our brands unlocks different aspects of the economy for more people. Square makes commerce and financial services accessible to sellers. Cash App is the easy way to spend, send, and store money. Afterpay is transforming the way customers manage their spending over time. TIDAL is a music platform that empowers artists to thrive as entrepreneurs. Bitkey is a simple self-custody wallet built for bitcoin. Proto is a suite of bitcoin mining products and services. Together, we’re helping build a financial system that is open to everyone.
Ready to apply?
Apply to Block
Forma.ai is a Series B startup that's revolutionizing how sales compensation is designed, managed and optimized. We handle billions in annual managed commissions for market leaders like Edmentum, Stryker, and Autodesk.
Our growth has been fuelled by our passion for fundamentally changing and shaping how companies use sales intelligence to drive business strategy.
We’re welcoming equally driven individuals who are excited about creating something big!
What You’ll Do
This is a hands-on Associate-level DevOps role focused on cloud automation, internal infrastructure, and maintaining an internal cloud-based application used across the company.
You’ll work closely with our engineering teams and senior DevOps staff to maintain, improve, and evolve our internal cloud infrastructure and automation systems, contributing to tooling that supports every other team at Forma.ai.
While your primary focus will be cloud automation and infrastructure, you will also serve as the primary IT presence in our Toronto office, supporting day-to-day IT operations as part of the role.
The role’s key responsibilities are listed below:
What we’re looking for:
Nice to have:
Additional Job Info:
Meaningful compensation. In addition to your base salary, you’ll join our employee stock ownership plan to further recognize your contributions to Forma.ai’s success.
Healthcare coverage. We have a full benefits package that includes medical, dental, vision, disability and life insurance, and a paid parental leave program.
Learning and development. Access the resources you want to help you grow in your role by utilizing our $750 yearly training stipend.
Growth. You’ll have a huge opportunity to build a career for yourself and gain the type of experience you’re looking for, whether that’s as an individual contributor or as a people leader.
Currently, Forma.ai does not use artificial intelligence as part of our recruitment process, including but not limited to the screening, filtering, and shortlisting of applicants.
Forma is a proud equal opportunity employer that is committed to creating a diverse and inclusive work environment. Every effort to accommodate candidates for accessibility will be made upon request. Information received related to accommodations will be addressed confidentially. We know that applying to a new role takes a lot of effort. You're encouraged to apply even if your experience doesn't precisely match the job description. There are many paths to a successful career and we’re looking forward to reading yours.
We thank all candidates for their interest however only qualified applicants will be shortlisted.
Ready to apply?
Apply to Forma.ai
Who we are
Samsara (NYSE: IOT) is the pioneer of the Connected Operations™ Cloud, which is a platform that enables organizations that depend on physical operations to harness Internet of Things (IoT) data to develop actionable insights and improve their operations. At Samsara, we are helping improve the safety, efficiency and sustainability of the physical operations that power our global economy. Representing more than 40% of global GDP, these industries are the infrastructure of our planet, including agriculture, construction, field services, transportation, and manufacturing — and we are excited to help digitally transform their operations at scale.
Working at Samsara means you’ll help define the future of physical operations and be on a team that’s shaping an exciting array of product solutions, including Video-Based Safety, Vehicle Telematics, Apps and Driver Workflows, and Equipment Monitoring. As part of a recently public company, you’ll have the autonomy and support to make an impact as we build for the long term.
About the role:
As an Enterprise Field Sales Engineer at Samsara, you’d be an integral part of a diverse team working to help modernize essential industries through the application of cutting-edge IoT solutions. Your work would directly contribute to a cleaner, more efficient, and productive supply chain by creating safer roadways, reducing fuel consumption and emissions, and providing a consolidated platform for connecting operations. Our daily customer engagements include conversations around how IoT can positively impact logistics management, workplace safety programs, fleet maintenance strategies, global asset management, and regulatory compliance. This means a successful SE at Samsara will develop a thorough understanding of the application of IoT hardware and sensors, hands-on hardware installation strategies, managing data collection over carrier networks, presenting a robust cloud infrastructure, and building third-party system integrations (via our open API) to ensure the best technical solution is presented to Samsara customers.
This is a remote position open to candidates within the greater Toronto area. This position requires domestic and international travel up to 50% of the time and proximity to a major airport is preferred.
You should apply if:
In this role, you will:
Minimum requirements for the role:
An ideal candidate also has:
The annual on-target earnings (OTE) range for full-time employees in this position is below. Please note that OTE pay may vary depending on factors including your city of residence, job-related knowledge, skills, and experience. Learn more about our total rewards and benefits below.
Total Rewards
At Samsara, we build for the people who keep the global economy moving. We want owners, not passengers, which is why our rewards are designed to fuel high-impact builders. Our compensation program delivers above-market total compensation through a combination of base salary, performance-based bonus/variable pay, and equity (for eligible roles) in a high-growth public company. We meaningfully differentiate pay for our top performers, who have the opportunity to earn above-market compensation that can outpace the broader market over time.
Beyond compensation, we provide the foundations that enable long-term success: a flexible, employee-led remote model, a professional development stipend, comprehensive health and parental leave plans, and more. If you’re ready to build for the long term and own the outcome, your journey starts here.
Flexible Working
At Samsara, we embrace a flexible working model that caters to the diverse needs of our teams. Our offices are open for those who prefer to work in-person and we also support remote work where it aligns with our operational requirements. For certain positions, being close to one of our offices or within a specific geographic area is important to facilitate collaboration, access to resources, or alignment with our service regions. In these cases, the job description will clearly indicate any working location requirements. Our goal is to ensure that all members of our team can contribute effectively, whether they are working on-site, in a hybrid model, or fully remotely. All offers of employment are contingent upon an individual’s ability to secure and maintain the legal right to work at the company and in the specified work location, if applicable.
Belonging at Samsara
At Samsara, we welcome everyone regardless of their background. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender, gender identity, sexual orientation, protected veteran status, disability, age, and other characteristics protected by law. We depend on the unique approaches of our team members to help us solve complex problems and want to ensure that Samsara is a place where people from all backgrounds can make an impact.
Accommodations
Samsara is an inclusive work environment, and we are committed to ensuring equal opportunity in employment for qualified persons with disabilities. Please email accessibleinterviewing@samsara.com or click here if you require any reasonable accommodations throughout the recruiting process.
Our Commitment to Authenticity
We use Tofu, a fraud detection tool, to validate the authenticity of applications and protect against identity fraud. This ensures we are connecting with real people and allows us to prioritize genuine candidates. Please see Samsara’s Candidate Privacy Notice for more information.
Fraudulent Employment Offers
Samsara is aware of scams involving fake job interviews and offers. Please know we do not charge fees to applicants at any stage of the hiring process. Official communication about your application will only come from emails ending in @samsara.com, @us-greenhouse-mail.io or @mail3.guide.co. For more information regarding fraudulent employment offers, please visit our blog post here.
Ready to apply?
Apply to Samsara
About Us
AfterShip, a Great Place to Work Certified company, is transforming the global eCommerce landscape. Founded in 2012, AfterShip is a post-purchase SaaS company on a mission to build the world’s leading automation platform for ecommerce merchants.
AfterShip unifies shipping & labels, order tracking, AI predictive delivery, and returns management into one system—giving merchants a single place to manage and automate everything that happens after checkout. By centralizing these workflows, AfterShip enables merchants to reduce customer support inquiries, deliver a more reliable and engaging customer experience, and unlock incremental revenue at every post-purchase touchpoint.
AfterShip integrates seamlessly with ecommerce platforms including Shopify and TikTok Shop, and connects with more than 1,200 carriers worldwide. Today, over 20,000 businesses—including Samsung, Gymshark, Vivino, Harry’s, Mous, and Rakuten—rely on AfterShip to turn every post-purchase moment into an opportunity to build trust, reduce costs, and drive repeat purchases.
Built for a global market from day one, AfterShip operates with an engineering-driven, internationally distributed team. The company employs more than 450 people across 8 offices, spanning North America, Europe, and Asia, and representing over 20 cities worldwide.
Your Mission:
As a Senior IT Specialist at AfterShip, you will be the foundational pillar for our IT operations across our North American (NA) and European (EU) hubs. You will act as the critical bridge between our growing global teams, ensuring our technology infrastructure scales seamlessly with our business. While our Global Hub architects global security frameworks and SaaS automation, you will own the execution out of the Toronto hub.
Your primary objective is to champion a world-class employee experience by providing high-touch support that eliminates time-zone friction. You aren't just resolving tickets; you are a trusted technical partner and hands-on builder who manages the entire hardware lifecycle and drives the local adoption of global IT initiatives.
This is a high-impact role reporting to the IT Manager, requiring close collaboration with cross-functional partners in People, Finance, and Global IT. You will serve as the primary guardian of our technology standards in the region, ensuring our teams in NA and EU have the tools they need to build the future of e-commerce.
This is a hybrid-flexible position, with a requirement to come to the Toronto office about 3 times per week.
This role requires collaboration with teams located in Asia and Europe, which may require working outside of regular business hours 1-2 times per week.
What You'll Do:
At AfterShip, our IT runs on a Hub-Spoke model: our global IT Hub owns global identity (Google Workspace), SaaS automation, security frameworks, and MDM policies, while our regional specialists own local execution and employee experience. As the Senior IT Specialist for NA/EU, you’ll be the face of IT for 100+ employees across the region, operating from our Toronto hub.
Who We're Looking For:
At AfterShip, we know great talent doesn’t always fit every requirement. If you’re passionate about our mission and believe you can make an impact, we encourage you to apply.
Why You Should Join Us:
Perks:
Salary range for this role: CAD $97,000 - $116,000
We are an equal opportunity employer and provide accommodations upon request throughout the recruitment process, in accordance with local legislation. Please let us know if you require any support, and we’ll work with you to meet your needs.
We believe in hiring right over hiring fast. While timelines may vary, we’re looking to fill this role as soon as possible.
Our hiring process uses AI to help with initial resume screening and to support interview note-taking. These tools help our team stay organized and fair, but all hiring decisions are made by people.
This job posting is for a new position
Ready to apply?
Apply to AfterShip
Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
We are hiring a talented software engineer to help us build the next generation of photonic AI processors and interconnects. In this role, you will be responsible for developing and extending the device software and firmware stack for Photonic Compute and Photonic Interconnect products. You will collaborate with other software teams and hardware systems teams to develop security, telemetry, virtualization, and remote administration functionality.
We offer competitive compensation. The base salary range for this role is determined based on location, experience, educational background, and market data.
Benefits eligibility may vary depending on your employment status and location. Lightmatter recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, sex, disability, age, veteran status, and other protected status as required by applicable law.
Export Control
Candidates must be able to comply with the federally mandated requirements of U.S. export control laws.
Ready to apply?
Apply to Lightmatter
Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
We're looking for a Senior Data Systems Engineer who bridges the gap between hardware domain expertise and modern data infrastructure. You won't just build pipelines — you'll work directly with foundries, OSATs, test engineers, and validation teams to understand what the data means, define how it should flow, and build the platform that makes AI-driven decisions possible across 13+ active hardware programs.
This is not a typical data engineering role. The right person has a background in semiconductor manufacturing, photonics, hardware test/validation, or a related hardware discipline — and a strong pull toward data systems, process optimization, and automation. You understand that the hardest part of a data platform isn't the code; it's knowing which data matters and why.
Program Data Onboarding & Vendor Engagement
Data Platform Engineering
Performance Metrics & Reporting
Platform Improvement & Tooling
Required
Preferred
We offer competitive compensation. The base salary range for this role is determined based on location, experience, educational background, and market data.
Benefits eligibility may vary depending on your employment status and location. Lightmatter recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, sex, disability, age, veteran status, and other protected status as required by applicable law.
Export Control
Candidates must be able to comply with the federally mandated requirements of U.S. export control laws.
Ready to apply?
Apply to Lightmatter
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras, to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
We’re hiring a staff level full-stack Technical Lead (L6/L7) to own and scale critical parts of the Cerebras Developer Console — the primary interface developers and enterprises use to run and manage inference workloads.
This is a deeply technical, end-to-end role. You’ll build high-quality frontend systems (Next.js, TypeScript) and design backend services (GraphQL, Postgres, Redis) that power usage tracking, billing, quotas, and observability. The systems you build will operate at high scale, require careful data modeling, and balance real-time and batch processing. You’ll be expected to make strong architectural decisions and move quickly from idea to production.
You’ll join an existing, high-velocity team and take ownership of major platform areas such as billing, request logs, and metrics. This is not a “ticket execution” role — you’ll define problems, drive technical direction, and lead execution across the stack. The work directly impacts customer experience and revenue, and the expectations are correspondingly high.
As a Technical Lead, you’ll set the bar for engineering quality and execution. You’ll mentor engineers, drive design reviews, and push the team toward simple, scalable solutions. We’re looking for someone who thrives in fast-moving environments, operates with urgency, and is comfortable navigating ambiguity while shipping high-quality systems.
Cerebras is redefining the speed and scale of AI inference. The systems we build power real-world production workloads, not demos.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
Ready to apply?
Apply to Cerebras Systems
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras, to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
About the Role
We are seeking a versatile and experienced engineer to join our SOTA Training Platform team. This team is responsible for rapidly bringing up state-of-the-art open-source models (like LLaMA, Qwen, etc.) or customer-provided proprietary models on our Cerebras CSX systems. Success in this role requires a system-minded generalist who thrives in fast-paced bringup environments and is comfortable working across the entire Cerebras software stack.
Your work will play a critical role in achieving unprecedented levels of performance, efficiency, and scalability for AI applications.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
Ready to apply?
Apply to Cerebras Systems
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras, to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
We are building the next generation of large-scale AI systems that power training and inference workloads at unprecedented scale and efficiency.
You will design and develop high-performance distributed software that orchestrates massive compute and data pipelines across heterogeneous clusters. Your work will push the limits of concurrency, throughput, and scalability—enabling efficient execution of models at massive scale. This role sits at the intersection of systems engineering and machine learning performance, demanding both architectural depth and low-level implementation skills. You will help shape how models are executed and optimized end-to-end, from data ingestion to distributed execution, across cutting-edge hardware platforms.
We’re hiring for runtime roles across both Training and Inference.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
Ready to apply?
Apply to Cerebras Systems
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras, to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
The Role:
In this exciting role, you will be responsible for the bring-up and optimization of Cerebras’s Wafer Scale Engine (WSE). The ideal candidate will have experience delivering end-to-end solutions, working closely with teams across chip design, system performance, software development, and productization.
Responsibilities:
Skills & Qualifications:
Preferred:
Location:
Sunnyvale, California
Bangalore, India
Toronto, Canada
The base salary range for this position is $175,000 to $275,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
Ready to apply?
Apply to Cerebras Systems
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
About the Role
We are building a high-performance SRE function to support one of the world’s fastest-growing AI inference services, powered by the Wafer-Scale Engine (WSE), helping deliver infrastructure for frontier-class models from leading model builders such as OpenAI.
This role offers immediate ownership of real production systems at a growing scale, direct mentorship from seasoned engineers, and close collaboration with incoming Staff SREs who will focus on long-term automation. After ~1 month of shared hands-on operations with the Staff engineers, you'll primarily operate the current setup, bring up new capacity in high-stakes environments, and help bring new continuous-delivery pipelines into production use.
If you thrive in high-ownership SRE roles at scale and want to help shape a team from the ground up in cutting-edge AI Inference infrastructure, this is your chance.
This role does not require 24/7 on-call rotations.
Key Responsibilities
Required Experience & Skills
Nice-to-Have
Location
We're looking for a deeply technical, hands-on software engineer to join our on-field Kernel Reliability team. You'll help tackle a critical challenge: improving the reliability of our advanced compute clusters and the underlying inference, training, and internal production services. In this role, you'll work close to the code and design solutions that will scale with our rapidly growing production systems and software service offerings. If you have strong fundamentals in systems, debugging, and failure analysis, and you enjoy building tools and solving hard reliability problems, we want to hear from you. New college graduates are welcome.
Responsibilities
Skills & Qualifications
Nice to have:
About The Role
The Inference ML Engineering team at Cerebras Systems is dedicated to enabling our fast generative inference solution through simple APIs powered by a distributed runtime that runs on large clusters of our own hardware. Our mission is to empower enterprises, developers, and researchers to unlock the full potential of our platform, leveraging its performance, scalability, and flexibility. The team works closely with cross-functional groups, including compiler developers, cluster orchestrators, ML scientists, cloud architects, and product teams, to deliver high-impact solutions that redefine the boundaries of ML performance and usability.
As a Senior Software Engineer on the Inference ML Engineering team, you will play a key role in designing and implementing APIs, ML features, and tools that enable running state-of-the-art generative AI models on our custom hardware. You will architect solutions that enable seamless model translation and execution, ensuring high throughput and low latency, while maintaining ease of use. Your responsibilities will include leading technical initiatives, collaborating with other engineering teams to enhance the developer experience, enabling key ML features at scale, maintaining our speed advantage, achieving high throughput, and supporting a wide range of ML workloads. This role offers an opportunity to shape the evolution of our ML ecosystem while tackling complex technical challenges at the intersection of machine learning, software, and hardware.
Responsibilities
Skills and Qualifications
About the Role
We are building a high-performance SRE function to support one of the world’s fastest-growing AI inference services, powered by the Wafer-Scale Engine (WSE). This team will help deliver world-class, ultra-reliable inference infrastructure for leading model builders such as OpenAI and other frontier labs.
As a Staff SRE, you will lead the engineering effort to eliminate toil at scale by driving the implementation of self-service delivery pipelines, shared observability, and common tooling. This role starts with ~1 month of hands-on operational immersion to gain deep familiarity with our current stack, production pain points, and high-stakes workflows.
From there, your primary focus shifts to architecting and delivering the "tomorrow" layer: declarative, GitOps-driven CD for model releases, capacity provisioning, and cluster upgrades. Success over the first year in this role will be defined by enabling core teams, product managers, external customers, and cluster stakeholders to operate in a fully self-service model with strong reliability guarantees.
You will partner with our early-career SRE sub-team, who own day-to-day operations. This will allow you to deeply understand their pain points, automate their toil, and mentor them as platform engineers.
You will collaborate with the tech leads and the leadership team across core, cluster, cloud, and product stakeholders. This work will shift reliability from an ops-only burden to a shared engineering discipline that underpins frontier AI inference at scale.
If you are a proven Staff+ engineer who enjoys turning complexity into elegant reliability at scale, this is your chance to lead this transformation from the front.
This role does not require 24/7 on-call rotations.
Key Responsibilities
Required Experience & Skills
Nice-to-Haves
Location
Location: Sunnyvale
We're hiring a Staff Engineer to own major areas of the architecture of our Inference Cloud Platform. This team owns the cloud layer behind our Inference Service, with responsibility for availability, latency, reliability, and global scale.
This is a hands-on IC role for an engineer who wants to work on the hardest distributed-systems problems in the stack: multi-region traffic architecture, graceful degradation under bursty AI workloads, performance at high QPS, and the operating model for a platform that has to stay fast and available under load. You'll write code, lead key architectural decisions in your domain, debug production issues, and help shape technical direction across adjacent teams.
If you're interested in building the next-generation architecture of a globally distributed inference platform, we'd like to talk.
Responsibilities
Skills & Qualifications
Cerebras builds wafer-scale AI processors—single chips delivering tens of PB/s of memory bandwidth and a dataflow architecture that accelerates at a granularity no multi-device system can match. The Advanced Technology Group (ATG) is Cerebras’ pathfinding organization. We work ahead of product to explore new architectures, demonstrate breakthrough performance on scientific and AI workloads, and shape the technical roadmap for future Cerebras hardware and software. Our work regularly appears at top-tier venues (Supercomputing, SIAM, IEEE, and NeurIPS) and directly influences the design of next-generation wafer-scale systems.
We are seeking R&D Engineers to join Cerebras' Advanced Technology Group. You will design and implement workloads that establish new performance benchmarks on wafer-scale hardware, leveraging architectural features that no traditional platform offers. The scope ranges from large-scale scientific simulations to emerging AI/ML models, and the work sits at the intersection of algorithm design, compiler co-optimization, and hardware architecture. You will collaborate closely with Cerebras' ASIC, compiler, kernel, and AI teams, as well as external partners at universities and national laboratories.
We are hiring across several focus areas. Exceptional depth in one or more of the following is a strong signal:
We are hiring for multiple positions across experience levels. If this work resonates, we encourage you to apply.