Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to run large-scale ML applications effortlessly, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute, transforming key workloads with ultra-high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
Our customers span leading AI Native companies, Fortune 500 Enterprises, Sovereign AI and Federal programs, and leading research institutions. Our mission is to deliver the platform that unlocks the next generation of AI applications, providing the fundamentally new capability to leverage the most intelligent models at real-time serving speeds.
Why Cerebras?
Here at Cerebras, we have built the world’s first wafer-scale compute platform and software stack, purpose-designed to accelerate generative AI 10-20x beyond what is possible on legacy processors today. AI developers today are limited by constant tradeoffs between model quality, speed, and cost, and Cerebras’ mission is to remove these limitations to unlock AI creativity and potential.
As a founding member of the Strategic Verticals product team at Cerebras, you are the tip of the spear for our company. You’ll embed with our most strategic customers, from AI-native startups shipping 0-to-1 products to Fortune 500 enterprises transforming their industries, and help translate their ambitions into blazing-fast, production-ready AI solutions.
Think of yourself as part product leader, part technical expert, and part GTM strategist:
Successful candidates will be passionate about creative problem solving and idea generation, learning and embedding into new domains, building relationships, and delighting customers.
You’ll have the opportunity to learn about and enable some of the most impactful AI products in the world, with industry-leading organizations across each vertical. You will get to work closely with a tight-knit product team, in a fast-moving but supportive environment. Your scope and career here will be driven by your passion, ability, and impact – not by your seniority or prior experience.
Key Responsibilities
You will:
Preferred requirements
You’ll thrive in this role if you:
Location
Cerebras powers the world's fastest AI inference. As the Product Manager for AI Models, you'll lead the strategic model portfolio that defines our product — deciding which models ship, how they perform, and how the world discovers them.
You'll partner directly with leading AI labs, drive launches that shape the industry, and ensure every model on our platform delivers exceptional quality at unprecedented speed.
What we need to see:
Preferred requirements
How to stand out:
Location
As a Senior Hardware Technical Program Manager at Cerebras, you will spearhead operational excellence for our high-performance AI compute systems and data centers. You will own the end-to-end hardware schedule for design and engineering improvements, report on engineering issues, and define mitigation strategies, and you will drive the implementation and software integration of hardware changes. You will collaborate closely with electrical and system engineering, manufacturing, supply chain, and system software teams to drive the end-to-end schedule of improvements to our wafer-scale supercomputers. Your role will be critical in ensuring that product strategy and engineering constraints translate seamlessly into the creation and execution of massive supercomputer deployments in the US and abroad.
The base salary range for this position is $180,000 to $230,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.