Cerebras Systems builds the world's largest AI chip, roughly 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
The Role
As part of the Embedded Software team, you will help build the critical software foundation that powers the Cerebras Wafer Scale Engine (WSE), the world's largest AI processor. Our team owns a diverse range of embedded and system-level components that enable the WSE to operate reliably at scale, including microcontroller firmware, wafer-level monitoring logic, system administration services, and the Linux platform and BSP layers that keep the entire system running smoothly.
This role exists at the intersection of embedded systems, platform engineering, and distributed system enablement. As our technology and deployments continue to scale, we are expanding the team with versatile engineers eager to work across multiple layers of the software stack. You will help build administrative services that connect the WSE’s system software to cluster-level orchestration, collaborate closely with hardware and ASIC teams, and contribute to the robustness, visibility, and operability of our next-generation AI systems.
Responsibilities
Skills & Qualifications
Minimum Qualifications
Preferred Qualifications
The base salary range for this position is $175,000 to $275,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
Ready to apply?
Apply to Cerebras Systems
Job Summary
We are seeking an experienced System Signal Integrity and Power Integrity Engineer to solve complex, high‑impact integrity challenges in next‑generation AI compute systems. This role is focused on deep technical analysis and hands‑on problem solving across high‑speed interfaces, power delivery networks, rigid and flex interconnects, and advanced packaging.
The ideal candidate is a technical expert engaged to resolve difficult SI/PI problems spanning silicon, package, PCB, flex, and connector domains.
Key Responsibilities
Minimum Qualifications
Required Experience and Skills
Additional Information
Experience with large‑scale AI or high‑performance compute systems is preferred.
The base salary range for this position is $225,000 to $275,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
The Role
Cerebras is seeking a Commissioning Lead to own the end-to-end commissioning and readiness of AI data center infrastructure across colocation environments. This role is responsible for ensuring all systems are tested, validated, and fully operational prior to handover, with zero tolerance for failures in mission-critical environments. You will operate with high ownership in a fast-paced startup environment, driving commissioning execution across multiple concurrent sites and ensuring rapid, reliable capacity bring-up.
Responsibilities
• Lead commissioning strategy and execution across all colo data center deployments.
• Own full lifecycle commissioning from Level 1–5 testing through integrated systems testing (IST).
• Develop and enforce commissioning plans, scripts, and procedures.
• Coordinate with construction, engineering, vendors, and colo providers to ensure readiness.
• Oversee testing of electrical systems (switchgear, UPS, generators), mechanical systems (cooling), and IT infrastructure.
• Ensure all systems meet design intent, performance requirements, and reliability standards.
• Drive issue identification, resolution, and closure prior to handover.
• Manage commissioning agents, vendors, and third-party testing teams.
• Establish standardized commissioning processes for repeatable deployments.
• Track and report commissioning progress, risks, and readiness to executive leadership.
• Ensure all documentation, test results, and turnover packages are complete and accurate.
• Validate base building readiness from colo providers prior to fit-out energization.
• Coordinate integration between landlord systems and tenant infrastructure.
• Ensure alignment on power availability, redundancy, and cooling capacity.
• Resolve interface issues between colo infrastructure and Cerebras systems.
• Hold providers accountable for performance during testing and energization.
Skills & Qualifications
• 10–15+ years of experience in commissioning of mission-critical facilities.
• Deep expertise in data center electrical and mechanical systems.
• Experience leading Level 1–5 commissioning for large-scale projects.
• Strong understanding of high-density compute environments.
• Experience working in colo environments and coordinating landlord/tenant interfaces.
• Proven ability to manage multiple sites and fast-track deployments.
• Strong troubleshooting and problem-solving skills.
• Ability to operate in a fast-paced, high-growth startup environment.
• Excellent communication and stakeholder management skills.
Location: Remote, USA
The base salary range for this position is $220,000 to $260,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
The Role
Cerebras is seeking a Network & Fiber Engineer to lead the design, deployment, and validation of high-performance network and fiber infrastructure across colocation data centers. This role is critical to enabling AI-scale compute clusters by ensuring low-latency, high-throughput connectivity between racks, data halls, and external networks. You will operate with high ownership in a fast-paced startup environment, working closely with construction, commissioning, and IT teams to bring network infrastructure online quickly and reliably.
Responsibilities
• Own end-to-end fiber and network infrastructure deployment across colo data center sites.
• Design fiber pathways, structured cabling systems, and high-density fiber distribution architectures.
• Oversee installation of fiber (SMF/MMF), patch panels, trays, and cable management systems.
• Coordinate with construction and commissioning teams to align network readiness with overall site delivery.
• Validate fiber installations including testing (OTDR, insertion loss, continuity).
• Support deployment of network hardware including switches, routers, and interconnects.
• Ensure low-latency, high-bandwidth connectivity across racks and clusters.
• Develop and maintain standards for fiber design, labeling, and documentation.
• Troubleshoot network and fiber issues during deployment and post-handover.
• Manage vendors, installers, and low-voltage contractors.
• Track progress, risks, and readiness across multiple sites.
• Coordinate with colo providers for meet-me room (MMR) connectivity and cross-connects.
• Ensure alignment on demarcation points and handoff standards.
• Manage external connectivity including ISP, dark fiber, and backbone integration.
• Validate provider fiber infrastructure and resolve interface issues.
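As a rough illustration of the insertion-loss validation mentioned above, here is a minimal Python sketch of a worst-case link loss budget. The per-unit loss figures (0.4 dB/km for singlemode fiber, 0.75 dB per mated connector pair, 0.3 dB per fusion splice) are typical published maxima in the TIA-568 style, and the function name is hypothetical, not part of any Cerebras tooling.

```python
def loss_budget_db(length_km, n_connector_pairs, n_splices,
                   fiber_db_per_km=0.4,   # SMF at 1310 nm, typical max
                   connector_db=0.75,     # per mated connector pair, typical max
                   splice_db=0.3):        # per fusion splice, typical max
    """Worst-case link insertion loss: fiber attenuation + connectors + splices."""
    return (length_km * fiber_db_per_km
            + n_connector_pairs * connector_db
            + n_splices * splice_db)

# A 500 m data-hall run with two mated connector pairs and one splice:
budget = loss_budget_db(0.5, n_connector_pairs=2, n_splices=1)
print(f"{budget:.2f} dB")  # prints "2.00 dB"
```

A measured insertion loss above the computed budget flags the link for rework before handover.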
Skills & Qualifications
• 7–12+ years of experience in network and/or fiber engineering in data centers or telecom environments.
• Strong experience with fiber design, installation, and testing (OTDR, power meter).
• Familiarity with high-density fiber systems (MPO/MTP).
• Experience deploying and troubleshooting network infrastructure.
• Understanding of data center architectures and high-performance computing environments.
• Experience working in colo environments is highly preferred.
• Ability to manage multiple concurrent deployments.
• Strong problem-solving and troubleshooting skills.
• Excellent communication and coordination abilities.
Location: Remote, USA
The base salary range for this position is $250,000 to $290,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
Cerebras is seeking a Director of Procurement to lead sourcing, contracting, and supply chain execution for data center infrastructure and critical equipment. This role is responsible for securing capacity, negotiating commercial terms, and ensuring timely delivery of long-lead equipment to support rapid AI infrastructure deployment.
You will operate with high ownership in a fast-paced startup environment, driving both strategic supplier partnerships and tactical execution across multiple concurrent builds.
Location: Remote, USA
The base salary range for this position is $280,000 to $350,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
Responsibilities
Skills & Qualifications
The base salary range for this position is $150,000 to $260,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
Location: Sunnyvale, California
About the Role
As a Full Stack Engineer focusing on Cerebras’ manufacturing test platform, you will design, build, and maintain a comprehensive test software solution for all stages of manufacturing – from individual components to complete Cerebras systems. You will collaborate cross-functionally with hardware design, engineering, operations, and data analytics teams to develop user interfaces and data processing frameworks that directly impact manufacturing efficiency, quality, and scalability.
Responsibilities
Skills and Qualifications
Required
Preferred
The base salary range for this position is $175,000 to $220,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
Role Summary
We are seeking a hands-on DVT Technical Lead (Individual Contributor) to own and drive the Design Validation Test (DVT) process end-to-end across complex electrical engineering boards and full systems. You will define validation strategy, build test plans and infrastructure, lead deep debug and root-cause analysis (RCA), and drive closure through design changes and re-test. The domain spans demanding power delivery, high-speed I/O, and electro-mechanical systems with thermal, optics, and high-power constraints. People management is not required; mentoring is a plus.
What You’ll Own
1) Board / Subassembly DVT (EE-heavy)
2) System-level DVT (integration + electro-mechanical)
Key Responsibilities
DVT Strategy, Plans, and Coverage
Hands-on Validation, Debug, and Closure
DVT Infrastructure, Automation, and Data
Cross-functional Technical Leadership
Minimum Qualifications
Preferred / Nice-to-Have
Success Measures
Location: Sunnyvale, California
The base salary range for this position is $175,000 to $275,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
Role Overview
As a Software Engineer specializing in automation, you will play a key role in designing and delivering software solutions that improve operational efficiency and streamline business processes. You will develop automation frameworks, tools, and applications that reduce manual effort, enhance system reliability, and support scalable growth across the organization.
In this role, you will collaborate closely with cross-functional teams—including engineers, analysts, and business stakeholders—to understand workflow challenges and identify opportunities for automation. Your work may involve building process automation systems, developing real-time monitoring and alerting capabilities, integrating disparate systems, and creating data-driven solutions to optimize performance.
Your contributions will help eliminate bottlenecks, reduce operational costs, and enable teams to focus on higher value activities. Ideal candidates have strong software engineering fundamentals, experience with automation tools and scripting languages, and a passion for building efficient, reliable, and elegant solutions in a dynamic environment.
Key Responsibilities
Skills and Qualifications
Assets:
The base salary range for this position is $190,000 to $230,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
Advanced Packaging Technologist & Lead
We are seeking an accomplished Advanced Packaging Technologist & Lead to drive the development, integration, and deployment of next-generation semiconductor packaging technologies. This role is critical in architecting and implementing advanced, high-performance, high-density packaging solutions that support cutting-edge compute, AI, and heterogeneous integration platforms.
Key Responsibilities
Advanced Packaging Architecture & Development
Assembly, Materials, & Interconnect Technologies
Process Technology & Reliability
Qualifications
The base salary range for this position is $175,000 to $275,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
As a Senior Mechanical Engineer at Cerebras, you will lead the design of mechanical systems for our next-generation wafer-scale engine. Your responsibilities will include ensuring compliance with specifications, validating manufacturability, and delivering a high-quality product in a fast-paced environment—tackling some of the most challenging problems in the rapidly evolving AI space.
In this role, you will develop mechanical infrastructure for Cerebras’ custom hardware system.
The base salary range for this position is $190,000 to $230,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.