Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.
Thanks to this groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence through additional agentic computation.
The Role
Cerebras is seeking a Commissioning Lead to own the end-to-end commissioning and readiness of AI data center infrastructure across colocation environments. This role is responsible for ensuring all systems are tested, validated, and fully operational prior to handover, with zero tolerance for failures in mission-critical environments. You will operate with high ownership in a fast-paced startup environment, driving commissioning execution across multiple concurrent sites and ensuring rapid, reliable capacity bring-up.
Responsibilities
• Lead commissioning strategy and execution across all colo data center deployments.
• Own full-lifecycle commissioning, from Level 1 factory testing through Level 5 integrated systems testing (IST).
• Develop and enforce commissioning plans, scripts, and procedures.
• Coordinate with construction, engineering, vendors, and colo providers to ensure readiness.
• Oversee testing of electrical systems (switchgear, UPS, generators), mechanical systems (cooling), and IT infrastructure.
• Ensure all systems meet design intent, performance requirements, and reliability standards.
• Drive issue identification, resolution, and closure prior to handover.
• Manage commissioning agents, vendors, and third-party testing teams.
• Establish standardized commissioning processes for repeatable deployments.
• Track and report commissioning progress, risks, and readiness to executive leadership.
• Ensure all documentation, test results, and turnover packages are complete and accurate.
• Validate base building readiness from colo providers prior to fit-out energization.
• Coordinate integration between landlord systems and tenant infrastructure.
• Ensure alignment on power availability, redundancy, and cooling capacity.
• Resolve interface issues between colo infrastructure and Cerebras systems.
• Hold providers accountable for performance during testing and energization.
Skills & Qualifications
• 10–15+ years of experience in commissioning of mission-critical facilities.
• Deep expertise in data center electrical and mechanical systems.
• Experience leading Level 1–5 commissioning for large-scale projects.
• Strong understanding of high-density compute environments.
• Experience working in colo environments and coordinating landlord/tenant interfaces.
• Proven ability to manage multiple sites and fast-track deployments.
• Strong troubleshooting and problem-solving skills.
• Ability to operate in a fast-paced, high-growth startup environment.
• Excellent communication and stakeholder management skills.
Location: Remote, USA
The base salary range for this position is $220,000 to $260,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
Ready to apply?
Apply to Cerebras Systems
The Role
Cerebras is seeking a Network & Fiber Engineer to lead the design, deployment, and validation of high-performance network and fiber infrastructure across colocation data centers. This role is critical to enabling AI-scale compute clusters by ensuring low-latency, high-throughput connectivity between racks, data halls, and external networks. You will operate with high ownership in a fast-paced startup environment, working closely with construction, commissioning, and IT teams to bring network infrastructure online quickly and reliably.
Responsibilities
• Own end-to-end fiber and network infrastructure deployment across colo data center sites.
• Design fiber pathways, structured cabling systems, and high-density fiber distribution architectures.
• Oversee installation of fiber (SMF/MMF), patch panels, trays, and cable management systems.
• Coordinate with construction and commissioning teams to align network readiness with overall site delivery.
• Validate fiber installations including testing (OTDR, insertion loss, continuity).
• Support deployment of network hardware including switches, routers, and interconnects.
• Ensure low-latency, high-bandwidth connectivity across racks and clusters.
• Develop and maintain standards for fiber design, labeling, and documentation.
• Troubleshoot network and fiber issues during deployment and post-handover.
• Manage vendors, installers, and low-voltage contractors.
• Track progress, risks, and readiness across multiple sites.
• Coordinate with colo providers for meet-me room (MMR) connectivity and cross-connects.
• Ensure alignment on demarcation points and handoff standards.
• Manage external connectivity including ISP, dark fiber, and backbone integration.
• Validate provider fiber infrastructure and resolve interface issues.
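As context for the insertion-loss validation above: a fiber link passes or fails against a loss budget derived from its length and component count. The sketch below uses illustrative values only — the attenuation, connector, and splice figures are typical industry assumptions (not Cerebras specifications), and the function name is hypothetical:

```python
# Rough single-mode fiber link loss budget (illustrative values).
# A measured insertion loss above the budget indicates a failing link.

def link_loss_budget_db(length_km: float, connectors: int, splices: int,
                        atten_db_per_km: float = 0.35,   # typical SMF @ 1310 nm
                        connector_loss_db: float = 0.75, # typical max per mated pair
                        splice_loss_db: float = 0.3) -> float:
    """Maximum acceptable insertion loss for the link, in dB."""
    return (length_km * atten_db_per_km
            + connectors * connector_loss_db
            + splices * splice_loss_db)

# 500 m run, two mated connector pairs, one splice:
budget = link_loss_budget_db(length_km=0.5, connectors=2, splices=1)
# 0.5*0.35 + 2*0.75 + 1*0.3 = 1.975 dB
measured = 1.4  # dB, from OTDR / light-source-and-power-meter testing
print("PASS" if measured <= budget else "FAIL")
```

An OTDR trace additionally localizes each loss event along the fiber, which is why both OTDR and end-to-end insertion-loss tests are typically required at turnover.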
Skills & Qualifications
• 7–12+ years of experience in network and/or fiber engineering in data centers or telecom environments.
• Strong experience with fiber design, installation, and testing (OTDR, power meter).
• Familiarity with high-density fiber systems (MPO/MTP).
• Experience deploying and troubleshooting network infrastructure.
• Understanding of data center architectures and high-performance computing environments.
• Experience working in colo environments is highly preferred.
• Ability to manage multiple concurrent deployments.
• Strong problem-solving and troubleshooting skills.
• Excellent communication and coordination abilities.
Location: Remote, USA
The base salary range for this position is $250,000 to $290,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
Cerebras is seeking a Director of Procurement to lead sourcing, contracting, and supply chain execution for data center infrastructure and critical equipment. This role is responsible for securing capacity, negotiating commercial terms, and ensuring timely delivery of long-lead equipment to support rapid AI infrastructure deployment.
You will operate with high ownership in a fast-paced startup environment, driving both strategic supplier partnerships and tactical execution across multiple concurrent builds.
Location: Remote, USA
The base salary range for this position is $280,000 to $350,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
In late 2024, we launched Cerebras Inference, the fastest Generative AI inference service in the world, over 10 times faster than GPU-based hyperscale cloud inference. Since launch, we’ve scaled to meet the surging demand from AI labs, enterprises, and a thriving developer community.
In October 2025, we announced our Series G funding round, raising $1.1 billion to accelerate the expansion of our products and services to meet global AI demand.
About the team
The Cerebras Inference team’s mission is to deliver the world’s most performant, secure, and reliable enterprise-grade AI service. We build and operate large-scale distributed systems that power AI inference at unprecedented speed and efficiency. Join us to help scale inference and accelerate AI.
About the role
We’re looking for a hands-on Reliability Tech Lead (IC) to own the mission of making Cerebras Inference the most reliable AI service in the world. You will drive reliability strategy and execution across our inference stack, from client SDKs and public-cloud multi-region deployments to wafer-scale systems in specialized data centers.
In this role, you will define SLOs and incident-response frameworks, design and implement reliability mechanisms at scale, and partner across hundreds of engineers to ensure our service meets world-class reliability standards.
If you are passionate about building and operating massive-scale, low-latency, high-reliability distributed systems, we want to hear from you.
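For concreteness on what "define SLOs" entails: an availability SLO translates into an error budget that incident response burns down over a measurement window. A minimal sketch — the 99.9% target and request counts are illustrative assumptions, not the service's actual SLOs, and the function name is hypothetical:

```python
# Error budget for an availability SLO over a measurement window (illustrative).

def error_budget(slo: float, total_requests: int, failed_requests: int):
    """Return (allowed_failures, fraction_of_budget_consumed)."""
    allowed = (1.0 - slo) * total_requests
    consumed = failed_requests / allowed if allowed else float("inf")
    return allowed, consumed

# A 99.9% SLO over 1B requests permits ~1M failed requests in the window.
allowed, consumed = error_budget(slo=0.999,
                                 total_requests=1_000_000_000,
                                 failed_requests=250_000)
print(f"Budget: {allowed:,.0f} failures; consumed: {consumed:.0%}")
```

Burn-rate alerting follows directly from this arithmetic: paging when the budget is being consumed much faster than the window allows, rather than on any single failure.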
The Cybersecurity GRC Manager is accountable for maturing and scaling engineering-driven governance, risk, and compliance programs that support the organization's security, privacy, and regulatory compliance posture. The ideal candidate will bring a unique blend of deep technical security acumen and GRC expertise, enabling the creation of GRC workflows that are measurable, automated, and resilient. This is a strategic, cross-functional, and customer-facing role reporting to the Director of Governance, Risk, & Compliance.
A successful candidate will have a comprehensive understanding of cybersecurity and privacy industry frameworks (e.g., NIST, ISO, SOC 2, CCPA, GDPR, HIPAA). They will be responsible for transforming governance, risk, and compliance practices into proactive, testable capabilities using automation, continuous auditing, and AI-driven solutions.
Proficiency with AI tools (LLMs, prompt engineering, generative-AI workflows) is a core requirement: you'll use AI to streamline GRC workflow creation and implementation, evidence generation, and security risk mitigation. Experience designing and implementing autonomous "agentic AI" solutions is preferred.
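The "proactive, testable" GRC posture described above is often implemented as controls-as-code: each control becomes an automated check that emits timestamped evidence on every run. A minimal sketch — the control ID, policy fields, and function name here are hypothetical illustrations, not any specific framework's schema:

```python
# Continuous-control check as code (illustrative): verify a policy document
# satisfies a control and emit machine-readable audit evidence.
import datetime
import json

CONTROL_ID = "AC-MFA-01"  # hypothetical internal control label

def check_mfa_required(policy: dict) -> dict:
    """Evaluate the control and return an evidence record."""
    passed = policy.get("mfa_required") is True
    return {
        "control": CONTROL_ID,
        "passed": passed,
        "checked_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

evidence = check_mfa_required({"mfa_required": True, "session_ttl_minutes": 30})
print(json.dumps(evidence, indent=2))
```

Run on a schedule, checks like this replace point-in-time audit sampling with continuous evidence collection, which is what makes the compliance posture testable rather than attested.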