All active ASIC roles based in Bengaluru.
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
This role focuses on Design for Test (DFT) for high-performance designs going into industry-leading AI/ML architectures. The person in this role will be involved in all implementation aspects, from RTL to tapeout, for various IPs on the chip. High-level challenges include reducing test cost while attaining high coverage, and facilitating debug and yield learning while minimizing design intrusion. The work is done collaboratively with a group of highly experienced engineers across various domains of the ASIC.
This role is hybrid, based out of Bangalore.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Responsibilities:
Experience & Qualifications:
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent-residency status or your ability to obtain prior license approval from the U.S. Commerce Department or the applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
Ready to apply?
Apply to Tenstorrent
Principal Embedded SW/FW Engineer (Bringup) - Bengaluru, multiple vacancies
Job Summary
We have an exciting opportunity to be part of a collaborative, cross-functional development team validating cutting-edge, high-performance AI chips and platforms.
You will play a key role in supporting new product introductions and post-silicon validation.
Working within the Post-Silicon Validation team, you will be involved with bringing first silicon to life, functionally validating it, and working closely with many other teams to help it become a fully characterised, working product, reporting project status and progress to program management on a regular basis. You will have the opportunity to provide technical guidance to other engineering team members. In this role, you can leverage your experience and industry knowledge to architect and drive implementation of continuous improvements to test infrastructure and processes.
The Team
The Post-Silicon Bringup team sits within the Architecture and Validation team. We are responsible for the bringup and validation of new silicon when it returns from manufacture, enabling and supporting the production SW and FW teams as they bring up their software, and supporting the Silicon Characterisation team.
Responsibilities and Duties
Candidate Profile
Essential:
Desirable:
Ready to apply?
Apply to Graphcore
Silicon Verification Engineer
Multiple roles across different levels
Graphcore is a globally recognised leader in Artificial Intelligence computing systems. The company designs advanced semiconductors and data centre hardware that provide the specialised processing power needed to drive AI innovation, while delivering the efficiency required to support its broader adoption.
As part of the SoftBank Group, Graphcore is a member of an elite family of companies responsible for some of the world’s most transformative technologies. We are opening a new AI Engineering Campus in Bengaluru which will play a central role in Graphcore's work building the future of AI computing.
The verification team sits within the Silicon design team and is responsible for ensuring that the RTL created by the logical design team and used by the physical design team matches the architecture specification for Graphcore silicon. The silicon verification engineer is responsible for verification activities within Graphcore, helping the team meet the company's objectives for quality silicon delivery.
Responsibilities
Essential skills:
• Verification experience in relevant industry
• Proven leadership and planning skills
• Highly motivated, a self-starter, and a team player
• Ability to work across teams and programming languages to find root causes of deep and complex issues
• Experience of the verification process applied in CPU and/or ASIC environments
• SystemVerilog, Python, C++, Linux
Desirable skills:
• UVM
• SVA
• Assembly languages
• LLVM, GCC
• DVCS, e.g. Git
• SGE or other DRMS
• XML and XPath/XSLT
• Web programming – HTML/DOM, JavaScript, SQL
Benefits:
In addition to a competitive salary, Graphcore offers a competitive benefits package. We welcome people of different backgrounds and experiences; we’re committed to building an inclusive work environment that makes Graphcore a great home for everyone. We offer an equal opportunity process and understand that there are visible and invisible differences in all of us. We can provide a flexible approach to interview and encourage you to chat to us if you require any reasonable adjustments.
Ready to apply?
Apply to Graphcore
Staff Embedded SW/FW Engineer (Bringup)
Graphcore is a globally recognised leader in Artificial Intelligence computing systems. The company designs advanced semiconductors and data centre hardware that provide the specialised processing power needed to drive AI innovation, while delivering the efficiency required to support its broader adoption.
As part of the SoftBank Group, Graphcore is a member of an elite family of companies responsible for some of the world’s most transformative technologies. We are opening a new AI Engineering Campus in Bengaluru which will play a central role in Graphcore's work building the future of AI computing.
Job Summary
We have an exciting opportunity to be part of a collaborative, cross-functional development team developing C code used to validate cutting-edge, high-performance AI chips and platforms. You will play a critical role in supporting new product introductions and post-silicon validation.
Working within the Post-Silicon Bringup team, you will be involved with bringing first silicon to life, developing code primarily in C to configure and exercise systems and sub-systems on new silicon devices, and working closely with many other teams to help each device become a fully characterised, working product, reporting project status and progress to program management on a regular basis. You will have the opportunity to, and be responsible for, leading, mentoring, and providing technical guidance to other engineering team members. In this role, you can leverage your experience and industry knowledge to architect and drive implementation of continuous improvements to test infrastructure and processes.
The Team
The Post-Silicon Bringup team sits within the Architecture and Validation team. We are responsible for the bringup and validation of new silicon when it returns from manufacture, enabling and supporting the production SW and FW teams as they bring up their software, and supporting the Silicon Characterisation team.
Responsibilities and Duties
Candidate Profile
Essential:
Desirable:
Benefits:
In addition to a competitive salary, Graphcore offers a competitive benefits package. We welcome people of different backgrounds and experiences; we’re committed to building an inclusive work environment that makes Graphcore a great home for everyone. We offer an equal opportunity process and understand that there are visible and invisible differences in all of us. We can provide a flexible approach to interview and encourage you to chat to us if you require any reasonable adjustments.
Ready to apply?
Apply to Graphcore
Silicon Verification Engineer
Multiple roles across different levels
Graphcore is a globally recognised leader in Artificial Intelligence computing systems. The company designs advanced semiconductors and data centre hardware that provide the specialised processing power needed to drive AI innovation, while delivering the efficiency required to support its broader adoption.
As part of the SoftBank Group, Graphcore is a member of an elite family of companies responsible for some of the world’s most transformative technologies. We are opening a new AI Engineering Campus in Bengaluru which will play a central role in Graphcore's work building the future of AI computing.
The verification team sits within the Silicon design team and is responsible for ensuring that the RTL created by the logical design team and used by the physical design team matches the architecture specification for Graphcore silicon. The silicon verification engineer is responsible for verification activities within Graphcore, helping the team meet the company's objectives for quality silicon delivery.
Responsibilities
Essential skills:
• 8 to 12 years of experience
• Verification experience in relevant industry
• Proven leadership and planning skills
• Highly motivated, a self-starter, and a team player
• Ability to work across teams and programming languages to find root causes of deep and complex issues
• Experience of the verification process applied in CPU and/or ASIC environments
• SystemVerilog, Python, C++, Linux
Desirable skills:
• UVM
• SVA
• Assembly languages
• LLVM, GCC
• DVCS, e.g. Git
• SGE or other DRMS
• XML and XPath/XSLT
• Web programming – HTML/DOM, JavaScript, SQL
Benefits:
In addition to a competitive salary, Graphcore offers a competitive benefits package. We welcome people of different backgrounds and experiences; we’re committed to building an inclusive work environment that makes Graphcore a great home for everyone. We offer an equal opportunity process and understand that there are visible and invisible differences in all of us. We can provide a flexible approach to interview and encourage you to chat to us if you require any reasonable adjustments.
Ready to apply?
Apply to Graphcore
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras, to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
The Role:
In this exciting role, you will be responsible for the bring-up and optimization of Cerebras's Wafer-Scale Engine (WSE). The ideal candidate will have experience delivering end-to-end solutions, working closely with teams across chip design, system performance, software development, and productization.
Responsibilities:
Skills & Qualifications:
Preferred:
Location:
Bangalore, India
Toronto, Canada
Sunnyvale, California.
For Sunnyvale: The base salary range for this position is $175,000 to $275,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
Ready to apply?
Apply to Cerebras Systems
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras, to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
The Role
We are seeking a highly skilled and motivated Manufacturing Bring-up Engineer to join our team. As the Manufacturing Bring-up Engineer, you will support the execution, implementation, and evolution of our system-level bring-up process in the manufacturing pipeline. This is a high-visibility role that requires strong technical expertise, coordination, and collaboration to deliver our product from manufacturing to the customer.
Responsibilities
Skills & Qualifications
Preferred:
Location
Bangalore, India / Toronto, Canada / Sunnyvale, California
The base salary range for this position is $170,000 to $230,000 annually. Actual compensation may include bonus and equity, and will be determined based on factors such as experience, skills, and qualifications.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
Ready to apply?
Apply to Cerebras Systems
Location:
Hybrid, working onsite at our Bangalore offices 3 days per week.
Minimum:
BS in Electrical Engineering, Computer Science, or a related field with 8+ years of industry experience; an MS in Electrical Engineering, Computer Science, or a related field with 7+ years of industry experience is preferred.
Experience across the IP/SoC verification cycle, preferably from concept to tape-out to bring-up.
Good knowledge of verification methodologies such as UVM/OVM, etc.
Hands-on ASIC-SoC design verification tests and debug experience.
Fluency with SystemVerilog randomization constraints, coverage, and assertions methodology.
Good problem-solving skills and a passion for taking on challenges, particularly in the AI domain.
Passionate about AI and thriving in a fast-paced and dynamic startup culture.
Preferred:
Contributed to the successful implementation of multiple verification environments and tape-out efforts.
Experience with C/C++ is a plus.
Ready to apply?
Apply to Phizenix