Join Levio, a leader in digital transformation, and take your career to the next level. You will work alongside high-caliber professionals on ambitious, large-scale technology projects, directly embedded in our clients’ environments. At Levio, we value expertise, curiosity, and continuous improvement — and we give you the space to grow.
The ML / AI Engineer designs, builds, deploys, and operates production-grade machine learning and generative AI systems. This role owns the end-to-end ML lifecycle, ensuring models and AI services are scalable, reliable, and secure, and deliver measurable business value. The role will be remote.
The salary range provided reflects a good faith estimate based on factors such as experience, technical expertise, location, and relevant certifications. Final compensation will be determined according to the specific circumstances of each candidate.
Estimated salary range: $110,000 to $150,000 per year.
This posting is a current hiring need.
Levio offers a comprehensive and flexible benefits package designed to support your professional growth and personal wellbeing, including:
Position Details
Notice on the Use of Artificial Intelligence in Recruitment
We use AI-enabled tools to help sort and review applications based on job-related criteria. Final decisions regarding candidate progression are always made by a human recruiter.
Employment Equity
Levio subscribes to the principle of employment equity and applies an equal access employment program for women, Indigenous peoples, visible minorities, ethnic minorities, and persons with disabilities.
We value diversity and inclusion and are committed to creating a healthy, accessible, and rewarding work environment that highlights the unique contributions of our employees. Accommodations are available upon request for candidates participating in all aspects of the selection process.
Ready to apply?
Apply to Levio
Are you looking to thrive in a stimulating work environment?
Join Levio, a leader in digital transformation, and take your career to the next level. You will work alongside high-caliber professionals on ambitious, large-scale technology projects, directly embedded in our clients’ environments. At Levio, we value expertise, curiosity, and continuous improvement — and we give you the space to grow.
The salary range provided reflects a good faith estimate based on factors such as experience, technical expertise, location, and relevant certifications. Final compensation will be determined according to the specific circumstances of each candidate.
Estimated salary range: $100,000 to $140,000 per year.
This posting is a current hiring need.
Levio offers a comprehensive and flexible benefits package designed to support your professional growth and personal wellbeing, including:
Position Details
Notice on the Use of Artificial Intelligence in Recruitment
We use AI-enabled tools to help sort and review applications based on job-related criteria. Final decisions regarding candidate progression are always made by a human recruiter.
Employment Equity
Levio subscribes to the principle of employment equity and applies an equal access employment program for women, Indigenous peoples, visible minorities, ethnic minorities, and persons with disabilities.
We value diversity and inclusion and are committed to creating a healthy, accessible, and rewarding work environment that highlights the unique contributions of our employees. Accommodations are available upon request for candidates participating in all aspects of the selection process.
Ready to apply?
Apply to Levio
We are seeking a Senior Machine Learning Engineer to join the Growth Tech Alliance. In this role, you will architect and deploy the robust infrastructure behind our intelligent marketing systems. You will be responsible for maturing algorithmic prototypes into high-performance production systems, ensuring our AI-driven marketing optimization is served reliably and autonomously at a global scale.
S'more about the team
We are hiring a Senior Machine Learning Engineer to take our AI tooling to the next level by architecting and deploying the robust infrastructure behind our intelligent marketing optimization systems. You will provide critical engineering execution for our AI initiatives, developing scalable microservices for predictive scoring and orchestrating complex LLM-based agents for creative intelligence. As the ML engineering expert for the team, you will drive the maturation of algorithmic prototypes into high-performance production systems with maximum Speed & Agility, shaping the future of how HelloFresh automates marketing at an unprecedented scale.
Lettuce share what this role will be responsible for
As a core member of the engineering team, you will focus on productionizing ML infrastructure across several domains:
Sound a-peeling? Here's what we're looking for
Let’s cut to the cheese, this is why you'll love it here
Flexible Hybrid Approach
At HelloFresh, we know that flexible work arrangements are essential in enabling you to do your best work, while balancing your personal and life needs. Offering remote work flexibility, along with the opportunity to interact and collaborate in the office are all a part of creating a great employee experience.
To meet these needs, we are pleased to provide Flexible Hybrid work. Flexible Hybrid is a people-first approach based on choice, trust, and personalization that empowers teams to choose when and how often they work from the office and from home, in addition to team days and company days. This means a minimum of 2 days in office per week, with most teams in office between 2-3 days a week.
#LI-HYBRID
#Engineering
HelloFresh Canada uses AI-integrated technology to help us process and evaluate applications more efficiently. This includes tools that screen and assess candidate qualifications based on the requirements for this role. While these tools assist our workflow, all final selection decisions are made by our hiring team.
This is a posting for an existing vacancy. We are actively seeking to fill this position.
Ready to apply?
Apply to HelloFresh
The Growth Tech Alliance's goal is to enable HelloFresh to drive smarter marketing through AI-powered automation, intelligent systems, and advanced optimization. We've pioneered cutting-edge approaches to programmatic campaign optimization using state-of-the-art AI/ML technologies and are now scaling these capabilities across our global marketing operations. Our team’s key responsibilities include designing, building, and maintaining autonomous decision-making systems, advanced ML models for marketing optimization, and real-time intelligence that drives measurable business impact. We work at the intersection of modern AI architectures, deep statistical modelling, and enterprise-scale ML operations.
S'more about the team
As a Senior Data Scientist in AdTech, you will design, build, and deploy production-grade machine learning models that drive significant marketing ROI. You will provide critical expertise in advancing our intelligent marketing optimization systems, focusing on the entire ML lifecycle from data preprocessing and feature engineering to large-scale deployment and performance monitoring. By translating complex marketing challenges into robust production systems, you will ensure our AI-driven capabilities deliver measurable business impact and enhanced efficiency at a global scale.
Lettuce share what this role will be responsible for
Sound a-peeling? Here's what we're looking for
Let’s cut to the cheese, this is why you'll love it here
Flexible Hybrid Approach
At HelloFresh, we know that flexible work arrangements are essential in enabling you to do your best work, while balancing your personal and life needs. Offering remote work flexibility, along with the opportunity to interact and collaborate in the office are all a part of creating a great employee experience.
To meet these needs, we are pleased to provide Flexible Hybrid work. Flexible Hybrid is a people-first approach based on choice, trust, and personalization that empowers teams to choose when and how often they work from the office and from home, in addition to team days and company days. This means a minimum of 2 days in office per week, with most teams in office between 2-3 days a week.
#LI-HYBRID
HelloFresh Canada uses AI-integrated technology to help us process and evaluate applications more efficiently. This includes tools that screen and assess candidate qualifications based on the requirements for this role. While these tools assist our workflow, all final selection decisions are made by our hiring team.
This is a posting for an existing vacancy. We are actively seeking to fill this position.
Ready to apply?
Apply to HelloFresh
Our mission is to democratize finance for all. An estimated $124 trillion of assets will be inherited by younger generations in the next two decades: the largest transfer of wealth in human history. If you’re ready to be at the epicenter of this historic cultural and financial shift, keep reading.
We are building an elite team, applying frontier technologies to the world’s biggest financial problems. We’re looking for bold thinkers. Sharp problem-solvers. Builders who are wired to make an impact. Robinhood isn’t a place for complacency; it’s where ambitious people do the best work of their careers. We’re a high-performing, fast-moving team with ethics at the center of everything we do. Expectations are high, and so are the rewards.
The Fraud Data Science team safeguards Robinhood and its customers by detecting and preventing fraud and abuse across our platform. We leverage machine learning and analytics to combat malicious behavior in real time, supporting a safe and trusted experience for all users. Our work has a direct impact on customer security, company risk posture, and regulatory compliance.
As a Senior Data Scientist on the Fraud team, you will own the design and deployment of ML solutions that proactively surface suspicious activity, reduce financial loss, and improve fraud detection precision. You’ll collaborate closely with engineering, product, risk, and compliance partners to influence system architecture, shape policy through data, and enhance the safety and integrity of our platform.
This role is based in our Menlo Park office, with in-person attendance expected at least 3 days per week.
At Robinhood, we believe in the power of in-person work to accelerate progress, spark innovation, and strengthen community. Our office experience is intentional, energizing, and designed to fully support high-performing teams.
In addition to the base pay range listed below, this role is also eligible for bonus opportunities + equity + benefits.
Base pay for the successful applicant will depend on a variety of job-related factors, which may include education, training, experience, location, business needs, or market demands. The expected base pay range for this role is based on the location where the work will be performed and is aligned to one of 3 compensation zones. For other locations not listed, compensation can be discussed with your recruiter during the interview process.
Base Pay Range:
Click here to learn more about our Total Rewards, which vary by region and entity.
If our mission energizes you and you’re ready to build the future of finance, we look forward to seeing your application.
Robinhood provides equal opportunity for all applicants, offers reasonable accommodations upon request, and complies with applicable equal employment and privacy laws. Inclusion is built into how we hire and work—welcoming different backgrounds, perspectives, and experiences so everyone can do their best. Please review the Privacy Policy for your country of application.
Ready to apply?
Apply to Robinhood
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Join the team revolutionizing AI computing at Tenstorrent. You'll work on TT-Forge, our MLIR-based compiler that enables developers to run AI on all configurations of Tenstorrent hardware using an open-source, performant, and general-purpose compiler. You will be at the forefront of the AI hardware revolution, building compiler technologies that redefine what’s possible.
This role is hybrid and based out of Toronto, ON.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
What We Need
What You Will Learn
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
Ready to apply?
Apply to Tenstorrent
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Join the team revolutionizing AI computing at Tenstorrent. You'll work on TT-Forge, our MLIR-based compiler that enables developers to run AI on all configurations of Tenstorrent hardware using an open-source, performant, and general-purpose compiler. You will be at the forefront of the AI hardware revolution, building compiler technologies that redefine what’s possible.
This role is hybrid and can be based out of Santa Clara, CA; Austin, TX; or Toronto, ON.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
What We Need
What You Will Learn
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
Ready to apply?
Apply to Tenstorrent
At Lyft, our purpose is to serve and connect. We aim to achieve this by cultivating a work environment where all team members belong and have the opportunity to thrive.
As a Data Scientist on the Mapping team, you will collaborate with our world-class team of engineers, product managers, and designers to grow and improve the quality of recommended routes and the accuracy of our travel time estimations. We're looking for a passionate, driven Data Scientist who is excited to dive into our spatial data and build a best-in-class mapping product that provides safe, efficient, and seamless navigation for our rideshare drivers.
Data Science is at the heart of Lyft’s products and decision-making. You will leverage data and rigorous, analytical thinking to shape our mapping products and make business decisions that put our customers first. This will involve identifying and scoping opportunities, shaping priorities, recommending technical solutions, designing experiments, and measuring the impact of new features. You will help us solve some of the most impactful problems in mapping, including:
Lyft is committed to creating an inclusive workforce that fosters belonging. Lyft believes that every person has a right to equal employment opportunities without discrimination because of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, marital status, family status, disability, pardoned record of offences, or any other basis protected by applicable law or by Company policy. Lyft also strives for a healthy and safe workplace and strictly prohibits harassment of any kind. Accommodation for persons with disabilities will be provided upon request in accordance with applicable law during the application and hiring process. Please contact your recruiter if you wish to make such a request.
Lyft highly values having employees working in-office to foster a collaborative work environment and company culture. This role will be in-office on a hybrid schedule — Team Members will be expected to work in the office at least 3 days per week, including on Mondays, Wednesdays, and Thursdays. Lyft considers working in the office at least 3 days per week to be an essential function of this hybrid role. Your recruiter can share more information about the various in-office perks Lyft offers. Additionally, hybrid roles have the flexibility to work from anywhere for up to 4 weeks per year. #Hybrid
The expected base pay range for this position in the Toronto area is $108,000 - $135,000, not inclusive of potential equity offering, bonus or benefits. Salary ranges are dependent on a variety of factors, including qualifications, experience and geographic location. Your recruiter can share more information about the salary range specific to your working location and other factors during the hiring process.
Lyft may use artificial intelligence to screen applicants, however, Lyft employees make the ultimate selection and hiring decisions.
This job fills an existing vacancy.
Ready to apply?
Apply to Lyft
We are a Canadian leader in digital automotive solutions. Our flagship brands — AutoTrader.ca, AutoSync, Dealertrack Canada and CMS — help Canadians buy, sell, and finance vehicles with confidence.
AutoTrader.ca is Canada’s largest automotive marketplace, with over 25 million monthly visits.
As part of AutoScout24 group, Europe’s largest online car marketplace, we’re shaping the future of automotive retail in Canada and beyond.
The base salary range for this position is CAD 180K – CAD 220K.
This range reflects the expected compensation at the time of posting. The final offer may vary and can be higher based on relevant skills, experience, location, and market conditions. Based on the role, the total rewards package may also include benefits, bonus, and other employee offerings.
What's in it for you:
We understand that there is life at work and life outside of work. Here are a few benefits we all share that support us in being our creative best.
For a career where you can drive our business and shape your future, apply now.
Use of Artificial Intelligence in Hiring: We use artificial intelligence (“AI”) in our hiring process, including to screen, assess, or select applicants for this position.
Vacancy Status: This job posting is for an existing vacancy.
Ready to apply?
Apply to AutoTrader.ca
Stripe is a financial infrastructure platform for businesses. Millions of companies—from the world’s largest enterprises to the most ambitious startups—use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone’s reach while doing the most important work of your career.
The Supportability Evaluation team acts as stewards of the financial ecosystem. Our mission is to protect Stripe’s reputation with our global financial partners by architecting highly precise, automated supportability controls. We develop the AI/ML models and systems that detect and action supportability violations in real-time. We're responsible for building high-fidelity detection engines that ensure our merchants remain compliant across the globe, balancing the scale of millions of users with the surgical precision required by the world’s largest financial institutions.
As a Machine Learning Engineer in Supportability, you will be responsible for designing, building, training, evaluating, deploying, and owning AI/ML models in production. You will work closely with software engineers, machine learning engineers, product managers, and data scientists to operate Stripe’s ML powered systems, features, and products. You will also have the opportunity to contribute to and influence AI/ML architecture at Stripe and be a part of a larger community.
We are looking for ML Engineers who are passionate about building AI/ML systems that touch the lives of millions. You have experience building and evaluating advanced AI/ML models and deploying them to production. You are comfortable with ambiguity, love to take initiative, have a bias towards action, and thrive in a collaborative environment.
Ready to apply?
Apply to Stripe
Stripe is a financial infrastructure platform for businesses. Millions of companies—from the world’s largest enterprises to the most ambitious startups—use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone’s reach while doing the most important work of your career.
The Support Experience engineering organization builds and improves Stripe’s user support from end to end: how users get help within our products, how they get in touch with us when they have questions, and how our teams use internal tools to answer those questions. We’re accountable for the quality and reliability of this support stack and we use data and firsthand user research to continuously improve it.
Providing great support to users of all sizes is culturally important to everyone at Stripe. We are a group of friendly, user-oriented engineers who partner closely with Stripe’s world-class design, product, and operational teams. Our scope includes the external-facing support interfaces (support.stripe.com), content, entry points, internal tooling, and case routing, as well as helping product teams across the company reduce support volume by improving our products. We are also using the latest generative AI technologies to re-imagine support experiences, and are developing AI assistants both for Stripe’s users and for our internal operations teams to help them be more productive.
As a Machine Learning Engineer on the Support Experience team, you'll play a crucial role in enhancing our self-serve support experiences. You will be responsible for designing, building, training, evaluating, deploying, and owning ML models in production. For example, we apply LLMs to answer user questions with conversational agents and personalize product documentation, and are building automated systems to solve complex user problems. You will work closely with software engineers, machine learning engineers, product managers, and data scientists to operate Stripe’s ML powered systems, features, and products. You will also have the opportunity to contribute to and influence ML architecture at Stripe and be a part of a larger ML community.
We are looking for ML Engineers who are passionate about building ML systems that touch the lives of millions. You have experience developing efficient feature pipelines, building advanced ML models, and deploying them to production. You are comfortable with ambiguity, love to take initiative, have a bias towards action, and thrive in a collaborative environment.
Ready to apply?
Apply to Stripe
About the Role:
The Machine Learning team at Tubi drives the innovation behind personalized user experiences. With the largest inventory in the industry and hundreds of millions of viewers, we tackle problems in the space of recommendations, search, content understanding, and ads optimization that shape the future of streaming.
We are seeking a highly skilled Machine Learning Engineer to contribute to transformative projects in video personalization. In this role, you will design and implement advanced algorithms and systems to improve our personalization strategy. As a senior technical expert, you will tackle complex problems in machine learning at scale, collaborating closely with cross-functional teams to develop and optimize machine learning-driven solutions.
What You'll Do:
Your Background:
#LI-Hybrid #LI-SC1
Pursuant to state and local pay disclosure requirements, the pay range for this role, with final offer amount dependent on education, skills, experience, and location, is listed annually below. This role is also eligible for an annual discretionary bonus, long-term incentive plan, and various benefits including medical/dental/vision insurance, a 401(k) plan, paid time off, and other benefits in accordance with applicable plan documents.
High-cost labor markets, such as but not limited to Los Angeles, New York City, and San Francisco
Tubi is a division of Fox Corporation, and the FOX Employee Benefits summarized here cover the majority of all US employee benefits. The distinctions below outline the differences between the Tubi and FOX benefits:
Boldly built for every fandom, Tubi is a free streaming service that entertains over 100 million monthly active users. Tubi offers the world's largest collection of Hollywood movies and TV shows, thousands of creator-led stories and hundreds of Tubi Originals made for the most passionate fans. Headquartered in San Francisco and founded in 2014, Tubi is part of Tubi Media Group, a division of Fox Corporation.
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, gender identity, disability, protected veteran status, or any other characteristic protected by law. We will consider for employment qualified applicants with criminal histories consistent with applicable law.
Ready to apply?
Apply to Tubi
The Machine Learning team at Tubi drives the innovation behind personalized user experiences. With the largest inventory in the industry and hundreds of millions of viewers, we tackle problems in the space of recommendations, search, content understanding, and ads optimization that shape the future of streaming.
We are seeking a Director of Machine Learning Engineering and Infrastructure to lead a hybrid team bridging advanced ML engineering with world-class infrastructure design. In this role, you will own the strategic direction and execution for scaling our machine learning capabilities while ensuring our distributed systems and infrastructure can support innovation at massive scale. You will combine technical depth with leadership excellence to guide teams that deliver both foundational ML systems and high-performance distributed services.
What You'll Do:
Your Background:
Pursuant to state and local pay disclosure requirements, the pay range for this role, with final offer amount dependent on education, skills, experience, and location, is listed annually below. This role is also eligible for an annual discretionary bonus, long-term incentive plan, and various benefits including medical/dental/vision insurance, a 401(k) plan, paid time off, and other benefits in accordance with applicable plan documents.
Tubi is a division of Fox Corporation, and the FOX Employee Benefits summarized here cover the majority of all US employee benefits. The distinctions below outline the differences between the Tubi and FOX benefits:
Boldly built for every fandom, Tubi is a free streaming service that entertains over 100 million monthly active users. Tubi offers the world's largest collection of Hollywood movies and TV shows, thousands of creator-led stories and hundreds of Tubi Originals made for the most passionate fans. Headquartered in San Francisco and founded in 2014, Tubi is part of Tubi Media Group, a division of Fox Corporation.
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, gender identity, disability, protected veteran status, or any other characteristic protected by law. We will consider for employment qualified applicants with criminal histories consistent with applicable law.
Ready to apply?
Apply to Tubi
At Braze, we have found our people. We’re a genuinely approachable, exceptionally kind, and intensely passionate crew.
We seek to ignite that passion by setting high standards, championing teamwork, and creating work-life harmony as we collectively navigate rapid growth on a global scale while striving for greater equity and opportunity – inside and outside our organization.
To flourish here, you must be prepared to set a high bar for yourself and those around you. There is always a way to contribute: acting with autonomy, having accountability, and being open to new perspectives are essential to our continued success.
Our deep curiosity to learn and our eagerness to share diverse passions with others gives us balance and injects a one-of-a-kind vibrancy into our culture.
If you are driven to solve exhilarating challenges and have a bias toward action in the face of change, you will be empowered to make a real impact here, with a sharp and passionate team at your back. If Braze sounds like a place where you can thrive, we can’t wait to meet you.
WHAT YOU'LL DO
Our Data Scientist, AI Deployment team is a group of creative technical experts who design and build end-to-end machine learning solutions that power 1-to-1 personalization for some of the world's leading brands. In this role, you will:
WHO YOU ARE
For candidates based in Ontario, the pay range at the start of employment for this position is expected to be between CA$112,000 and CA$168,000/year, with expected On Target Earnings (OTE) between CA$125,000 and CA$188,000/year (including performance-based or variable compensation such as bonus or commission). Your particular offer may vary depending on multiple individual factors, including market location, job-related knowledge, skills, and experience. In addition to cash compensation, this role qualifies for a comprehensive Total Rewards package that includes equity grants of restricted stock units (RSUs), so you will own a piece of our company.
#LI-Hybrid
WHAT WE OFFER
Braze benefits vary by location, and we encourage you to review our specific benefits offerings for each country here. More details on benefits plans will be provided if you receive an offer of employment.
From offering comprehensive benefits to fostering hybrid ways of working, we’ve got you covered so you can prioritize work-life harmony. Braze offers benefits such as:
ABOUT BRAZE
Braze is the leading customer engagement platform that empowers brands to Be Absolutely Engaging.™ Braze helps brands deliver great customer experiences that drive value both for consumers and for their businesses. Built on a foundation of composable intelligence, BrazeAI™ allows marketers to combine and activate AI agents, models, and features at every touchpoint throughout the Braze Customer Engagement Platform for smarter, faster, and more meaningful customer engagement. From cross-channel messaging and journey orchestration to AI-powered decisioning and optimization, Braze enables companies to turn action into interaction through autonomous, 1:1 personalized experiences.
The company has repeatedly been recognized as a Leader in marketing technology by industry analysts, and was voted a G2 “Best of Marketing and Digital Advertising Software Product” in 2025.
Braze was also named one of the 2025 Best Companies To Work For by U.S. News & World Report, one of Newsweek's 2025 America's Greatest Companies, and a 2025 Fortune Best Workplace in Technology™ by Great Place To Work®, among other accolades. Braze is also proudly certified as a Great Place to Work® in the U.S., the UK, Australia, and Singapore.
The company is headquartered in New York with offices in Austin, Berlin, Bucharest, Chicago, Dubai, Jakarta, London, Paris, San Francisco, São Paulo, Singapore, Seoul, Sydney and Tokyo.
At Braze, we strive to create equitable growth and opportunities inside and outside the organization.
Building meaningful connections is at the heart of everything we do, and that includes our recruiting practices. We're committed to offering all candidates a fair, accessible, and inclusive experience – regardless of age, color, disability, gender identity, marital status, maternity, national origin, pregnancy, race, religion, sex, sexual orientation, or status as a protected veteran. When applying and interviewing with Braze, we want you to feel comfortable showcasing what makes you you.
We know that sometimes different circumstances can lead talented people to hesitate to apply for a role unless they meet 100% of the criteria. If this sounds familiar, we encourage you to apply, as we’d love to meet you.
Please see our Candidate Privacy Policy for more information on how Braze processes your personal information during the recruitment process and, if applicable based on your location, how you can exercise any privacy rights.
Ready to apply?
Apply to Braze
MaintainX is the world's leading mobile-first Asset and Work Intelligence platform for industrial and frontline environments. We offer a modern, IoT-enabled, cloud-based solution that powers maintenance, safety, and operations on physical equipment and facilities.
We help more than 12,000 organizations, including Duracell, Univar Solutions, Titan America, McDonald's, Brenntag, Cintas, Xylem, and Shell, achieve operational excellence and reliability at scale.
Following our $150 million Series D led by Bain Capital Ventures, Bessemer Ventures, August Capital, Amity Ventures, and Ridge Ventures, MaintainX has raised a total of $254 million, valuing the company at $2.5 billion.
As we enter our next phase of growth, we are investing deeply in AI/ML, LLMs, and Industrial IoT to transform how frontline teams operate: predicting failures before they happen, automating workflows, and embedding intelligence into every asset and procedure.
We are seeking a highly skilled and motivated Senior Applied Machine Learning Developer to guide the technical direction and architecture of our Predictive Maintenance and Asset Intelligence initiatives.
You will combine deep machine learning expertise with strong software engineering and leadership skills, mentoring engineers, scaling systems, and driving the roadmap for AI-enabled maintenance intelligence across thousands of industrial sites.
This role sits at the intersection of ML architecture, IoT data systems, and product impact, shaping the foundation for MaintainX's predictive and generative AI strategy.
What you'll do:
About you:
Special consideration will be given to candidates with the following:
What's in it for you:
About us:
MaintainX is committed to creating a diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Ready to apply?
Apply to MaintainX
MaintainX is the world's leading mobile-first Asset and Work Intelligence platform for industrial and frontline environments. We offer a modern, IoT-enabled, cloud-based solution that powers maintenance, safety, and operations on physical equipment and facilities.
We help more than 12,000 organizations, including Duracell, Univar Solutions, Titan America, McDonald's, Brenntag, Cintas, Xylem, and Shell, achieve operational excellence and reliability at scale.
Following our $150 million Series D led by Bain Capital Ventures, Bessemer Ventures, August Capital, Amity Ventures, and Ridge Ventures, MaintainX has raised a total of $254 million, valuing the company at $2.5 billion.
As we enter our next phase of growth, we are investing deeply in AI/ML, LLMs, and Industrial IoT to transform how frontline teams operate: predicting failures before they happen, automating workflows, and embedding intelligence into every asset and procedure.
What you'll do:
About you:
Bonus skills:
What's in it for you:
About us:
Ready to apply?
Apply to MaintainX
MaintainX is the world’s leading mobile-first Asset and Work Intelligence platform for industrial and frontline environments. We’re a modern, IoT-enabled, cloud-based solution that powers maintenance, safety, and operations on physical equipment and facilities.
We help 12,000+ organizations—including Duracell, Univar Solutions, Titan America, McDonald’s, Brenntag, Cintas, Xylem, and Shell—achieve operational excellence and reliability at scale.
Following our $150 million Series D led by Bain Capital Ventures, Bessemer Ventures, August Capital, Amity Ventures, and Ridge Ventures, MaintainX has raised a total of $254 million, valuing the company at $2.5 billion.
As we enter our next phase of growth, we’re investing deeply in AI/ML, LLMs, and Industrial IoT to transform how frontline teams operate—predicting failures before they happen, automating workflows, and embedding intelligence into every asset and procedure.
We are seeking a highly skilled and motivated Senior Applied Machine Learning Developer to guide the technical direction and architecture of our Predictive Maintenance and Asset Intelligence initiatives.
You’ll combine deep ML expertise with strong software development and leadership skills—mentoring developers, scaling systems, and driving the roadmap for AI-enabled maintenance intelligence across thousands of industrial sites.
This role sits at the intersection of ML architecture, IoT data systems, and product impact, shaping the foundation for MaintainX’s predictive and generative AI strategy.
What you’ll do:
About you:
Bonus skills:
What’s in it for you:
About us:
We exist to make the lives of frontline and maintenance teams easier by building software that meets their real-world needs. Our product transforms how 80% of the global workforce—those who don’t sit behind a desk—manage their operations, assets, and teams.
MaintainX is committed to creating a diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Ready to apply?
Apply to MaintainX
MaintainX is the world’s leading mobile-first Asset and Work Intelligence platform for industrial and frontline environments. We’re a modern, IoT-enabled, cloud-based solution that powers maintenance, safety, and operations on physical equipment and facilities.
We help 12,000+ organizations—including Duracell, Univar Solutions, Titan America, McDonald’s, Brenntag, Cintas, Xylem, and Shell—achieve operational excellence and reliability at scale.
Following our $150 million Series D led by Bain Capital Ventures, Bessemer Ventures, August Capital, Amity Ventures, and Ridge Ventures, MaintainX has raised a total of $254 million, valuing the company at $2.5 billion.
As we enter our next phase of growth, we’re investing deeply in AI/ML, LLMs, and Industrial IoT to transform how frontline teams operate—predicting failures before they happen, automating workflows, and embedding intelligence into every asset and procedure.
What you’ll do:
About you:
Bonus skills:
What’s in it for you:
About us:
We exist to make the lives of frontline and maintenance teams easier by building software that meets their real-world needs. Our product transforms how 80% of the global workforce—those who don’t sit behind a desk—manage their operations, assets, and teams.
MaintainX is committed to creating a diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Ready to apply?
Apply to MaintainX
Cresta is on a mission to turn every customer conversation into a competitive advantage by unlocking the true potential of the contact center. Our platform combines the best of AI and human intelligence to help contact centers discover customer insights and behavioral best practices, automate conversations and inefficient processes, and empower every team member to work smarter and faster. Cresta was born from the prestigious Stanford AI lab; our co-founder and chairman is Sebastian Thrun, the genius behind Google X, Waymo, Udacity, and more. Our leadership also includes our CEO, Ping Wu, co-founder of Google Contact Center AI and the Vertex AI platform, and our co-founder Tim Shi, an early member of OpenAI.
Join us on this thrilling journey to revolutionize the workforce with AI. The future of work is here, and it's at Cresta.
At Cresta, the Knowledge Assist (KA) team develops AI solutions for the contact center industry, focusing on improving agent productivity by providing access to the right knowledge at the right time.
Our current projects:
Our internships offer a dynamic, fast-paced environment where you’ll collaborate with top researchers and engineers in the field. We provide opportunities for interns to make significant contributions to AI research and apply novel techniques at scale.
This is a unique opportunity to shape the future of AI at Cresta by solving complex problems and bringing breakthrough AI advancements into production environments.
Responsibilities:
Perks & Benefits:
Compensation for this position includes a base salary, equity, and a variety of benefits. Actual base salaries will be based on candidate-specific factors, including experience, skillset, and location, and local minimum pay requirements as applicable. We are actively hiring for this role in the US and Canada. Your recruiter can provide further details.
This posting will be used to fill a newly-created role.
We have noticed a rise in recruiting impersonations across the industry, where scammers attempt to access candidates' personal and financial information through fake interviews and offers. All Cresta recruiting email communications will always come from the @cresta.ai domain. Any outreach claiming to be from Cresta via other sources should be ignored. If you are uncertain whether you have been contacted by an official Cresta employee, reach out to recruiting@cresta.ai.
Ready to apply?
Apply to Cresta
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras, to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
About the Role
We are seeking a versatile and experienced engineer to join our SOTA Training Platform team. This team is responsible for rapidly bringing up state-of-the-art open-source models (such as LLaMA and Qwen) and customer-provided proprietary models on our Cerebras CSX systems. Success in this role requires a system-minded generalist who thrives in fast-paced bringup environments and is comfortable working across the entire Cerebras software stack.
Your work will play a critical role in achieving unprecedented levels of performance, efficiency, and scalability for AI applications.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
Ready to apply?
Apply to Cerebras Systems
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras, to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
About the Role
We are seeking a versatile and experienced engineer to join our Inference Core Model Bringup team. This team is responsible for rapidly bringing up state-of-the-art open-source models (such as LLaMA and Qwen) and customer-provided proprietary models on our Cerebras CSX systems. Success in this role requires a system-minded generalist who thrives in fast-paced bringup environments and is comfortable working across the entire Cerebras software stack.
Your work will play a critical role in achieving unprecedented levels of performance, efficiency, and scalability for AI applications.
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
Ready to apply?
Apply to Cerebras Systems
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras, to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
As a Kernel Engineer on our team, you will work at the intersection of hardware and software, developing high-performance software for cutting-edge AI and HPC workloads. Your focus will be on implementing, optimizing, and scaling deep learning operations to fully leverage our custom, massively parallel processor architecture.
You will be part of a world-class team responsible for the design, performance tuning, and validation of foundational ML and HPC kernels. This includes building a library of parallel and distributed algorithms that maximize compute utilization and push the boundaries of training efficiency for state-of-the-art AI models. Your work will be critical to unlocking the full potential of our hardware and accelerating the pace of AI innovation.
Responsibilities
Skills And Qualifications
Preferred Skills And Qualifications
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.
Ready to apply?
Apply to Cerebras Systems
At Braze, we have found our people. We’re a genuinely approachable, exceptionally kind, and intensely passionate crew.
We seek to ignite that passion by setting high standards, championing teamwork, and creating work-life harmony as we collectively navigate rapid growth on a global scale while striving for greater equity and opportunity – inside and outside our organization.
To flourish here, you must be prepared to set a high bar for yourself and those around you. There is always a way to contribute: Acting with autonomy, having accountability and being open to new perspectives are essential to our continued success.
Our deep curiosity to learn and our eagerness to share diverse passions with others gives us balance and injects a one-of-a-kind vibrancy into our culture.
If you are driven to solve exhilarating challenges and have a bias toward action in the face of change, you will be empowered to make a real impact here, with a sharp and passionate team at your back. If Braze sounds like a place where you can thrive, we can’t wait to meet you.
As our customer base continues to grow with the excitement around BrazeAI, we’re expanding our team! Join our Forward-Deployed Data Scientist group of creative technical experts who partner with customers to ensure their success. In this role, you will:
WHAT WE OFFER
Braze benefits vary by location, and we encourage you to review our specific benefits offerings for each country here. More details on benefits plans will be provided if you receive an offer of employment.
From offering comprehensive benefits to fostering hybrid ways of working, we’ve got you covered so you can prioritize work-life harmony. Braze offers benefits such as:
ABOUT BRAZE
Braze is the leading customer engagement platform that empowers brands to Be Absolutely Engaging.™ Braze helps brands deliver great customer experiences that drive value both for consumers and for their businesses. Built on a foundation of composable intelligence, BrazeAI™ allows marketers to combine and activate AI agents, models, and features at every touchpoint throughout the Braze Customer Engagement Platform for smarter, faster, and more meaningful customer engagement. From cross-channel messaging and journey orchestration to AI-powered decisioning and optimization, Braze enables companies to turn action into interaction through autonomous, 1:1 personalized experiences.
The company has repeatedly been recognized as a Leader in marketing technology by industry analysts, and was voted a G2 “Best of Marketing and Digital Advertising Software Product” in 2025.
Braze was also named one of the 2025 Best Companies To Work For by U.S. News & World Report, one of Newsweek's 2025 America's Greatest Companies, and a 2025 Fortune Best Workplace in Technology™ by Great Place To Work®, among other accolades. Braze is also proudly certified as a Great Place to Work® in the U.S., the UK, Australia, and Singapore.
The company is headquartered in New York with offices in Austin, Berlin, Bucharest, Chicago, Dubai, Jakarta, London, Paris, San Francisco, São Paulo, Singapore, Seoul, Sydney and Tokyo.
At Braze, we strive to create equitable growth and opportunities inside and outside the organization.
Building meaningful connections is at the heart of everything we do, and that includes our recruiting practices. We're committed to offering all candidates a fair, accessible, and inclusive experience – regardless of age, color, disability, gender identity, marital status, maternity, national origin, pregnancy, race, religion, sex, sexual orientation, or status as a protected veteran. When applying and interviewing with Braze, we want you to feel comfortable showcasing what makes you you.
We know that sometimes different circumstances can lead talented people to hesitate to apply for a role unless they meet 100% of the criteria. If this sounds familiar, we encourage you to apply, as we’d love to meet you.
Please see our Candidate Privacy Policy for more information on how Braze processes your personal information during the recruitment process and, if applicable based on your location, how you can exercise any privacy rights.
Ready to apply?
Apply to Braze
At HeyGen, our mission is to make visual storytelling accessible to all. Over the last decade, visual content has become the preferred method of information creation, consumption, and retention. But the ability to create such content, in particular videos, continues to be costly and challenging to scale. Our ambition is to build technology that equips more people with the power to reach, captivate, and inspire audiences.
Learn more at www.heygen.com. Visit our Mission and Culture doc here.
We are seeking a seasoned Technical Leader to build and scale the foundational compute infrastructure that powers our state-of-the-art AI models—from multimodal training data pipelines to high-throughput, low-latency video generation.
You will be the core engineer responsible for building the robust, efficient, and scalable platform that enables our research and production teams to rapidly iterate on HeyGen's generative video models. Your contributions will directly impact model performance, developer productivity, and the final quality of every AI-generated video.
Optimize GPU Utilization: Design and implement mechanisms to aggressively optimize GPU and cluster utilization across thousands of devices for inference, training, data processing, and large-scale deployment of our state-of-the-art video generation models.
Develop Large-Scale AI Job Framework: Build highly scalable, reliable frameworks for launching and managing massive, heterogeneous compute jobs, including multi-modal high-volume data ingestion/processing, distributed model training, and continuous evaluation/benchmarking.
Enhance Observability: Develop world-class observability, tracing, and visualization tools for our compute cluster to ensure reliability and diagnose performance bottlenecks (e.g., memory, bandwidth, communication).
Accelerate Pipelines: Collaborate closely with AI researchers and AI engineers to integrate innovative acceleration techniques (e.g., custom CUDA kernels, distributed training libraries) into production-ready, scalable training and inference pipelines.
Infrastructure Management: Champion the adoption and optimization of modern cloud and container technologies (Kubernetes, Ray) for elastic, cost-efficient scaling of our distributed systems.
We are looking for a highly motivated engineer with deep experience operating and optimizing AI infrastructure at scale.
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
5+ years of full-time industry experience in large-scale MLOps, AI infrastructure, or HPC systems.
Experience with data frameworks and standards such as Ray, Apache Spark, and LanceDB.
Strong proficiency in Python and a high-performance language such as C++ for developing core infrastructure components.
Deep understanding and hands-on experience with modern orchestration and distributed computing frameworks such as Kubernetes and Ray.
Experience with core ML frameworks such as PyTorch, TensorFlow, or JAX.
Master's or PhD in Computer Science or a related technical field.
Demonstrated Tech Lead experience, driving projects from conceptual design through to production deployment across cross-functional teams.
Prior experience building infrastructure specifically for Generative AI models (e.g., diffusion models, GANs, or large language models) where cost and latency are critical.
Proven background in building and operating large-scale data infrastructure (e.g., Ray, Apache Spark) to manage petabytes of multi-modal data (video, audio, text).
HeyGen is an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Ready to apply?
Apply to HeyGen
At HeyGen, our mission is to make visual storytelling accessible to all. Over the last decade, visual content has become the preferred method of information creation, consumption, and retention. But the ability to create such content, in particular videos, continues to be costly and challenging to scale. Our ambition is to build technology that equips more people with the power to reach, captivate, and inspire audiences.
Learn more at www.heygen.com. Visit our Mission and Culture doc here.
We are seeking a seasoned Software Engineer to build and scale the foundational compute infrastructure that powers our state-of-the-art AI models—from multimodal training data pipelines to high-throughput, low-latency video generation.
You will be the core engineer responsible for building the robust, efficient, and scalable platform that enables our research and production teams to rapidly iterate on HeyGen's generative video models. Your contributions will directly impact model performance, developer productivity, and the final quality of every AI-generated video.
Optimize GPU Utilization: Design and implement mechanisms to aggressively optimize GPU and cluster utilization across thousands of devices for inference, training, data processing, and large-scale deployment of our state-of-the-art video generation models.
Develop Large-Scale AI Job Framework: Build highly scalable, reliable frameworks for launching and managing massive, heterogeneous compute jobs, including multi-modal high-volume data ingestion/processing, distributed model training, and continuous evaluation/benchmarking.
Enhance Observability: Develop world-class observability, tracing, and visualization tools for our compute cluster to ensure reliability and diagnose performance bottlenecks (e.g., memory, bandwidth, communication).
Accelerate Pipelines: Collaborate closely with AI researchers and AI engineers to integrate innovative acceleration techniques (e.g., custom CUDA kernels, distributed training libraries) into production-ready, scalable training and inference pipelines.
Infrastructure Management: Champion the adoption and optimization of modern cloud and container technologies (Kubernetes, Ray) for elastic, cost-efficient scaling of our distributed systems.
We are looking for a highly motivated engineer with deep experience operating and optimizing AI infrastructure at scale.
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
5+ years of full-time industry experience in large-scale MLOps, AI infrastructure, or HPC systems.
Experience with data frameworks and storage standards such as Ray, Apache Spark, and LanceDB.
Strong proficiency in Python and a high-performance language such as C++ for developing core infrastructure components.
Deep understanding and hands-on experience with modern orchestration and distributed computing frameworks such as Kubernetes and Ray.
Experience with core ML frameworks such as PyTorch, TensorFlow, or JAX.
Master's or PhD in Computer Science or a related technical field.
Demonstrated Tech Lead experience, driving projects from conceptual design through to production deployment across cross-functional teams.
Prior experience building infrastructure specifically for Generative AI models (e.g., diffusion models, GANs, or large language models) where cost and latency are critical.
Proven background in building and operating large-scale data infrastructure (e.g., Ray, Apache Spark) to manage petabytes of multi-modal data (video, audio, text).
HeyGen is an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Ready to apply?
Apply to HeyGen
Position Summary:
At HeyGen, we are at the forefront of developing applications powered by our cutting-edge AI research. As a Data Infrastructure Engineer, you will lead the development of fundamental data systems and infrastructure. These systems are essential for powering our innovative applications, including Avatar IV, Photo Avatar, Instant Avatar, Interactive Avatar, and Video Translation. Your role will be crucial in enhancing the efficiency and scalability of these systems, which are vital to HeyGen's success.