Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.
We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
We're hiring a compensation partner to join our Operations team. You'll own the full spectrum of compensation at Thinking Machines – from building our philosophy and job architecture to partnering on day-to-day offer and pay decisions.
This role directly impacts our ability to hire and retain exceptional talent. We're looking for a compensation partner who brings a point of view, not just a process.
Minimum qualifications:
Preferred qualifications:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We're hiring a developer productivity engineer to advance how we build software internally: safely, quickly, and with delight. The main focus is AI tools and coding agents. You'll partner with platform, security, and product engineers to build state-of-the-art tooling for AI-assisted software development and make our inner loop dramatically faster.
The scope of this role includes both setting up company-wide platforms and working with developers to accelerate their individual workflows.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
We're looking for an engineer to help build our data infrastructure. You'll join a small, high-impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.
Infrastructure is critical to us: it's the bedrock that enables every breakthrough. You'll work directly with researchers to accelerate experiments, develop new datasets, improve infrastructure efficiency, and enable key insights across our data assets.
If you're excited by distributed systems, large-scale data mining, open-source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we'd love to hear from you.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
Tinker is our fine-tuning API that empowers researchers and developers to customize frontier AI to their needs — opening access to capabilities that were previously concentrated in a handful of labs. We manage the infrastructure while giving Tinkerers full flexibility to train open-weights models with their own data and algorithms, for their own needs. Tinker is rapidly adding new customers, features, and novel use cases. We're hiring to grow the platform alongside the Tinker community.
We're looking for a Site Reliability Engineer to drive the reliability of Tinker end to end. You'll work alongside the engineers building the platform and our research teams to make every layer of the system more robust and resilient.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
Thinking Machines builds multimodal-first. We’re looking for new team members to advance the science of visual perception and multimodal learning. We think about how vision and language interact at scale. We design architectures that fuse pixels and text, build datasets and evaluation methods that test real-world comprehension, and develop representations that let models ground abstract concepts in the physical world. Our goal is to create multimodal systems that support seamless integration into real-world environments.
You'll work at the intersection of visual understanding, multimodal reasoning, and large-scale model training. You'll help develop the architectures, data, and evaluation tools that teach AI to see, understand, and collaborate. The ideal candidate is curious about multimodal interfaces, has experience running large-scale experiments, and is comfortable contributing to complex engineering systems. While we're looking for someone with expertise in multimodality, Thinking Machines Lab operates in a unified fashion and expects new hires to work across modalities as one team.
This role blends fundamental research and practical engineering, as we do not distinguish between the two internally. You will be expected to write high-performance code and read technical reports. It's an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest in this research area. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
As a Research Product Manager (RPM) at Thinking Machines Lab, you'll play a central role in driving complex, high-impact technical products and programs that span research, infrastructure, and applied work. You'll help turn ambitious ideas into reality by driving cross-functional efforts, maintaining momentum across projects, and creating clarity in fast-moving, ambiguous environments.
Your work will connect people, ideas, and systems to keep our most important research initiatives aligned, well-scoped, and moving forward efficiently. This role is for someone who thrives in deeply technical discussions, understands the rhythm of research, and can move between high-level abstraction and the weeds, all in service of helping the company execute at scale.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
The role of pre-training researchers sits at the core of our roadmap. This work advances the science of how large models learn from data. You’ll explore new pre-training methods, architectures, and learning objectives that make model training efficient, robust, and aligned with human goals.
This role blends fundamental research and practical engineering, as we do not distinguish between the two internally. You will be expected to write high-performance code and read technical reports. It's an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest in this research area. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
The role of pre-training researchers sits at the core of our roadmap. This work blends research with large-scale data engineering to help assemble the pre-training datasets and data systems that underpin the next generation of AI models. You’ll design and implement methods for sourcing, curating, and analyzing pre-training data for quality and performance.
You’ll work with automated pipelines and human-in-the-loop processes, contributing both scientific insight and production-grade code. It’s ideal for someone who enjoys working at the intersection of data, machine learning, and systems, and who’s excited by the challenge of shaping frontier AI.
This role blends fundamental research and practical engineering, as we do not distinguish between the two internally. You will be expected to write high-performance code and read technical reports. It's an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest in this research area. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
The role of post-training researchers sits at the core of our roadmap. This is the critical bridge between raw model intelligence and a system that is actually useful, safe, and collaborative for humans.
Post-training data research sits at the intersection of human insight and machine learning. Our work combines human and synthetic data techniques, along with other innovative approaches, to capture the nuances of human behavior and use them to steer models. We research and model the mechanisms that create value for people, so we can explain, predict, and optimize for human preferences, behaviors, and satisfaction. Our goal is to turn research ideas into data: scoping well-run data labeling and collection campaigns, and understanding the science behind what makes data high quality and useful for training our models. We also develop and evaluate quantitative metrics that measure the success and impact of our data and training interventions.
Beyond execution, we explore new paradigms for human-AI interaction and scalable oversight, experimenting with how humans can best supervise, guide, and collaborate with models. It's interdisciplinary work that blends research, data operations, and technical implementation to advance the frontier of aligned, human-centered AI systems.
This role blends fundamental research and practical engineering, as we do not distinguish between the two internally. You will be expected to write high-performance code and read technical reports. It's an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest in this research area. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
The role of post-training researchers sits at the core of our roadmap. This is the critical bridge between raw model intelligence and a system that is actually useful, safe, and collaborative for humans.
This role blends fundamental research and practical engineering, as we do not distinguish between the two internally. You will be expected to write high-performance code and read technical reports. It's an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest in this research area. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
We're looking for an infrastructure research engineer to design and build the core systems that enable scalable, efficient training of large models for deployment and research. Your goal is to make experimentation and training at Thinking Machines fast and reliable, so our research teams can focus on science, not system bottlenecks.
This role is ideal for someone who blends deep systems and performance expertise with a curiosity for machine learning at scale. You’ll take ownership of the training stack end to end, ensuring every GPU cycle drives scientific progress.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
Tinker is our fine-tuning API that empowers researchers and developers to customize frontier AI to their needs — opening access to capabilities that were previously concentrated in a handful of labs. We manage the infrastructure while giving Tinkerers full flexibility to train open-weights models with their own data and algorithms, for their own needs. Tinker is rapidly adding new customers, features, and novel use cases. We're hiring to grow the platform alongside the Tinker community.
We're hiring a Forward Deployed Engineer to be the main point of contact for our customers, who range from individual hackers to major enterprises. You will diagnose their problems and needs and turn them into product improvements, owning both ends of this process.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
We’re looking for an infrastructure research engineer to design and build the core systems that enable scalable, efficient training of large models through reinforcement learning.
This role sits at the intersection of research and large-scale systems engineering: a builder who understands both the algorithms behind RL and the realities of distributed training and inference at scale. You’ll wear many hats, from optimizing rollout and reward pipelines to enhancing reliability, observability, and orchestration, collaborating closely with researchers and infra teams to make reinforcement learning stable, fast, and production-ready.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
We’re looking for an infrastructure research engineer to design and build the core systems that enable efficient large-scale model training with a focus on numerics. You will focus on improving the numerical foundations of our distributed training stack, from precision formats and kernel optimizations to communication frameworks that make training trillion-parameter models stable, scalable, and fast.
This role is ideal for someone who thrives at the intersection of research and systems engineering: a builder who understands both the math of optimization and the realities of distributed compute.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
We’re looking for an infrastructure research engineer to design, optimize, and maintain the compute foundations that power large-scale language model training. You will develop high-performance ML kernels (e.g., CUDA, CuTe, Triton), enable efficient low-precision arithmetic, and improve the distributed compute stack that makes training large models possible.
This role is perfect for an engineer who enjoys working close to the metal and across the research boundary. You’ll collaborate with researchers and systems architects to bridge algorithmic design with hardware efficiency. You’ll prototype new kernel implementations, profile performance across hardware generations, and help define the numerical and parallelism strategies that determine how we scale next-generation AI systems.
Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual openings for separate, project- or team-specific needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We’re looking for an infrastructure research engineer to design, optimize, and scale the systems that power large AI models. Your work will make inference faster, more cost-effective, more reliable, and more reproducible to enable our teams to focus on advancing model capabilities rather than managing bottlenecks.
Our focus is on performant and efficient model inference both to power real-world applications and to accelerate research. This role is responsible for the infrastructure that ensures every experiment, evaluation, and deployment runs smoothly at scale.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
Thinking Machines builds multimodal-first. For us, there is no separate multimodal work; it’s at the core of everything we do, from the scientific goals we’re setting to the infrastructure we’re building. We’re looking for researchers to advance the frontier of audio capabilities. You’ll explore how audio models enable more natural and efficient communication and collaboration, preserving more information and better capturing user intent.
This is a highly collaborative role. You’ll work closely across pre-training, post-training, and product with world-class researchers, infrastructure engineers, and designers. This is an opportunity to shape the fundamental capabilities of AI systems that millions of people will use.
This role blends fundamental research and practical engineering; we do not distinguish between the two internally. You will be expected to write high-performance code and read technical reports. It’s an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest in this research area. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We're hiring a Recruiting Coordinator to help us build a best-in-class recruiting engine as we scale. This is a coordination-heavy role, but the scope goes beyond scheduling: you'll own the systems and processes that keep the team organized and moving, and you'll be expected to improve them over time.
This means helping to build the processes, documentation, and systems that grow with us, not just executing the ones that already exist. Beyond scheduling, you'll plug into broader recruiting operations such as talent mapping, reporting, referral processing, and events, supporting the team wherever it needs help.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We’re looking for an infrastructure engineer to own and evolve the security infrastructure that underpins our foundation models. In this role, you’ll work across compute, storage, networking, and data platforms, making sure our systems are secure, reliable, and built to scale. You’ll shape controls, architecture, and tooling so that security is part of how the platform works by default. You’ll partner closely with research and product teams, enabling them to move quickly while keeping our models, data, and environments protected.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We are a small team of scientists, engineers, and builders who've created some of the most widely used AI products, including ChatGPT, Character.ai, and PyTorch. As we scale, some of the hardest challenges we face are about empowering and aligning our people and helping our managers make crucial decisions under uncertainty.
The new role of HR Business Partner combines two responsibilities: leadership coaching and people systems design. The two require different skill sets but a shared vision for managing talent.
You will coach managers at Thinking Machines Lab to be more effective leaders. You will provide strategic support on researcher and engineer performance, team dynamics, and personal growth. You will also build the people infrastructure that will scale this support as the company grows: performance and feedback systems, compensation structures, and career frameworks.
Minimum qualifications:
Preferred qualifications:
We encourage you to apply even if you don’t meet all preferred qualifications.
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
Tinker is our fine-tuning API that empowers researchers and developers to customize frontier AI to their needs — opening access to capabilities that have previously been concentrated in a handful of labs. We manage the infrastructure while giving Tinkerers full flexibility to train open-weights models with their own data and algorithms, for their own needs. Tinker is rapidly adding new customers, features, and novel use cases. We’re hiring to grow the platform alongside the Tinker community.
We're hiring a GTM Strategy & Operations lead to build the commercial engine for Tinker. You will craft the strategies, processes, and playbooks that turn organic adoption into a scalable, repeatable revenue engine. You'll drive how we package, price, launch, and sell fine-tuning capabilities across segments, working in close partnership with product, engineering, and research.
Tinker is a technical product sold to sophisticated buyers. The GTM lead needs to be credible in conversations about training infrastructure and fluent in how developers evaluate and adopt tools.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We're hiring an Engineering Manager to lead a team of senior and staff-level engineers across ML infrastructure and product. You will help the team build and scale systems that are reliable, performant, and easy to operate.
This role combines collaboration with hands-on work. You’ll partner with tech leads to set the technical direction for your team and own its execution. You should also be ready to go deep on system design and contribute directly when needed.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We're hiring an Executive Business Partner to support several technical leaders out of our San Francisco office. You will help our team stay focused and organized, managing personal logistics and catching any tasks that might otherwise fall through the cracks.
This is a non-traditional EA role, requiring creativity in adapting to different people’s work styles and to the new challenges that emerge at a fast-moving startup. The role entails real autonomy: you'll make decisions without tight supervision.
Minimum qualifications:
Preferred qualifications:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We’re hiring a designer to define how people experience our artificial intelligence. You will own aspects of the whole stack, from the interface used by our customers and developers to the model behavior itself. Along with our researchers and engineers, you will co-design AI model experiences that feel intuitive, empowering, and crafted with care.
You will need to combine your design instincts with an understanding of how cutting-edge AI is trained and evaluated. This role requires exceptional taste, conceptual thinking, and the technical skills to ship your work in code.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
As data partnerships lead at Thinking Machines Lab, you'll own the end-to-end pipeline of data procurement for frontier model training, from understanding what our research teams need, to sourcing and closing providers, to managing the quality and delivery of data. You will be the connective tissue between research, legal, and external vendors, making sure the right data reaches the right teams at the right time.
This role is ideal for a technical-leaning person who wants to get deep into the data world in service of an ambitious research agenda. You'll need to be comfortable context-switching between planning out the data needed for training runs and negotiating pricing with data vendors. Over time, you'll build the repeatable, scalable processes that allow our data operations to keep pace with the speed of our research.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
Tinker is our fine-tuning API that empowers researchers and developers to customize frontier AI to their needs — opening access to capabilities that have previously been concentrated in a handful of labs. We manage the infrastructure while giving Tinkerers full flexibility to train open-weights models with their own data and algorithms, for their own needs. Tinker is rapidly adding new customers, features, and novel use cases. We’re hiring to grow the platform alongside the Tinker community.
We're looking for a full stack engineer to build and ship the products and services that Tinker users interact with every day. You'll work across frontend, backend, and infrastructure, building the Tinker console, developer tools, and whatever Tinker needs most.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
Tinker is our fine-tuning API that empowers researchers and developers to customize frontier AI to their needs — opening access to capabilities that have previously been concentrated in a handful of labs. We manage the infrastructure while giving Tinkerers full flexibility to train open-weights models with their own data and algorithms, for their own needs. Tinker is rapidly adding new customers, features, and novel use cases. We’re hiring to grow the platform alongside the Tinker community.
We're looking for a software engineer to own the platform systems that enable Tinker — billing and usage metering, permissions and access control, organizations and teams, data exports, audit logging, and the admin surfaces that tie them together. This role partners with everyone from product to legal, as every new feature, pricing change, and enterprise deal flows through your work.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We’re looking for a full stack engineer to build and ship products from prototype to scale and to maintain tools that accelerate research and product teams. You’ll work across frontend and backend components, and contribute to reliability, observability, and security in production.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We’re looking for a software engineer focused on making our products secure by default while supporting fast and ambitious product iteration. You’ll embed with product and research teams to bake security into design and development and to build tooling and automation that keep systems safe at scale.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We’re looking for an engineer to design, build, and operate the GPU supercomputing environment that powers large-scale training and inference. You will deliver high-performance, reliable, and cost-efficient compute so our users and researchers can move fast at scale.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We’re looking for generalist infrastructure and systems engineers to help build the systems that power our foundation models, and to support our research and product teams as they create those models and ship the products powered by them.
You'll join a small, high-impact team responsible for architecting and scaling the core infrastructure behind everything we do. You’ll work across the full technical stack, solving complex distributed systems problems and building robust, scalable platforms.
Infrastructure is critical to us: it's the bedrock that enables every breakthrough. You'll work directly with researchers to accelerate experiments, improve infrastructure efficiency, and enable key insights across our models, products, and data assets.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
We interview generally, but during project selection we’ll take into account your interests and experience alongside organizational needs. This flexible approach allows us to match talented engineers with the infrastructure teams where they'll have the greatest impact and growth potential.
Here are example areas you may contribute to depending on your area of expertise and interest:
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We are hiring our first technical sourcer to ensure that Thinking Machines has a pipeline of exceptional candidates for technical roles. You will develop a sourcing strategy that meets two challenges: identifying and tracking the best talent, and getting them interested in our company. This role requires nurturing relationships, understanding technical fields, and staying focused on the long game.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab