Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.
We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai; open-weights models like Mistral; and popular open-source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
Thinking Machines builds multimodal-first. We’re looking for new team members to advance the science of visual perception and multimodal learning. We think about how vision and language interact at scale. We design architectures that fuse pixels and text, build datasets and evaluation methods that test real-world comprehension, and develop representations that let models ground abstract concepts in the physical world. Our goal is to create multimodal systems that support seamless integration into real-world environments.
You’ll work at the intersection of visual understanding, multimodal reasoning, and large-scale model training. You’ll help develop the architectures, data, and evaluation tools that teach AI to see, understand, and collaborate. The ideal candidate is curious about multimodal interfaces, has experience running large-scale experiments, and is comfortable contributing to complex engineering systems. While we are looking for someone with expertise in multimodality, Thinking Machines Lab operates in a unified fashion and expects new hires to work across modalities as one team.
This role blends fundamental research and practical engineering; we do not distinguish between the two internally. You will be expected to write high-performance code and read technical reports. It’s an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.
Note: This is an "evergreen role" that we keep open on an ongoing basis to express interest in this research area. We receive many applications, and there may not always be an immediate role that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply even if you meet only some of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
Pre-training research sits at the core of our roadmap. This work advances the science of how large models learn from data. You’ll explore new pre-training methods, architectures, and learning objectives that make model training efficient, robust, and aligned with human goals.
Apply to Thinking Machines Lab
Pre-training data research sits at the core of our roadmap. This work blends research with large-scale data engineering to help assemble the pre-training datasets and data systems that underpin the next generation of AI models. You’ll design and implement methods for sourcing, curating, and analyzing pre-training data for quality and performance.
You’ll work with automated pipelines and human-in-the-loop processes, contributing both scientific insight and production-grade code. It’s ideal for someone who enjoys working at the intersection of data, machine learning, and systems, and who’s excited by the challenge of shaping frontier AI.
Apply to Thinking Machines Lab
Post-training research sits at the core of our roadmap. It is the critical bridge between raw model intelligence and a system that is actually useful, safe, and collaborative for humans.
Post-training data research sits at the intersection of human insight and machine learning. Our work combines human and synthetic data techniques, along with other innovative approaches, to capture the nuances of human behavior and use them to steer models. We research and model the mechanisms that create value for people, so that we can explain, predict, and optimize for human preferences, behaviors, and satisfaction. Our goal is to turn research ideas into data by scoping well-run data labeling and collection campaigns, and by understanding the science behind what makes data high quality and useful for training our models. We also develop and evaluate quantitative metrics that measure the success and impact of our data and training interventions.
Beyond execution, we explore new paradigms for human-AI interaction and scalable oversight, experimenting with how humans can best supervise, guide, and collaborate with models. It’s interdisciplinary work that blends research, data operations, and technical implementation to advance the frontier of aligned, human-centered AI systems.
Apply to Thinking Machines Lab
Thinking Machines builds multimodal-first. For us, there is no separate multimodal work: it’s at the core of everything we do, from the scientific goals we’re setting to the infrastructure we’re building. We’re looking for researchers to advance the frontier of audio capabilities. You’ll explore how audio models enable more natural and efficient communication and collaboration, preserving more information and better capturing user intent.
This is a highly collaborative role. You’ll work closely across pre-training, post-training, and product with world-class researchers, infrastructure engineers, and designers. This is an opportunity to shape the fundamental capabilities of AI systems that millions of people will use.
Apply to Thinking Machines Lab