Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.
We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.
We are hiring a developer productivity engineer to advance how we build software internally: safely, quickly, and with delight. The main focus is AI tools and coding agents. You'll partner with platform, security, and product engineers to build state-of-the-art tooling for AI-assisted software development, and make our inner loop dramatically faster.
The scope of this role includes both setting up company-wide platforms and working with developers to accelerate their individual workflows.
Note: This is an "evergreen role" that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate role that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. You may also see postings for individual roles tied to specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
Ready to apply?
Apply to Thinking Machines Lab
We’re looking for an engineer to join us and contribute to our data infrastructure. You'll join a small, high-impact team responsible for architecting and scaling the core infrastructure behind distributed training pipelines, multimodal data catalogs, and intelligent processing systems that operate over petabytes of data.
Infrastructure is critical to us: it's the bedrock that enables every breakthrough. You'll work directly with researchers to accelerate experiments, develop new datasets, improve infrastructure efficiency, and enable key insights across our data assets.
If you're excited by distributed systems, large-scale data mining, open-source tools like Spark, Kafka, Beam, Ray, and Delta Lake, and enjoy building from the ground up, we'd love to hear from you.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
Tinker is our fine-tuning API that empowers researchers and developers to customize frontier AI to their needs — opening access to capabilities that have previously been concentrated in a handful of labs. We manage the infrastructure while giving Tinkerers full flexibility to train open-weights models with their own data and algorithms, for their own needs. Tinker is rapidly adding new customers, features, and novel use cases. We’re hiring to grow the platform alongside the Tinker community.
We're looking for a Site Reliability Engineer to drive the reliability of Tinker end-to-end. You'll work alongside the engineers building the platform, as well as our research teams, to make every layer of the system more robust and resilient.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
We're hiring a Forward Deployed Engineer to be the main point of contact with our customers, which range from individual hackers to major enterprises. You will diagnose their problems and needs and turn those into product improvements, taking ownership of both ends of this process.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
We're looking for a full stack engineer to build and ship the products and services that Tinker users interact with every day. You'll work across frontend, backend, and infrastructure, building the Tinker console, developer tools, and whatever Tinker needs most.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
We're looking for a software engineer to own the platform systems that enable Tinker — billing and usage metering, permissions and access control, organizations and teams, data exports, audit logging, and the admin surfaces that tie them together. This role partners with everyone from product to legal, as every new feature, pricing change, and enterprise deal flows through your work.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
We’re looking for a full stack engineer to build and ship products from prototype to scale and to maintain tools that accelerate research and product teams. You’ll work across frontend and backend components, and contribute to reliability, observability, and security in production.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
We’re looking for a software engineer focused on making our products secure by default while supporting fast and ambitious product iteration. You’ll embed with product and research teams to bake security into design and development and to build tooling and automation that keep systems safe at scale.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
We’re looking for an engineer to design, build, and operate the GPU supercomputing environment that powers large-scale training and inference. You will deliver high-performance, reliable, and cost-efficient compute so our users and researchers can move fast at scale.
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these:
We’re looking for generalist infrastructure and systems engineers to help build the systems that power our foundation models and that enable our research and product teams to create those models and ship the products they power.
You'll join a small, high-impact team responsible for architecting and scaling the core infrastructure behind everything we do. You’ll work across the full technical stack, solving complex distributed systems problems and building robust, scalable platforms.
Infrastructure is critical to us: it's the bedrock that enables every breakthrough. You'll work directly with researchers to accelerate experiments, improve infrastructure efficiency, and enable key insights across our models, products, and data assets.
We interview generally; during project selection we’ll take your interests and experience into account alongside organizational needs. This flexible approach lets us match talented engineers with the infrastructure teams where they'll have the greatest impact and growth potential.
Here are example areas you may contribute to depending on your area of expertise and interest:
Minimum qualifications:
Preferred qualifications — we encourage you to apply if you meet some but not all of these: