Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic is bringing Claude to democracies worldwide, and we’re looking for startup-minded engineers to build out our public sector engineering team. Engineers in this role will be working across the stack to scale our products from 0 to 1 and beyond.
While we’re looking for candidates with prior public sector experience, we are also excited about hiring product-minded engineers with a track record of building and scaling products. There’s no shortage of things to build, so startup experience will help in maintaining a sharp focus and plotting an optimal path forward. This role is full-stack and entrepreneurial: you’ll own projects such as building out deployments for new government customers, implementing products to better surface critical information, and working directly with customers to understand how we can best leverage and adapt Claude to power their workflows.
You’d be joining at an exciting inflection point where we are evolving past deployments of basic Claude access for the public sector and toward cutting-edge AI capabilities and a comprehensive public sector experience. The projects the team tackles enable governments to deploy AI applications for their most critical operations—from national security to public services. As the team continues to grow, our focus includes delivering solutions that directly influence how democratic institutions deploy and use advanced AI systems.
Build specialized AI applications and products for governments by developing deep understanding of their processes and workflows
Take ownership of architecting new deployments and designing public sector-specific features
Collaborate closely with research on AI models for government applications and workflows
Partner directly with government customers and internal go-to-market teams to translate requirements into technical and product roadmaps
Design and build scalable systems for deployment, user management, and administrative controls
Create clarity and technical direction in a fast-moving environment with unique constraints
Have 8+ years of experience as a full-stack software engineer
Experience integrating and working with AI/ML models and understanding their capabilities
Strong technical background with proven success building and shipping enterprise or government-grade products
Have excellent collaboration skills, with the ability to work effectively across functions
Startup experience, particularly in scaling products from zero to one
Experience partnering with sales, customer success, and professional services teams to drive product adoption
Experience delivering software to government networks
Active federal security clearance (Secret or above)
This position requires verification of U.S. citizenship due to citizenship-based legal restrictions. Specifically, this position supports a United States federal, state, and/or local government agency customer and is subject to certain citizenship-based restrictions where required or permitted by applicable law. To meet this legal requirement, citizenship will be verified via a valid passport, other approved documents, or a verified US government clearance.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
A growing ecosystem of partners is building products on top of Claude. We're hiring an Engineering Manager to expand our Marketplace Platform. You'll shape the platform capabilities of a product surface that spans customer-facing discovery, partner-facing self-serve tooling, the business logic that orchestrates deals between them, and the authentication layer that lets users move seamlessly between Claude and partner products. This is a 0→1 role with a broad mandate and tight integration seams across several other platform teams.
The team owns a new product layer within the Platform organization. Its scope spans the customer-facing surfaces where enterprises engage with the broader Claude ecosystem, the partner-facing surfaces that support companies building on Anthropic's platform, and the workflows and orchestration that sit between them. The team builds on top of identity, tenancy, authorization, and billing primitives owned by other platform teams rather than rebuilding them — a meaningful part of the role is making those seams clean and durable as the product grows.
As Claude reaches more people — individuals, businesses, developers building on our APIs, and platforms building on top of those — knowing who's on the other side of a request becomes foundational to running our products responsibly. Passport is the team building that capability: a centralized identity, verification, and trust-grant platform.
We're hiring an Engineering Manager to build our Passport team. You'll expand the early team, define the architecture, and ship the first version of a platform that touches Claude.ai, our first-party APIs, and the third-party platforms built on top of them. This is a role with an unusually broad surface area: identity verification flows, a centralized trust graph, integration with our Safeguards organization, and a user-facing portal — all on the critical path for how Anthropic scales access to Claude safely.
Passport will own the customer verification and trust stack across Anthropic's products. The team's scope includes the entry flows customers go through to establish identity, a centralized model for coordinating and managing access, integration with review processes, and the user-facing surfaces customers use to manage their trust across connected products. The team sits close to the product surfaces it serves and works closely with our safeguards, platform, and product organizations.
We are looking for experienced software engineers to join our Platform organization. We build the foundational primitives that accelerate product development across Anthropic, and we own the infrastructure and systems that teams depend on to ship reliably and at scale, whether used internally or by hundreds of thousands of external users and companies worldwide, at all stages.
You'll independently scope complex, multi-month projects, drive cross-org alignment through ambiguous problem spaces, and make architectural decisions that shape how Anthropic builds and scales its products. You'll partner directly with research to productize cutting-edge capabilities, and will have lasting impact on the platform that hundreds of thousands of companies and internal/external engineers depend on every day.
We have multiple teams that are currently hiring. Team placement occurs after the interview process, taking into account your interests and experience alongside organizational needs. This flexible approach allows us to match talented engineers with the backend product efforts where they'll have the greatest impact and growth potential.
Platform Acceleration: We work on maximizing the developer productivity of product engineers at Anthropic. You'll architect and optimize the critical development infrastructure that powers our AI product development, including dev environments, observability, and CI/CD pipelines. You’ll partner closely with product teams to understand their development workflow and eliminate friction points. Your work will have an extraordinary multiplier effect, enhancing productivity across our entire product organization and accelerating our mission.
Service Infra: We build and maintain the core infrastructure that powers Anthropic's engineering organization, from service mesh and observability systems to deployment pipelines and shared libraries. Our work enables product teams to build and operate reliable services at scale, making us a critical force multiplier across the entire company.
Multicloud: We build and maintain the infrastructure that enables Anthropic to operate across multiple cloud providers. We focus on cloud-agnostic tooling, cross-cloud networking, and multi-region deployments.
Auth & Identity: We build and maintain the critical infrastructure that powers identity and authentication across Anthropic's product suite. We work closely with product teams, security, support, and trust & safety as customers. We create scalable solutions for user authentication, authorization, role-based access control, and single sign-on that form the backbone of our company's identity management operations. We maintain a user-centric approach, building reliable systems that our users and company can depend on as we tackle complex challenges at the intersection of security, scalability, and user experience.
Connectivity: Our mission is to make Claude the most connected AI. We own the MCP proxy that routes every tool call and the OAuth and token management that keeps connections authenticated. We're also the core contributors to the MCP spec — now an open standard under the Linux Foundation — and maintain the official Python and TypeScript SDKs. You'll work on problems where reliability and enterprise trust are the bar: token refresh at scale, admin controls that let IT govern what agents can do, proxy infrastructure that stays up when partner servers don't. We ship for claude.ai, Claude Code, Cowork, and the API. Relevant experience includes OAuth, API gateways, multi-tenant platforms, building for enterprise, and MCP. (A minimal, purely illustrative token-refresh sketch follows these team descriptions.)
API Distributability: The Claude API today is a rapidly growing platform serving developers and enterprises at scale—but reaching the next tier of enterprise customers requires transforming how and where we deploy it. The Distributability team owns that transformation: making the Claude API a cloud-native, managed product that runs wherever our customers need it, cross-cloud and on Anthropic's own infrastructure, with the enterprise-grade security, compliance, and operational capabilities to support it.
Platform Intelligence: We build the training systems that adapt Claude to specific customer workloads. The core problem is task-specific adaptation: getting the right intelligence, cost, and latency profile for a particular use case, and building toward systems where that adaptation can deepen as the customer's usage grows. We work closely with research on training methods and with agent platform teams on data paths. Relevant experience: ML training infra, production ML pipelines, backend engineering. Finetuning experience is a plus.
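To make one of these problem spaces concrete, here is a minimal, purely illustrative sketch of the kind of proactive token refresh the Connectivity description mentions. All names (Token, TokenStore, refresh_access_token) are hypothetical; this is not Anthropic's implementation or the MCP SDKs' API, only the shape of the problem: hand callers a valid credential and refresh it a little before expiry so a long-running tool call never goes out with a stale token.

```python
# Illustrative only: hypothetical names, not Anthropic's or the MCP SDKs' code.
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class Token:
    access_token: str
    expires_at: float  # unix timestamp


def refresh_access_token(refresh_token: str) -> Token:
    # Stand-in for a call to the provider's OAuth token endpoint.
    return Token(access_token=f"at-{int(time.time())}", expires_at=time.time() + 3600)


class TokenStore:
    """Return a valid token, refreshing shortly before expiry so callers
    never present a stale credential mid-flight."""

    def __init__(self, refresh_token: str, skew_seconds: int = 300):
        self.refresh_token = refresh_token
        self.skew_seconds = skew_seconds
        self._token: Optional[Token] = None

    def get(self) -> Token:
        now = time.time()
        if self._token is None or now >= self._token.expires_at - self.skew_seconds:
            self._token = refresh_access_token(self.refresh_token)
        return self._token


if __name__ == "__main__":
    store = TokenStore("rt-example")
    print(store.get().access_token)  # refreshed lazily, cached until near expiry
```

In practice the hard parts sit around this core (durable storage, concurrency, and failure handling when a partner's token endpoint is down), which is what "token refresh at scale" points at.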
Deadline to apply: None. Applications will be reviewed on a rolling basis.
Location Preference: Preference will be given to candidates based in NY, SEA, SF, or the Bay Area, given the current location of the team.
Anthropic exists to ensure the world safely navigates the transition through transformative AI. That work is funded by commercial revenue — and every dollar of it runs through Billing Platform. We build and operate the infrastructure that turns product usage into revenue across everything Anthropic ships: Claude.ai, the Developer Platform, Claude Code, and whatever comes next. The faster we can price it, charge for it, and account for it correctly, the faster Anthropic can fund the mission.
We’re looking for a software engineer to join Billing. Depending on where you land, you might be building the integration contracts product teams plug into so they can launch a paid SKU without becoming billing experts, automating the quote-to-cash lifecycle so enterprise deals don’t bottleneck on manual ops, or building the reconciliation and data foundations that keep every revenue number trustworthy at audit time. This is production-critical infrastructure where correctness is non-negotiable — a rounding error here is a revenue discrepancy somewhere else — and you’ll work closely with Finance, Product, Sales, and Accounting to build systems that scale with the business.
Design and ship billing capabilities end to end — pricing primitives, payment flows, contract and entitlement models, revenue data pipelines — depending on the surface area you own.
Build for correctness first. Billing data is financial data; you’ll develop the instincts for where idempotency, reconciliation, and audit trails matter and where they don’t (a minimal illustrative sketch follows this list).
Own the interfaces. Whether it’s the billing API a product team integrates against or the revenue dataset Finance closes the books on, you’ll design contracts that hide the complexity and hold up under change.
Make build-vs-buy calls. We lean heavily on third-party billing, payment, and tax platforms, and you’ll decide where to extend them and where to build our own primitives around them.
Own the payments stack. Authorization rates and retry logic so a good card doesn’t silently decline, processing cost as a real number you drive down, automated dispute response, and fraud controls that stop bad actors without blocking real customers.
Drive down the manual work. Enterprise contract amendments, pricing config changes, billing support escalations — a lot of it is still human-in-the-loop, and closing that gap is a direct line to scale.
Operate what you ship. You’ll own SLOs on customer-facing flows, instrument the revenue pipeline so discrepancies surface before month-end close, and be on the hook when it pages.
Partner across the company. Growth wants promotional mechanics, Sales wants quote flexibility, Finance wants GAAP-clean data, product teams want it yesterday — you’ll help thread that needle.
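As a purely illustrative sketch of the idempotency and audit-trail concern mentioned in the list above: the snippet below uses hypothetical names (Charge, LedgerStore, create_charge) and is not Anthropic's billing implementation. It shows the core invariant only: a retried request carrying the same idempotency key returns the original charge instead of billing twice, and every outcome lands in an append-only trail.

```python
# Illustrative only: hypothetical names, not Anthropic's billing code.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Charge:
    idempotency_key: str
    customer_id: str
    amount_cents: int
    created_at: str


@dataclass
class LedgerStore:
    charges: dict = field(default_factory=dict)    # idempotency_key -> Charge
    audit_log: list = field(default_factory=list)  # append-only trail

    def create_charge(self, idempotency_key: str, customer_id: str, amount_cents: int) -> Charge:
        # A replayed request returns the original charge rather than double-billing.
        if idempotency_key in self.charges:
            self.audit_log.append(("replayed", idempotency_key))
            return self.charges[idempotency_key]
        charge = Charge(idempotency_key, customer_id, amount_cents,
                        datetime.now(timezone.utc).isoformat())
        self.charges[idempotency_key] = charge
        self.audit_log.append(("created", idempotency_key))
        return charge


if __name__ == "__main__":
    store = LedgerStore()
    first = store.create_charge("req-123", "cust-42", 2_000)
    retry = store.create_charge("req-123", "cust-42", 2_000)  # network retry
    assert first is retry  # one charge, two requests
    print(store.audit_log)
```

In production the same idea lives in durable storage, with reconciliation jobs comparing the ledger against processor reports; the sketch captures only the invariant that makes those reconciliations possible.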
Have 8+ years building backend or platform systems in production, including 3+ years leading large-scale, complex projects or teams as an engineer or tech lead
Have a track record of shipping in fast-moving environments where the requirements shift under you and you still land something correct and maintainable.
Care about correctness and data integrity as first-class design concerns, not afterthoughts — and know how to build systems that are easy to reason about when money is on the line.
Enjoy cross-functional work. Billing sits between Product, Finance, Sales, and Legal, and a lot of the job is translating between those worlds.
Built billing, payments, subscription, or fintech systems before — and know firsthand how a small pricing change ripples through the whole stack.
Direct experience integrating or extending third-party billing and payment platforms, and opinions on where they fall short.
Worked on revenue reconciliation, financial reporting pipelines, or systems that had to survive an audit.
Experience with enterprise commerce — CPQ, quote-to-cash, contract management, or the operational side of B2B billing.
Owned payments optimization in a self-serve or PLG environment — authorization rates, payment cost, fraud, chargebacks — and know what it takes to move those numbers without breaking the good customers.
Claude.ai is Anthropic's flagship consumer product, spanning web, mobile, desktop, and browser extensions—and it's growing fast. The UI Platform team owns the foundational layer that makes all of this possible: the shared infrastructure, components, APIs, and tooling that let every other product team at Anthropic ship quickly, reliably, and with confidence.
As a Software Engineer on this team, you'll be hands-on building the platform that other engineers depend on every day. This means designing and shipping shared components and design-system-level abstractions, evolving the backend-for-frontend (BFF) APIs that power our client applications, and improving the build, deploy, and observability systems that keep Claude.ai running smoothly across surfaces. Your work will directly shape how fast teams across Anthropic can innovate—and how stable their experiences are when they do.
This is a great fit if you care deeply about developer experience and want your engineering work to have outsized leverage: instead of shipping one feature, you're building the tools and systems that make dozens of features possible.
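As a loose, purely illustrative sketch of what a backend-for-frontend endpoint does, the snippet below composes two hypothetical upstream calls (fetch_profile, fetch_conversations) into one client-shaped payload. It is not Claude.ai's actual API or code; it only shows the general pattern the paragraph above refers to.

```python
# Illustrative only: hypothetical services and names, not Claude.ai's code.
from dataclasses import dataclass


@dataclass
class Profile:
    user_id: str
    display_name: str


@dataclass
class ConversationSummary:
    conversation_id: str
    title: str


def fetch_profile(user_id: str) -> Profile:
    # Stand-in for a call to an upstream identity service.
    return Profile(user_id=user_id, display_name="Ada")


def fetch_conversations(user_id: str, limit: int) -> list[ConversationSummary]:
    # Stand-in for a call to an upstream conversations service.
    return [ConversationSummary(f"conv-{i}", f"Chat {i}") for i in range(limit)]


def conversation_page(user_id: str) -> dict:
    """Return exactly the payload a specific client surface renders, so the
    client makes one request and stays decoupled from upstream shapes."""
    profile = fetch_profile(user_id)
    recent = fetch_conversations(user_id, limit=3)
    return {
        "header": {"name": profile.display_name},
        "conversations": [{"id": c.conversation_id, "title": c.title} for c in recent],
    }


if __name__ == "__main__":
    print(conversation_page("user-1"))
```

The design choice the pattern encodes: each client surface gets one request shaped for its screen, and upstream services can evolve behind the BFF without breaking every surface at once.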
UX research at Anthropic is a small team with outsized leverage. We work across products used by millions of people—and on interactions that don't have established conventions yet.
We inform critical decisions through rigorous research, with stakeholders who eagerly incorporate findings into their work. Research has a seat at the table for product strategy, and we partner closely with design, product, and engineering. We combine qualitative and quantitative methods to understand user needs in a space that's evolving faster than anyone can track.
We move fast, favor impact over polish, and increasingly build our own tools—often with AI doing the heavy lifting—to do research at the speed this environment demands.
This is a senior individual-contributor role. You'll own research for a complex, fast-moving product area where the surface is expanding quickly and the most important user questions haven't been articulated yet, let alone answered.
You won't work from a fixed research roadmap. You'll prioritize constantly—making clear, well-informed bets about what to learn next, executing on them fast, and pivoting the moment the picture changes. Our product teams are deeply technical and move on instinct; your job is to make that instinct better-calibrated without slowing it down.
Roles here are fluid. On any given week you might be the researcher, the analyst, the one framing the next product bet, or the one building a prototype because that's the fastest way to find out.
Where this role lands depends on team needs and your strengths—we'll figure that out together.
We're looking for seasoned iOS engineers to join our Claude mobile product team and help build apps that harness the transformative power of advanced language models. Our mission is to unlock the potential of advanced AI through elegant, user-friendly mobile applications that put unprecedented capabilities at users' fingertips. You will work with a talented group of engineers, researchers, designers, and product managers to design and implement key components of our products. Join us in this exciting mission to transform how people engage with technology and unlock new realms of human potential.
We're looking for seasoned Android engineers to join our Claude mobile team and help build apps that harness the transformative power of advanced language models. Our mission is to unlock the potential of advanced AI through elegant, user-friendly mobile applications that put unprecedented capabilities at users' fingertips. You will work with a talented group of engineers, researchers, designers, and product managers to design and implement key components of our products. Join us in this exciting mission to transform how people engage with technology and unlock new realms of human potential.
Anthropic's Verticals team builds AI products purpose-built for specific industries—financial services, life sciences, healthcare, and legal. Most of these teams are being built 0→1 right now: you'll be forming the team, defining the product, and shipping the first version in markets where no one has done this well yet. Where we're further along, products are already live with enterprise customers and growing fast.
We're hiring Engineering Managers to lead the teams building Claude for Financial Services, Life Sciences, Healthcare, and Legal. You'll lead a team shipping AI into professional workflows—owning execution, working directly with customers and go-to-market, and helping shape where the broader Verticals group goes next.
We're hiring for all four verticals through this posting. Team placement happens during the interview process based on your background, interests, and organizational need—if you have deep experience in one of these domains, let us know in your application.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Anthropic’s Product Engineering org is looking for experienced engineers with strong full-stack fundamentals to join one of the teams owning our current or emerging products. You'll be a technical leader who thinks holistically about the consumer and/or enterprise end-user experience - be that via Claude.ai, the Anthropic API, enterprise deployments, Claude Code, or mission-driven applications - and carry real end-to-end ownership.
You'll partner with engineering managers, product leaders, designers, and researchers to understand new model capabilities, redefine what is possible for users in the world of LLMs, and figure out how to build it. We'll look to you to bring a product-oriented mindset, own technical quality across the stack (performance, accessibility, reliability, and developer experience), scale your work to millions of users worldwide, and carry genuine excitement about what AI makes possible.
We have multiple teams that are currently hiring. Team placement occurs after the interview process, taking into account your interests and experience alongside organizational needs. This flexible approach allows us to match talented engineers with the product efforts where they'll have the greatest impact and growth potential.
Developer Experience: You'll build the products and tools - console, SDKs, docs, and observability - that give developers the confidence to rely on Claude in their most critical applications. Our mission is to meet developers where they are and bring them closer to the ceiling of what's possible with our models and API. That means building trust through great platform fundamentals, inspiring developers to raise the bar in how they use Claude's capabilities, and accelerating the path to agentic development by designing for both human and AI developers. You'll work closely with teams across the Platform org to turn Claude's newest capabilities into experiences developers can actually pick up and run with - from first prompt to production agent.
Beneficial Deployments: Beneficial Deployments Engineering brings Claude to organizations doing the most good with the fewest resources — nonprofits, schools, healthcare providers, researchers, and economic mobility programs. We're looking for a full-stack engineer to build the access programs, tooling, and product work that make frontier AI usable for teams that couldn't afford this capacity any other way.
Enterprise AI Products: You'll work on the products that make Claude a daily-use tool for enterprise customers across industries – the connective tissue that lets Claude operate effectively across workflows. On this team, you'll systematically understand why enterprise users aren't activating, identify what's blocking adoption, and build the capabilities to close those gaps. Some of our focus areas include:
Vertical AI Products: Purpose-built experiences for specific industries where Claude can transform complex professional work. We're currently building for three verticals, with more to come:
Enterprise Foundations: You'll build the systems large organizations require before they can adopt Claude at scale: identity and permissions, security and compliance controls, and the admin analytics that let them see how it's being used. This is the work that turns "we love the demo" into a signed enterprise deal. The role is part product, part platform. You'll work closely with Product and GTM to understand what our largest customers need, then build it once in a way that works across Claude.ai, Claude Code, and Cowork.
Growth: Drive user acquisition, engagement, retention, and monetization through data-driven strategies and technical implementations. At Anthropic, we're not just building AI tools; we're rethinking how people discover, adopt, and get value from AI. As a member of the growth team, you will have a unique opportunity to shape our growth strategy. You will work with a cross-functional team of engineers, data scientists, marketers, and product managers to design, implement, and optimize growth initiatives that scale our AI-powered tools and maximize their impact.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
Location Preference: Preference will be given to candidates based in NY, SEA, SF, or the Bay Area, given the current location of our teams.
Design at Anthropic sits at the intersection of craft, research, and product intuition. We're a small team working on products that millions of people use daily—and on interactions that don't have established conventions yet.
Our work shapes how people experience AI: whether Claude feels like a tool or a collaborator, whether it earns trust or erodes it. We partner closely with engineers and researchers, often designing around capabilities that are emerging in real-time. That means staying close to the models, prototyping rapidly, and being comfortable with ambiguity.
We care deeply about craft—the details that make something feel polished and trustworthy—but we ship fast and learn in the open. We'd rather get something in front of users and iterate than wait for perfection.
Read more here about the types of features we build.
Specific projects and responsibilities depend on team staffing, which we'll discuss once we've seen more of your work and understood your strengths and interests. A few examples across teams:
Anthropic’s Product Engineering org is looking for experienced engineers with strong full-stack fundamentals to join one of the teams owning our current or emerging products. You'll be a technical leader who thinks holistically about the consumer and/or enterprise end-user experience - be that via Claude.ai, the Anthropic API, enterprise deployments, Claude Code, or mission-driven applications - and carry real end-to-end ownership.
You'll partner with engineering managers, product leaders, designers, and researchers to understand new model capabilities, redefine what is possible for users in the world of LLMs, and figure out how to build it. We'll look to you to bring a product-oriented mindset, own technical quality across the stack (performance, accessibility, reliability, and developer experience), scale your work to millions of users worldwide, and carry genuine excitement about what AI makes possible.
We have multiple teams that are currently hiring. Team placement occurs after the interview process, taking into account your interests and experience alongside organizational needs. This flexible approach allows us to match talented engineers with the product efforts where they'll have the greatest impact and growth potential.
Developer Experience: You'll build the products and tools - console, SDKs, docs, and observability - that give developers the confidence to rely on Claude in their most critical applications. Our mission is to meet developers where they are and bring them closer to the ceiling of what's possible with our models and API. That means building trust through great platform fundamentals, inspiring developers to raise the bar in how they use Claude's capabilities, and accelerating the path to agentic development by designing for both human and AI developers. You'll work closely with teams across the Platform org to turn Claude's newest capabilities into experiences developers can actually pick up and run with - from first prompt to production agent.
Beneficial Deployments: Beneficial Deployments Engineering brings Claude to organizations doing the most good with the fewest resources — nonprofits, schools, healthcare providers, researchers, and economic mobility programs. We're looking for a full-stack engineer to build the access programs, tooling, and product work that make frontier AI usable for teams that couldn't afford this capacity any other way.
Enterprise AI Products: You'll work on the products that make Claude a daily-use tool for enterprise customers across industries – the connective tissue that lets Claude operate effectively across workflows. On this team, you'll systematically understand why enterprise users aren't activating, identify what's blocking adoption, and build the capabilities to close those gaps. Some of our focus areas include:
Vertical AI Products: Purpose-built experiences for specific industries where Claude can transform complex professional work. We're currently building for three verticals, with more to come:
Enterprise Foundations: You'll build the systems large organizations require before they can adopt Claude at scale: identity and permissions, security and compliance controls, and the admin analytics that let them see how it's being used. This is the work that turns "we love the demo" into a signed enterprise deal. The role is part product, part platform. You'll work closely with Product and GTM to understand what our largest customers need, then build it once in a way that works across Claude.ai, Claude Code, and Cowork.
Growth: Drive user acquisition, engagement, retention, and monetization through data-driven strategies and technical implementations. At Anthropic, we're not just building AI tools; we're rethinking how people discover, adopt, and get value from AI. As a member of the growth team, you will have a unique opportunity to shape our growth strategy. You will work with a cross-functional team of engineers, data scientists, marketers, and product managers to design, implement, and optimize growth initiatives that scale our AI-powered tools and maximize their impact.
Marketplace: We build the platform that connects Claude-powered enterprise tools through technology partnerships and deeper customer relationships. We work closely with business development, sales, product, and GTM teams, creating the infrastructure that powers partner onboarding, customer storefronts, and transaction and entitlement flows at scale. We're building the technical scaffolding for a new offering, tackling the challenges at the intersection of commercial motions, platform architecture, and partner integrations so that enterprises, platforms, and Anthropic can transact with confidence.
Passport: We're building the identity and verification product layer that enables safe model launches as Claude's capabilities expand. This critical effort partners with Safeguards, Auth & Identity, Policy, and Product teams across API, Claude.ai, and third-party platforms as customers, creating the systems for KYC/KYB, trust grant issuance and inheritance, and end-user verification that flow across every Anthropic surface. We sit at the intersection of trust, compliance, and product velocity, delivering the verification primitives that let Anthropic ship advanced model capabilities to the right users at massive scale.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
Location Preference: Preference will be given to candidates based in NY, NJ, SEA, SF, or the Bay Area, given the current location of the team.
Anthropic is looking for experienced, product-minded engineers to own the backend systems that power user experiences across our API, Claude Code, and Claude.ai. You'll independently scope complex, multi-month projects through ambiguous problem spaces and lead peers through technical and product decisions; you'll drive alignment with product, peer engineering teams, and research to identify capability gaps and translate frontier model improvements into shipped products. You'll make architectural decisions that affect the reliability and scalability of systems serving hundreds of thousands of global users (including internal teams), and design processes that help your team operate effectively and never fail the same way twice - all while staying hands-on with the code and our models.
We have multiple teams that are currently hiring. Team placement occurs after the interview process, taking into account your interests and experience alongside organizational needs. This flexible approach allows us to match talented engineers with the backend product efforts where they'll have the greatest impact and growth potential:
API Core: You'll build and scale the foundation of the Claude API—the systems that deliver Claude's intelligence to every developer, from startups to enterprise. You'll own the performance, reliability, and efficiency of our core serving path, ensuring users get the most speed and value from our models. You'll partner closely with inference and safeguards to optimize the full stack.
API Capabilities: You'll bring frontier model capabilities to developers through the Claude API, owning core features like vision, tool use, and computer use. You'll launch new models and ship the primitives that make Claude more capable with every release. You'll partner directly with research and inference to productionize what's next.
API Knowledge: You'll focus on transforming Claude into a true knowledge worker by ensuring the model has access to and understanding of the right knowledge at the right time. You'll work on making it possible for developers to securely give Claude access to their data while automatically processing and retrieving relevant information. You'll partner directly with research to bring state-of-the-art retrieval advancements to developers.
Developer Experience: You'll focus on building products and tools that enable developers to harness the full power of LLMs to create successful, reliable, and groundbreaking applications with ease. You'll build the tools that accelerate developers from idea to deployment. You'll help figure out how to leverage Claude to improve developers' usage of the API, such as generating and evaluating prompts, while collaborating closely with the teams above to bring Claude's current and future capabilities to developers.
API Agents: You'll focus on building the infrastructure and APIs that enable developers to create powerful agentic applications within the Claude API. You'll help developers with agent orchestration through capabilities like tool use, multi-step reasoning, and long-running task execution that allow Claude to take actions and accomplish complex goals on behalf of users. You'll partner with research to bring cutting-edge agent capabilities to production.
Enterprise Foundations: We're looking for a software engineer to join our Enterprise Foundations team—the team that makes Claude enterprise-ready at scale. When a Fortune 500 company wants to roll out Claude to 100,000 employees, we're the team that makes it possible. You'll build the foundational systems that large organizations require before they can deploy AI at scale: user and permissions management, security and compliance features, and analytics infrastructure. This work directly converts product-market fit into revenue by removing the deployment blockers that prevent large organizations from adopting Claude broadly.
Passport: We're building the identity and verification product layer that enables safe model launches as Claude's capabilities expand. This critical effort partners with Safeguards, Auth & Identity, Policy, and Product teams across API, Claude.ai, and third-party platforms as customers, creating the systems for KYC/KYB, trust grant issuance and inheritance, and end-user verification that flow across every Anthropic surface. We sit at the intersection of trust, compliance, and product velocity, delivering the verification primitives that let Anthropic ship advanced model capabilities to the right users at massive scale.
Claude.ai is one of the most-used AI products in the world, and the experience we build for our users directly shapes how people interact with AI every day. We're looking for a talented engineer to join the team that builds the consumer web app — the interfaces, interactions, and moments that turn Claude from a capable model into a product people genuinely enjoy using.
This is a product engineering role first and foremost. You'll work closely with designers and product managers to bring features from concept to shipped experience, obsessing over the details that make the difference: how something feels when you first land on it, how smoothly a new interaction flows, how an interface holds up under real-world use. Great UX isn't just about how something looks — it's about how fast it feels, how reliably it works, and how well it adapts to the device and context someone is using it on. You won't need to be building backend infrastructure or platform tooling — your focus is on crafting the best possible experience for the people using Claude.ai on the web.
The pace is fast, the product is evolving quickly, and the opportunity to have a visible, direct impact on how millions of people use AI is real. If you love building consumer products and care about getting the details right, this could be a great fit.
We're looking for a Staff Software Engineer to set technical direction at the intersection of engineering and research on the Claude Code team. In this role, you'll partner directly with Anthropic's researchers and engineering leadership to shape how we measure, understand, and improve Claude's coding capabilities. You'll architect the systems, tooling, and evaluation infrastructure that determine how quickly our research can move—and you'll be accountable for the technical decisions that ripple across the team and beyond. This is a senior individual contributor role for someone who has already built and owned systems at significant scale, and who is ready to operate as a technical leader: driving architecture, mentoring engineers, and influencing the direction of Claude Code itself.
Set technical direction for evaluation systems, research infrastructure, and internal tooling across the Claude Code team
Architect eval frameworks that measure model capabilities across diverse coding tasks and scale with our research roadmap
Lead the design of infrastructure that enables researchers to run experiments at scale, and make the foundational tradeoffs that shape how the team operates for years
Identify the highest-leverage engineering investments—often before anyone has asked for them—and drive them to completion
Serve as a senior technical bridge between product and research, using strong product intuition to influence which capabilities we prioritize and how we measure progress against them
Mentor and raise the bar for other engineers on the team; review designs, unblock peers, and model the engineering standards we want to scale
Partner with research leads to translate ambiguous research questions into durable engineering solutions
Own critical systems end-to-end, from architecture through production reliability, and take responsibility for their long-term health
Have 10+ years of software engineering experience, with a track record of operating as a Staff or Principal engineer (or equivalent) at a high-caliber organization
Have architected and owned complex, high-stakes systems—pipelines, infrastructure, or platforms that orchestrate many components, handle significant state and logic, and serve multiple teams
Have a history of setting technical direction that others follow—through design docs, architectural decisions, or technical strategy that shaped how a team or org operates
Thrive in high-intensity environments with fast iteration cycles, and have the judgment to know when to move fast and when to invest in durability
Take full ownership of ambiguous, open-ended problems and drive them to completion with minimal direction
Are a power user of agentic coding tools with deep intuition about model capabilities and limitations
Can dive into unfamiliar technical domains—ML systems, research workflows, novel infrastructure—and get to the frontier quickly
Care deeply about correctness and reliability, and have raised engineering standards on teams you've been part of
Are energized by working at the boundary between engineering and AI research, and by the prospect of influencing both
Designing or scaling evaluation frameworks for ML systems
Reinforcement learning infrastructure or training systems
Leading technical initiatives in high-performance, demanding environments—trading firms, quant funds, frontier research labs, or fast-moving startups where intensity and technical excellence are the norm
Research computing, scientific infrastructure, or developer platforms at scale
A strong quantitative foundation (math, physics, or related fields)
Expertise in Python and TypeScript
Anthropic is seeking an exceptional Senior Staff Software Engineer to join the Claude Developer Platform team and serve as the senior-most individual contributor across API Engineering. Since launch, the Claude API has seen rapid growth and adoption by companies of all sizes to build AI applications with our industry-leading models. The API serves as the primary channel for safely and broadly distributing AI's benefits across all sectors of the economy.
This role sets the technical direction for the systems that make Claude accessible to developers, enterprises, and partners at scale. You will operate at the intersection of technical strategy and execution, partnering closely with Research, Inference, Platform, Infrastructure, and Safeguards to ensure the Claude API is reliable, capable, and positioned to grow with Anthropic's ambitions.
This is a high-agency role that spans all API Engineering teams: API Core owns the foundational reliability and performance of the Claude API; API Capabilities ships frontier model capabilities—vision, tool use, computer use—directly to developers; API Knowledge builds retrieval and grounding systems that let Claude reason over external data; API Distributability ensures Claude reaches customers wherever they need it, with the enterprise-ready infrastructure to support it; and API Agents builds the infrastructure for long-horizon agentic workflows in production.
Define and drive multi-year technical strategy for the Claude API, setting direction across API Core, Capabilities, Knowledge, Distributability, and Agents.
Identify and personally lead the highest-complexity, highest-impact engineering initiatives spanning multiple teams.
Serve as the primary technical decision-maker for major architectural decisions with org-wide scope.
Partner with Research to evaluate and integrate frontier capabilities; work with Inference and Platform for reliable delivery at scale; collaborate with Infrastructure and Safeguards for reliability, security, and responsible deployment.
Mentor and develop Staff-level engineers across the org.
Drive alignment across Product, GTM, Safety, and beyond while proactively identifying and addressing systemic technical risks.
Have 12+ years of engineering experience with a clear track record operating at Staff or Senior Staff level.
Have demonstrably shaped technical strategy for large-scale API or distributed systems platforms.
Drive the highest-leverage technical outcomes without formal authority—you lead through influence, quality of thinking, and trust.
Have deep expertise in distributed systems and API architecture, and are effective writing design docs, making architectural calls, and coding in critical paths.
Are highly effective across org boundaries—you build trust with Research, Inference, Infrastructure, Safeguards, and business stakeholders alike.
Bring strong product instincts and a craftsperson's approach to API design; you communicate clearly with both technical and non-technical audiences.
Languages: Python, TypeScript
Frameworks: FastAPI, React
Infrastructure: GCP, Kubernetes, Cloud Run, AWS, Azure
Databases: PostgreSQL (AlloyDB), Vector Stores, Firestore
Tools: Feature Flagging, Prometheus, Grafana, Datadog
Deadline to apply: None. Applications will be reviewed on a rolling basis.
Location Preference: Preference will be given to candidates based in New York or the San Francisco Bay Area as these positions are part of an SF- or NY-based team.
We’re looking for prompt and context engineers to join our product engineering team to help build AI-first products, features, and evaluations. Your mission will be to bridge the gap between model capabilities and real product experience, working with product teams to build consistent, safe, and beneficial user experiences across all product surfaces.
You will be deeply involved in new product feature and model releases at Anthropic, combining engineering expertise with an understanding of frontier AI applications and model quality. You’ll become an expert on Claude’s behavioral quirks and capabilities and apply that knowledge to deliver the best possible user experience across models and domains. You’ll be the first resource for product teams working on Claude’s AI infrastructure: system prompts, tool prompts, skills, and evaluations.
This role requires someone who can effectively balance caring deeply about making Claude the best it can be while also supporting a wide variety of concurrent projects and efforts across many product teams.
Required Qualifications
Preferred Qualifications
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic exists to ensure the world safely navigates the transition through transformative AI. That work is funded by commercial revenue — and that revenue runs through Billing Platform. The Product Monetization team builds the on-rails that let every product team at Anthropic — Claude.ai, the Developer Platform, Claude Code — monetize features and convert users into paying customers without becoming billing experts themselves. The faster product teams can ship and charge, the faster Anthropic can fund the mission.
We're looking for an Engineering Manager to lead this team. You'll own the platform layer where product meets revenue: the integration contracts product teams plug into, the constructs that let the business run pricing experiments, the self-serve and promotional capabilities Growth builds on, and the payment methods that open up global markets. You'll partner closely with Growth, Safeguards, Legal, and the core Billing Infrastructure team, and work tightly with Customer Support to continuously improve the billing experience for customers.
Lead and grow a team of engineers building the revenue-enablement platform — hiring, coaching, and setting a high technical bar.
Build the on-rails: clean billing API abstractions and integration contracts so product teams can onboard and launch paid SKUs seamlessly, without billing logic leaking into their stack (a rough sketch of such a contract follows this list).
Define and own the platform's reliability bar — tiered SLOs and SLAs organized by customer impact: error budgets on the "can the customer do the thing" flows, quality targets on everything else.
Build the platform constructs that let the business run pricing experiments and iterate on monetization models with real data.
Partner closely with Growth — own the promotional, trial, and self-serve billing platform they build campaigns on top of. Your team owns the platform; Growth owns the campaigns.
Expand payment method coverage globally, driving up authorization rates and conversion in markets where default card flows underperform.
Ship new monetization models that keep pace with how the product portfolio evolves.
Drive down time-to-market for new SKUs and product launches — the team's core input is "product team needs to charge for X," and your job is to make that fast.
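As a rough illustration of the "on-rails" idea above, the sketch below shows what a narrow billing integration contract might look like: product teams touch only a small interface while pricing, payment methods, and invoicing stay inside the platform. All names (Sku, BillingPlatform, register_sku, record_usage, entitled) are hypothetical, not Anthropic's actual API.

    # Hypothetical sketch of a narrow billing integration contract.
    from dataclasses import dataclass
    from typing import Protocol

    @dataclass(frozen=True)
    class Sku:
        sku_id: str
        display_name: str
        unit: str                 # e.g. "seat" or "million_tokens"
        price_usd_per_unit: float

    class BillingPlatform(Protocol):
        def register_sku(self, sku: Sku) -> None:
            """Make a SKU purchasable; idempotent so launches can be retried safely."""
            ...

        def record_usage(self, customer_id: str, sku_id: str, quantity: float) -> None:
            """Meter usage; aggregation, proration, and invoicing happen downstream."""
            ...

        def entitled(self, customer_id: str, sku_id: str) -> bool:
            """Check whether a customer may use a paid feature right now."""
            ...

    def launch_paid_feature(billing: BillingPlatform, customer_id: str) -> bool:
        """A product team launching a paid feature needs only these three calls."""
        sku = Sku("research-mode", "Research Mode", "seat", 30.0)
        billing.register_sku(sku)
        return billing.entitled(customer_id, sku.sku_id)

The design point is the boundary: the product team never sees tax, proration, or payment-method logic, so a pricing change stays inside the billing platform.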
Have 3+ years of engineering management experience, with 8+ years total in the industry.
Have led teams building payments, billing, subscription, or monetization systems — and know firsthand how a small pricing change ripples through the whole stack.
Have strong opinions about platform boundaries and API design.
Are good at cross-functional work — Growth, Legal, Customer Support, Safeguards, and product teams will all have opinions and you need to keep them aligned.
Direct experience with modern billing and payment platforms — and opinions on when to build vs. buy.
Built or scaled local payment method adoption for international markets.
Run a promotional/offers platform that fed directly into growth or subscription conversion funnels.
Experience in a PLG or self-serve SaaS environment where billing UX is a conversion lever, not an afterthought.
Managed teams through regulatory or compliance-heavy work (consumer financial terms, tax, etc.).
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic is looking for an Engineering Manager to lead the Agent Prompts & Evals team. This team owns the infrastructure that lets Anthropic ship model and prompt changes with confidence — the eval frameworks, system prompt pipelines, and regression-detection systems that every model launch depends on.
When a new Claude model is ready to ship, this team is the one answering “is it actually better in our products?” When a product team wants to change how Claude behaves, this team owns the tooling that tells them whether they broke something. It’s a platform team whose platform is model behavior itself.
The team sits deliberately at the seam between product engineering and research. You’ll partner closely with other evals groups across the company on shared infrastructure and methodology, with product teams who are shipping features on top of Claude, and with the TPMs and research PMs driving model launches. The pace is set by the model release cadence, and the team operates as both a platform owner and a hands-on partner during launch periods.
You don’t need a research background, but you do need to want to learn how to measure things like “is Claude being too sycophantic” or “did web search get worse.” The best version of this role is someone who’s built strong platform or devtools teams before and is excited to apply that skillset to a domain where the thing you’re measuring is a language model.
Lead and grow a team of prompt engineers and platform software engineers
Own the product-side eval platform: the frameworks, dashboards, bulk runners, and CI integrations that product teams use to measure Claude’s behavior and catch regressions before they ship
Own system prompt infrastructure: versioning, deployment, rollback, and review tooling for the prompts that run in production across claude.ai, the API, and agentic surfaces (a combined sketch of this and the eval platform follows this list)
Be a steady hand through model launches — these are the team’s highest-stakes operational moments and the EM is the backstop when things get chaotic
Build durable collaboration with other evals groups across the company; this means real work on ownership boundaries, shared roadmaps, and avoiding tragedy-of-the-commons on shared eval infrastructure
Recruit, close, and retain engineers who want to work at the intersection of product engineering and model behavior
Shape where the team invests next: there are credible paths into frontier eval development, model launch automation, and deeper prompt engineering support, and part of the job is sequencing them
Push the team toward measuring things that are hard to measure — behavioral drift, prompt quality, harness parity — not just things that are easy
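As a rough, hypothetical sketch of the two platform pieces called out above (versioned system prompts with rollback, plus a regression gate that runs evals before a change ships), consider something like the following. The names (PromptRegistry, score, regression_gate) and the 2% threshold are illustrative assumptions, not the team's actual tooling.

    # Hypothetical sketch: prompt versioning with rollback, and an eval regression gate.
    from dataclasses import dataclass, field

    @dataclass
    class PromptRegistry:
        """Versioned system prompts per product surface, with one-step rollback."""
        versions: dict[str, list[str]] = field(default_factory=dict)

        def deploy(self, surface: str, prompt: str) -> None:
            self.versions.setdefault(surface, []).append(prompt)

        def current(self, surface: str) -> str:
            return self.versions[surface][-1]

        def rollback(self, surface: str) -> str:
            self.versions[surface].pop()
            return self.current(surface)

    def score(system_prompt: str, eval_case: str) -> float:
        """Placeholder: run one eval case against a model and grade it from 0.0 to 1.0."""
        raise NotImplementedError

    def regression_gate(baseline: str, candidate: str, cases: list[str],
                        max_drop: float = 0.02) -> bool:
        """Allow the candidate prompt to ship only if the average eval score
        drops by less than max_drop relative to the baseline prompt."""
        base = sum(score(baseline, c) for c in cases) / len(cases)
        cand = sum(score(candidate, c) for c in cases) / len(cases)
        return cand >= base - max_drop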
8+ years in software engineering with 3+ years managing engineering teams, including experience leading a platform, infra, or developer-tooling team where your customers were other engineers
A track record of building “pits of success” — tooling and process that made it easy for other teams to do the right thing without needing to understand all the details
Comfort managing a team with a mixed charter: platform ownership, service-to-other-teams, and a launch-driven operational rhythm, all at once
Enough technical depth to engage on system design, review pipeline architecture, and be credible in debates with strong ICs — you don’t need to be writing code by hand every day, but you should be able to read it, review it, and be comfortable leveraging Claude to understand, design, and occasionally build.
A product mindset and willingness to wear multiple hats when the work calls for it
Demonstrated ability to build and maintain peer relationships with partner orgs that have different cultures and incentives — negotiating ownership, aligning roadmaps, and holding ground when it matters without being territorial about it
Experience recruiting and closing senior ICs in a competitive market
Prior exposure to LLM evals, ML experimentation platforms, or model quality work — even tangentially
Experience with A/B testing infrastructure, feature flagging, or gradual rollout systems
Background in devtools, CI/CD platforms, or testing infrastructure at scale
A history of managing teams that sit between two larger orgs and making that position an asset rather than a liability
Interest in AI safety and alignment — not required, but it makes the “why” of the work land harder
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
We're looking for desktop app engineers to join our Claude product team and help build native applications that harness the transformative power of advanced language models. Your mission will be to package our AI capabilities into elegant desktop experiences that put unprecedented capabilities at users' fingertips. You will work with a talented team of engineers, researchers, and product specialists to design and implement key components that bridge our AI technology with intuitive user interfaces. Join us in this exciting mission to transform how people engage with our AI technology and unlock new realms of human potential.
Today, our app is mostly web-based – but we’re eager to build the best possible version of Claude for macOS and Windows with a mix of both native and web technologies. You might be a great fit for this role if you’re up for the challenge of learning multiple programming languages.
We are more interested in strong engineers who are excited to learn new languages and technologies than in candidates with prior hands-on experience in Chromium, Electron, C++, or the various native macOS and Windows APIs. We believe that strong engineers can pick up whatever technology is best suited to building the best possible experience.
We’re looking for curious engineers excited to figure out, with us, what AI on the desktop could do.
We’re working with a large number of technologies, and you might be an excellent candidate even if you’ve only worked with a subset of them. An ideal candidate may never have used Electron but has hands-on experience with Chromium’s C++; equally, a great candidate may have built Electron apps without ever touching C++ or Swift. Even the best engineers will have more experience with some platforms and areas than others.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
We're looking for engineers to join a new effort building AI-powered products and capabilities for cybersecurity. You'll work across the stack to prototype new ideas and build from the ground up.
This role sits at the intersection of research, product, and go-to-market. You'll work closely with research teams to develop new model capabilities for security applications, prototype and iterate quickly to validate ideas, and engage directly with customers and partners to inform what we build. The right candidate has the technical depth to engage with research, the product instincts to know what's worth building, and the drive to move fast.
Prototype and build new AI-powered products for cybersecurity
Iterate quickly based on customer feedback and what you learn
Collaborate with research teams to identify and develop new model capabilities for security applications
Engage directly with customers and partners to understand workflows and inform product direction
Have 7+ years of experience as a software engineer
Experience developing cybersecurity products
Enjoy fast iteration and are energized by prototyping new ideas
Have strong product instincts and enjoy defining what to build, not just how to build it
Are comfortable working closely with research and go-to-market teams
Have strong communication skills and can work effectively across functions
Experience in incident response, reverse engineering, network analysis, penetration testing, or similar fields
Experience working with AI/ML models and building products on top of them
Experience building agentic applications
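For a sense of what an agentic security application might look like in miniature, here is a hypothetical sketch of a model-driven triage loop over log data. The call_model placeholder, the toy tool set, and the step budget are illustrative assumptions, not a description of an existing Anthropic product.

    # Hypothetical sketch of an agentic triage loop for security logs.
    import json

    # Toy tool set; real tools would call threat-intel and log systems.
    TOOLS = {
        "lookup_ip_reputation": lambda ip: {"ip": ip, "reputation": "unknown"},
        "fetch_related_logs": lambda host: [],
    }

    def call_model(context: str) -> dict:
        """Placeholder for a model call that returns either a tool request,
        e.g. {"tool": "lookup_ip_reputation", "args": {"ip": "..."}},
        or a final answer, e.g. {"finding": "..."}."""
        raise NotImplementedError

    def triage(log_line: str, max_steps: int = 5) -> str:
        """Let the model gather context with tools, then return its finding."""
        context = f"Suspicious log line: {log_line}"
        for _ in range(max_steps):
            decision = call_model(context)
            if "finding" in decision:
                return decision["finding"]
            result = TOOLS[decision["tool"]](**decision["args"])
            context += f"\n{decision['tool']} -> {json.dumps(result)}"
        return "inconclusive: step budget exhausted"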
Deadline to apply: None. Applications will be reviewed on a rolling basis.
Ready to apply?
Apply to Anthropic