Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As part of our growing Data Science and Analytics team, you will play an instrumental role in our company’s mission of building safe and beneficial artificial intelligence by driving data-informed decision making across our organization. You’ve worked in cultures of excellence in the past, and are eager to apply that experience to help shape the cultural norms and best practices of a growing data science team as Anthropic continues to scale. At this unique moment for our company, our technology, and the world, your work will be critical to informing our strategy as we deploy safe, frontier AI at scale.
We’re hiring across multiple pillars
Applying for this role will allow you to be considered for all pillars currently hiring. You will be asked to select a preference when submitting an application.
This role is embedded with the Claude Code product team, driving data-informed decisions for Anthropic's agentic coding tool that enables developers to delegate coding tasks directly to Claude from their terminal. You'll help the team understand how developers interact with AI coding assistants, measure developer productivity and product quality, and identify opportunities to improve the developer experience as the product scales. Key focus areas include developer usage patterns across the platform, driving adoption within the developer ecosystem, and developer segmentation.
You will be embedded with our Consumer product team. This team is responsible for building all consumer-facing Claude experiences—including web, mobile, desktop, and browser extensions. In this role, you'll shape how millions of users interact with Claude daily, turning product insights into recommendations for interfaces that are intuitive and responsive, and that push the boundaries of what AI-powered applications can be.
You will partner closely with product, engineering, and go-to-market teams to understand how developers and enterprise customers build on and adopt the Claude Developer Platform—spanning our core API, agent orchestration, tool and MCP integrations, and knowledge management capabilities. You'll identify growth opportunities, surface insights about how AI agents are being built and deployed at scale, and drive data-informed decisions that shape our platform roadmap.
You will work closely with product, engineering, and research leaders to bring data-driven rigor to every phase of model development and launch. Sitting at the bleeding edge of bringing frontier AI research into public use, you will leverage data from both external customers and internal testing to define and measure key company success metrics, and analyze user and model behavior to identify new opportunities to push the frontier.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to sponsor visas for every role or every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Anthropic is compute-constrained, and how we allocate that compute is one of the highest-leverage decisions we make as a company. Today, those allocation choices are only loosely tied to the user outcomes we ultimately care about — retention, lifetime value, and the experience of people relying on Claude. You will change that.
As a hands-on technical IC on the Supply pillar of our Data Science & Analytics team, you'll sit alongside the infrastructure engineers who run our compute and help decide how our scarcest resource gets used. You'll design and run the analyses, observational and synthetic experiments, and optimization frameworks that turn opaque supply decisions into shared, measurable understanding across the company. Your work will directly shape how frontier AI reaches the world at scale, and your findings will go in front of senior leadership, including our CTO and his staff.
This role is a fit for someone who thinks natively in terms of constrained allocation and queueing, who enjoys getting close to the system rather than analyzing it from a distance, and who wants their analyses to translate into operational changes that ship.
Deadline to apply: None. Applications are accepted on a rolling basis.
As part of our growing Data Science and Analytics team, you'll play an instrumental role in Anthropic's mission of building safe and beneficial AI by driving data-informed decision making across the company. This role sits at the intersection of data science, developer experience, and AI tooling — and offers the unusual opportunity to study frontier AI usage from the inside, with the builders themselves as your users.
You'll define how Anthropic understands and improves developer productivity — both through classic software engineering effectiveness measures and through the emerging challenge of understanding AI-augmented development workflows. You'll own the quantitative foundation for how Anthropic's engineers build: what slows them down, what accelerates them, where tooling investments pay off, and how AI-assisted development is changing the shape of engineering work. Your analyses will directly inform infrastructure priorities, tooling roadmaps, and how we think about scaling engineering output as Anthropic grows.
As a Research Engineer on the Economic Research team, you will design, build, and maintain critical infrastructure that powers Anthropic's research on AI's economic impact. You will work with data systems from across Anthropic, including our research tools for privacy-preserving analysis.
The Economic Research team at Anthropic studies the economic implications of AI on individual, firm, and economy-wide outcomes. We build scalable systems to monitor AI usage patterns and directly measure the impact of AI adoption on real-world outcomes. We publish research and data that is clear-eyed about the economic effects of AI to help policymakers, businesses, and the public understand and navigate the transition to powerful AI. We use our insights to inform Anthropic decisions internally across the business.
In this role, you will work closely with teams across Anthropic—including Data Science and Analytics, Data Infrastructure, Societal Impacts, and Public Policy—to build scalable and robust data systems that support high-leverage, high-impact research. Strong candidates will have a track record building data processing pipelines, architecting & implementing high-quality internal infrastructure, working in a fast-paced startup environment, navigating ambiguity, and demonstrating an eagerness to develop their own research & technical skills.
Responsibilities:
Build and maintain data pipelines that process large-scale Claude usage logs into canonical, reusable datasets while maintaining user privacy.
Expand privacy-preserving tools to enable new analytic functionality to support research needs.
Design and implement novel data systems leveraging language models (e.g., Clio) where traditional software engineering patterns don't yet exist.
Develop and maintain data pipelines that are interoperable across data sources (including ingesting external data) and are designed to support economic analysis.
Contribute to the strategic development of the economic research data foundations roadmap.
Ensure data reliability, integrity, and privacy compliance across all economic research data infrastructure.
Lead technical design discussions to ensure our infrastructure can support both current needs and future research directions.
Create documentation and best practices that enable self-serve data access for researchers while maintaining security and governance standards.
Partner closely with researchers, data scientists, policy experts, and other cross-functional partners to advance Anthropic’s safety mission.
You may be a good fit if you:
Have experience working with Research Scientists and Economists on ambiguous AI and economic projects.
Have experience with building and maintaining data infrastructure, large datasets, and internal tools in production environments.
Have experience with cloud infrastructure platforms such as AWS or GCP.
Take pride in writing clean, well-documented code in Python that others can build upon.
Are comfortable making technical decisions with incomplete information while maintaining high engineering standards.
Are comfortable getting up to speed quickly on unfamiliar codebases, and can work well with other engineers with different backgrounds across the organization.
Have a track record of using technical infrastructure to interface effectively with machine learning models.
Have experience deriving insights from imperfect data streams.
Have experience building systems and products on top of LLMs.
Have experience incubating and maturing tooling platforms used by a wide variety of stakeholders.
Strong candidates may also have:
A passion for Anthropic's mission of building helpful, honest, and harmless AI and understanding its economic implications.
A “full-stack mindset”, not hesitating to do what it takes to solve a problem end-to-end, even if it requires going outside the original job description.
Strong communication skills to collaborate effectively with economists, researchers, and cross-functional partners who may have varying levels of technical expertise.
Background in econometrics, statistics, or quantitative social science research.
Experience building data infrastructure and data foundations for research.
Familiarity with large language models, AI systems, or ML research workflows.
Prior work on projects related to labor economics, technology adoption, or economic measurement.