Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As a Transformative AI Research Economist at Anthropic, you will build macroeconomic models of AI that could be genuinely transformative and develop the scenario-based forecasting tools that let us reason quantitatively about economic trajectories with no historical precedent. You will work on questions of aggregate growth, income distribution, and economic governance under scenarios that most of the profession has not yet modeled seriously.
You will ground projections in microeconomic signals from the Anthropic Economic Index — usage patterns across millions of real-world AI interactions, surfaced through privacy-preserving measurement — so that scenario forecasts are disciplined by what we actually observe about task transformation and productivity. You will use frontier methods in growth theory, computational macro, and structural estimation, and contribute to AI-powered tools that expand what economic research can do.
Our team combines rigorous empirical methods with novel measurement approaches. We're building first-of-its-kind datasets tracking AI's impact on labor markets, productivity, and economic transformation. Using our privacy-preserving measurement system, we analyze millions of real-world AI interactions to understand how AI augments and automates work across different occupations and tasks.
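To make the kind of scenario-based modeling described above concrete, here is a deliberately toy sketch: a simple task-based growth projection in which AI automates a growing share of tasks under different adoption scenarios. This is not Anthropic's actual model, and every parameter value (automation rates, productivity gains) is an illustrative assumption, not an estimate.

```python
# Toy sketch of scenario-based GDP projection under AI task automation.
# All parameters are illustrative assumptions, not empirical estimates.

def project_gdp(years, automation_rate, ai_productivity_gain,
                baseline_growth=0.02, gdp0=1.0):
    """Project a GDP index path under a simple task-based scenario.

    Each year, `automation_rate` of the remaining human tasks shifts to AI,
    and the automated share contributes an extra `ai_productivity_gain`
    boost to annual productivity growth.
    """
    gdp = gdp0
    automated_share = 0.0
    path = []
    for _ in range(years):
        automated_share += automation_rate * (1 - automated_share)
        # Blended growth: baseline growth on human tasks plus an extra
        # boost proportional to the automated share of tasks.
        growth = baseline_growth + automated_share * ai_productivity_gain
        gdp *= 1 + growth
        path.append(gdp)
    return path

# Hypothetical capability trajectories for comparison.
scenarios = {
    "slow_adoption": dict(automation_rate=0.01, ai_productivity_gain=0.01),
    "rapid_adoption": dict(automation_rate=0.10, ai_productivity_gain=0.05),
}

for name, params in scenarios.items():
    path = project_gdp(years=10, **params)
    print(f"{name}: GDP index after 10 years = {path[-1]:.2f}")
```

In practice, the role involves disciplining parameters like these with observed adoption and task-transformation data from the Anthropic Economic Index rather than assuming them.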
Build macroeconomic models of transformative AI spanning growth, labor markets, and income distribution
Develop and maintain scenario-based forecasting tools; publish forecasts for GDP, productivity, and unemployment under a range of AI-capability trajectories
Ground macroeconomic projections in microeconomic data from the Anthropic Economic Index, constraining theory with observed patterns of adoption and task transformation
Analyze questions of income distribution and economic governance under transformative-AI scenarios
Contribute to the development of AI-powered research tools for economics
Contribute to Economic Index Reports and publish Research Briefs on first-order questions as they arise
Build and maintain relationships with academic institutions, policy think tanks, and other research partners
Amplify external engagement through research publications, policy briefs, and presentations to diverse stakeholders
PhD in Economics, or an exceptional candidate close to completion
Background in macroeconomics, growth theory, or public finance, ideally with exposure to task-based frameworks and labor economics
A research record that engages seriously with the possibility of transformative AI — you treat the scenarios in this posting as live questions worth modeling rigorously, not speculation to be hedged against
Relevant experience in some of the following:
Macroeconomic modeling and structural estimation
Scenario-based and time-series forecasting
Task-based approaches to technological change
Computational methods, agent-based modeling, or large-scale simulation
Income distribution and inequality
Using large language models in the research workflow
Technical skills including:
Proficiency in Python, Julia, or similar for computational economics
Facility with AI coding agents as part of a research workflow
Comfort learning new technical tools and frameworks
Demonstrated ability to:
Lead research projects from conception to publication
Ship on tight timelines and revise in public as new data arrives
Communicate technical findings to diverse audiences
Strong interest in ensuring AI development benefits humanity
Labor market impacts of AI: A new measure and early evidence
Anthropic Economic Index Report: Uneven Geographic and Enterprise AI Adoption
For this role, we're looking for candidates who combine rigorous macroeconomic theory with computational fluency, and who are willing to model economic scenarios that fall outside the profession's usual range. The ideal candidate works at the intersection of growth theory, forecasting, and frontier AI.
Deadline to apply: None. Applications are reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to sponsor visas for every role or every candidate. If we do make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
As a member of the National Security Policy team at Anthropic, you will work directly with our most strategic national security customers and partners to drive transformational AI adoption. You will leverage your technical skills to architect innovative solutions that address our customers' business needs, meet their technical requirements, and provide a high degree of reliability and safety.
In collaboration with the Sales, Product, Research, and Engineering teams, you’ll help national security partners develop strategies and implementation plans to integrate leading-edge AI systems into their mission. You will employ your excellent communication skills to explain and demonstrate complex solutions persuasively to technical and non-technical audiences alike. You will also play a critical role in identifying opportunities to innovate and differentiate our AI systems while maintaining our best-in-class safety standards. We expect our team members to operate autonomously, thrive under ambiguity, and represent Anthropic at the highest level in customer environments.
Act as a primary technical advisor for senior government leaders and prospective National Security customers evaluating Claude. Demonstrate how Claude can support U.S. and democratic allies’ national security operations and address customer use cases through proofs of concept. Provide technical guidance on integration, deployment, and adoption best practices.
Partner closely with the policy team and sales account executives to understand customer requirements. Develop customized pilots and prototypes, as well as evaluation suites to make the case for customer adoption.
Drive technical decision making by partnering on optimal setup, architecture, and integration of Claude into the customer's existing infrastructure. Demonstrate solutions to technical roadblocks.
Act as the voice of our customers and a key collaborator with our Product and Research teams to ensure we are delivering critical capabilities to the National Security community.
Travel to customer sites for senior leader meetings, AI implementation, technical enablement, and building relationships.
Establish a shared vision for creating solutions that enable beneficial and safe AI.
Lead the vision, strategy, and execution of innovative solutions that leverage our latest models’ capabilities.
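The pilots and evaluation suites mentioned above can be as simple as a scripted harness that scores model responses against customer-defined criteria. The sketch below is purely illustrative: `run_model` is a stub standing in for a real model call (e.g., via an API client), and the example case and keyword-based scorer are hypothetical, not an Anthropic tool.

```python
# Illustrative sketch of a minimal evaluation harness for customer use
# cases. `run_model` is a stub; in practice it would call a real model.

def run_model(prompt: str) -> str:
    # Stub response standing in for a real model call.
    return "Summary: the report identifies three supply-chain risks."

def score(response: str, required_keywords: list[str]) -> float:
    """Return the fraction of required keywords present in the response."""
    hits = sum(1 for kw in required_keywords if kw.lower() in response.lower())
    return hits / len(required_keywords)

# Hypothetical evaluation cases defined with the customer.
eval_cases = [
    {"prompt": "Summarize the attached risk report.",
     "required_keywords": ["risks", "supply-chain"]},
]

results = [score(run_model(c["prompt"]), c["required_keywords"])
           for c in eval_cases]
print(f"mean score: {sum(results) / len(results):.2f}")
```

Real evaluation suites would use richer grading (human review, model-based grading, task-specific metrics), but the structure of prompt, response, and scored criteria is the same.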
Active TS/SCI security clearance (required)
2+ years of experience as a Customer Engineer, Forward Deployed Engineer, Sales Engineer, Solutions Architect, or Platform Engineer within the National Security space
Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders, including senior executives, engineering and IT teams, and more
Experience in the defense, technology, or cybersecurity industries
Experience designing novel and innovative solutions for technical platforms in a developing mission area
Strong technical aptitude to partner with engineers and strong proficiency in at least one programming language (Python preferred)
Understanding of and experience with LLM fundamentals
The ability to navigate and execute amidst ambiguity, and to flex into different domains based on the business problem at hand, finding simple, easy-to-understand solutions
Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities
A love of teaching, mentoring, and helping others succeed
Excellent communication and interpersonal skills, with the ability to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders
Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems
Ready to apply?
Apply to Anthropic