Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
The IT Operations team keeps Anthropic running — we make sure every employee can do their best work without friction. We're seeking an IT Support Engineer who combines deep technical skills with a genuine service mindset and sound judgment.
Anthropic is growing fast, and our IT operations need to keep pace. That means onboarding at scale, automating repetitive processes (including with Claude), and continuously improving the employee experience. You'll handle support challenges across our primarily macOS environment while contributing to the operational improvements that help us scale. You'll work closely with IT Engineering and Security, with opportunities to grow into more technical infrastructure work over time.
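As a purely illustrative sketch of the kind of "automating repetitive processes with Claude" the role mentions: the snippet below assembles a triage prompt for an incoming IT ticket. The category list, function name, and ticket format are all invented for this example; Anthropic's actual internal tooling may look nothing like this.

```python
# Hypothetical helper: turn a raw IT ticket into a structured triage
# prompt for a language model. Categories and format are invented.

CATEGORIES = ["hardware", "access", "software", "onboarding", "other"]

def build_triage_prompt(ticket_subject: str, ticket_body: str) -> str:
    """Assemble a prompt asking the model to pick exactly one category."""
    return (
        "Classify this IT support ticket into exactly one of: "
        + ", ".join(CATEGORIES) + ".\n"
        f"Subject: {ticket_subject}\n"
        f"Body: {ticket_body}\n"
        "Answer with the category name only."
    )

prompt = build_triage_prompt(
    "Can't log in to VPN",
    "My VPN client says my credentials are invalid since this morning.",
)
```

In practice a prompt like this would be sent to a model via an API client; the sketch deliberately stays offline so the prompt-building logic itself is easy to test.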
Your responsibilities will span three areas: end-user support, communication & documentation, and operational improvement.
This role is based in our Dublin office, with an expectation of five days per week in the office.
You'll be joining a high-impact IT team supporting some of the world's leading AI researchers and engineers. The pace is fast and you'll have real ownership over your work. This isn't a role where you follow rigid scripts — you'll be trusted to use your judgment, improve our processes, and grow with the team. Because we're scaling rapidly, you'll have meaningful opportunities to shape how IT operations work at Anthropic.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry's largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.
The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.
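To make the routing challenge described above concrete, here is a deliberately simplified sketch of least-loaded request routing across heterogeneous accelerator pools. This is a hypothetical illustration, not Anthropic's actual implementation: the pool names, capacity model, and routing policy are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AcceleratorPool:
    """A homogeneous group of AI accelerators (hypothetical model)."""
    name: str
    capacity: int       # max concurrent requests the pool can serve
    in_flight: int = 0  # requests currently being served

    @property
    def load(self) -> float:
        """Fractional utilization of the pool."""
        return self.in_flight / self.capacity

def route(pools: list[AcceleratorPool]) -> AcceleratorPool:
    """Least-loaded routing: send the next request to the pool with the
    lowest utilization — a common baseline strategy for load balancing."""
    target = min(pools, key=lambda p: p.load)
    target.in_flight += 1
    return target

pools = [
    AcceleratorPool("gpu-pool-a", capacity=100, in_flight=90),   # 90% loaded
    AcceleratorPool("gpu-pool-b", capacity=50, in_flight=10),    # 20% loaded
    AcceleratorPool("tpu-pool-c", capacity=80, in_flight=40),    # 50% loaded
]
choice = route(pools)  # picks the 20%-loaded pool
```

Real inference routing must also weigh factors this sketch ignores, such as KV-cache locality, request priority, and per-accelerator model compatibility.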
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
Within Integrity & Compliance, the Privacy Programs pillar owns how we operationalize privacy across the company — from how we handle personal data in our products and research, to how we meet our obligations under the GDPR, CCPA, and the growing patchwork of global privacy law. We work closely with our Privacy Legal team on all privacy-related matters.
We're hiring a Privacy Governance Lead to own the governance backbone of that work. You'll set the strategy for how privacy governance operates at Anthropic, define the policies and controls that translate privacy principles into operating practice, and help manage the relationship with internal and external stakeholders who depend on that framework holding up under scrutiny.
This is a foundational role with significant scope. You'll be shaping a privacy governance function from a relatively early stage, with the autonomy to set the standard and the mandate to drive cross-functional change. You'll partner closely with Privacy Legal, Security, Product, Research, and the wider I&C team, and you'll contribute directly to reporting that reaches the Audit Committee and boards. You'll report to the Head of Integrity & Compliance.
Set the strategy and roadmap for Anthropic's privacy governance framework, including the policies, standards, and internal controls that map to GDPR, CCPA/CPRA, and other applicable global privacy regimes
Own the privacy documentation lifecycle end-to-end — Data Protection Impact Assessments, Records of Processing, Transfer Impact Assessments, and other accountability artifacts — including the methodology, the tooling, and the quality bar
Establish governance forums and approval workflows for privacy-significant product, research, and vendor decisions, and chair the forums where novel or high-risk questions are resolved
Own the privacy controls testing program: define what "good" looks like, set the testing cadence, and present results to the Head of Integrity & Compliance and other leadership forums
Partner with Privacy Legal to anticipate emerging privacy law and translate new obligations into concrete control changes ahead of enforcement
In partnership with Legal, co-lead privacy regulator engagement on governance matters, including responses to inquiries, audits, and complaints
Oversee the management of inputs for regulatory responses within the Privacy Programs pillar
Drive privacy training and awareness strategy for engineering, product, research, and go-to-market teams, calibrated to the actual decisions those teams make
Represent the privacy governance function in Internal Audit reporting, and in cross-functional risk and compliance forums
Build and develop the privacy governance team over time
Deep working knowledge of GDPR and at least one major US state privacy regime (CCPA/CPRA, or equivalent), including how their requirements translate into operational controls at scale
Demonstrated track record building, scaling, or transforming a privacy governance program end-to-end — policies, DPIAs, ROPAs, controls libraries, governance forums, and the operating model that supports them
Strong written communication, with the ability to produce clear policies, board-ready reporting, and practical guidance that engineering and product teams will actually use
Comfort owning hard cross-functional decisions and operating across legal, technical, and operational boundaries
A privacy certification such as CIPP/E, CIPP/US, or CIPM, or equivalent demonstrated expertise
Senior privacy governance leadership experience at a technology company operating under multiple privacy regimes simultaneously, ideally including one with novel data processing (AI/ML, large-scale platforms, or similar)
Direct experience engaging privacy regulators, particularly EU data protection authorities or the Irish DPC, on governance matters such as inquiries, audits, or complaints
Familiarity with AI-specific privacy considerations: training data governance, model memorization, output filtering, and the intersection with emerging AI regulation
Experience standing up governance functions in a high-growth environment, including building from a blank page
Demonstrated experience presenting to Audit Committees, boards, or equivalent senior governance bodies on privacy matters
Background that bridges privacy and broader compliance disciplines (security, regulatory, ABAC, enterprise risk management)
Role-specific policy: For this role, we expect staff to be able to work from our Dublin office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
Our Regulatory Programs pillar is a core part of the overall Integrity & Compliance function, covering compliance domains that include economic sanctions, US export controls, and regulatory compliance programs stemming from global AI safety regulation.
As a Content Moderation Specialist, you'll own day-to-day program management of Anthropic's global content moderation and online safety regulatory compliance program. Online safety regulation is one of the fastest-moving areas of technology law, and AI sits squarely in its sights. Regimes including the EU Digital Services Act, the UK Online Safety Act, Australia's Online Safety Act, and a growing set of emerging frameworks globally create novel obligations for how AI products are built, deployed, and governed. You will be at the forefront of translating those obligations into a defensible, well-documented compliance program — with regulatory risk assessments as the core of the work.
This is a deeply cross-functional role. You'll partner closely with internal counsel, Safeguards, and operations teams across Anthropic to build the compliance program and frameworks that demonstrate Anthropic meets its obligations under content regulation. This is a builder's role at a company that takes integrity seriously and moves fast — you'll exercise independent judgment on issues without clear precedent and help build durable programs that let Anthropic move quickly while honoring its obligations to regulators, customers, and the public.
Own the global content regulation risk assessment program, including the roadmap of required assessments across jurisdictions, a consistent and repeatable risk assessment methodology and framework, and the coordination of inputs, consultation, and approvals for each assessment
Build and maintain systems and trackers to assess, operationalize, and report on relevant regulatory requirements across Anthropic's products and jurisdictions
Partner with internal counsel, Safeguards, Policy, engineering, and operations teams to align internal practices with external commitments and legal obligations
Maintain a controls inventory and the compliance documentation library for content regulation, ensuring documentation is drafted, reviewed by the right stakeholders, and kept current
Conduct gap analysis when new or amended content regulations come into scope, and stand up the compliance readiness plan and workback for each
Provide regular written program status reporting to stakeholders and leadership, proactively surfacing stalled or at-risk items with a proposed path to unblock
Take on additional related work as the program evolves; job duties and responsibilities may change from time to time at Anthropic's discretion or as required by applicable law
Experience managing regulatory or compliance programs at a technology company or in a regulated industry
Hands-on experience conducting or program-managing regulatory risk assessments, including coordinating inputs across multiple functions
Demonstrated ability to build and maintain compliance program artifacts, including policies, risk assessment documentation, controls inventories, program trackers, and readiness plans
A track record of executing cross-functionally, driving outcomes across legal, product, policy, and operations partners without direct authority
Excellent written and verbal communication skills, including producing clear program documentation and status reporting for senior stakeholders
Sound judgment and the ability to make decisions and move work forward with incomplete information in an evolving regulatory environment
5+ years of relevant experience in regulatory program management or content moderation compliance
Direct experience with online safety or content moderation regulation, such as the EU Digital Services Act, UK Online Safety Act, Australia Online Safety Act, or comparable regimes (strongly preferred)
Experience in trust and safety, online safety, or regulatory compliance at a large consumer technology platform
Prior experience in a Big 4 or other professional services firm advising on content regulation, online safety, or platform compliance engagements
Experience designing risk assessment methodologies or compliance frameworks from first principles
Experience with multi-jurisdictional compliance programs in a rapidly scaling environment
Familiarity with how generative AI products intersect with content and online safety regulation
As frontier AI regulation matures globally — with the EU AI Act, evolving US state and federal frameworks, the UK's framework, and emerging regimes across APAC — Anthropic needs a dedicated owner for the compliance program that translates these obligations into operational reality. We're hiring an experienced AI Compliance Officer to design, build, and run that program.
You'll sit within Anthropic's Integrity & Compliance function and partner closely with Regulatory Legal, Policy, Product, Security, Safeguards, and the Responsible Scaling team. You will own the systems, processes, controls, documentation, and cadences that make Anthropic's compliance with frontier AI regulation defensible to regulators, auditors, and our Board. This is a senior role with significant scope to shape how a leading AI lab approaches a brand-new and rapidly evolving area of regulation. You'll report to the Head of Integrity & Compliance.
Own the design, build-out, and ongoing maintenance of Anthropic's compliance program for frontier AI regulation across the EU AI Act and other in-scope global regimes
Serve as owner of compliance policies for AI governance regulation, and as an accountable reviewer of regulator-facing documentation
Partner with Regulatory Legal on regulator engagement, regulatory strategy, and external communications; maintain the regulator engagement log and ensure coordinated responses
Own the compliance controls framework, ensuring controls are appropriate, kept current, tested on a defined cadence, and mapped back to specific regulatory obligations
Own specific second-line compliance controls directly
Define the testing and monitoring program for AI governance controls, and lead audit and regulatory inspection readiness
Recommend compliance readiness plans for new or amended regulations, in alignment with Regulatory Legal's gap analysis
Present compliance program status, metrics, and material risks to leadership and the Board on a regular cadence; coordinate inputs from partner teams
Own training content for AI regulatory obligations, partnering with Legal on sign-off where materials convey legal positions
Build and lead a small team as the program scales
Experience building or running a compliance program in a regulated industry (e.g., technology, financial services or fintech, medical devices, data protection, or telecoms)
Hands-on experience operationalizing a complex regulatory regime end-to-end, including translating legal requirements into controls, documentation, training, and reporting
Demonstrated ability to work effectively in genuine ambiguity: novel regulation, evolving regulator expectations, and a fast-moving product environment
Experience engaging directly with regulators and building trusted relationships across jurisdictions
Strong written and verbal communication skills, with the ability to engage credibly with legal, technical, and executive audiences
Experience presenting to a Board, Audit Committee, or equivalent governance body
8+ years of experience in regulatory compliance, with meaningful time spent owning a program rather than supporting one
Direct experience with the EU AI Act, NIST AI RMF, ISO/IEC 42001, SB 53, or comparable AI governance frameworks
Experience standing up a second-line compliance function at a technology company
Background in regulatory engagement, including drafting filings, responding to regulator inquiries, or supporting examinations
Familiarity with how AI systems are developed, evaluated, and deployed, even if you come from a non-technical background
Relevant certifications such as CCEP, CIPP/E, or IAPP AIGP
Genuine interest in getting AI governance right and helping build this program from the ground up
Role-specific policy: For this role, we expect staff to be able to work from our Dublin office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
As a Solutions Architect on the Startups Applied AI team at Anthropic, you will win the trust of founders and engineers by being an exceptional technical partner, helping startups successfully build on the Claude Developer Platform as they grow from early product to scale. You'll combine deep technical expertise with a builder-first mindset to help startups architect innovative LLM solutions, win technical evaluations, and get the most out of Claude.
Working closely with Account Executives and the broader Sales, Product, and Engineering teams, you'll guide startups from initial technical discovery through successful deployment and beyond. You'll leverage your expertise to help founders understand Claude's capabilities, develop evals, and design architectures that maximize the value of our AI systems.
You will play a critical role in scaling our revenue by managing complex deals and developing standardized processes that balance speed with control. Working at the intersection of finance, sales, and legal, you'll ensure our products reach customers quickly enough to support Anthropic's rapid growth while maintaining the financial discipline and risk management practices essential for a frontier AI company. You'll have the opportunity to build scalable processes from the ground up and directly impact some of our most strategic customer relationships.
Financial Analysis: Conduct growth and margin analysis, discount impact assessment, and deal profitability modeling to support data-driven decision making
Deal Structure Optimization: Collaborate with sales teams to structure deals that meet customer needs while protecting Anthropic's financial and strategic interests
Deal Review and Approval Management: Review and analyze enterprise deals exceeding standard parameters, focusing on pricing structures, contract terms, and financial implications
Cross-Functional Coordination: Serve as the primary liaison between sales, finance, legal, and compliance teams throughout the deal lifecycle, ensuring smooth coordination and timely approvals
Risk Assessment: Evaluate potential risks in non-standard deal terms and escalate appropriately to senior stakeholders
Policy Implementation: Help develop and maintain deal approval policies, pricing guidelines, and exception handling procedures aligned with company objectives
Process Development: Create and refine approval workflows, escalation procedures, and standardized templates that enable efficient processing of complex deals while maintaining appropriate controls
Documentation and Reporting: Maintain comprehensive deal documentation and provide regular reporting on deal desk metrics, trends, and performance
Have 3+ years of experience in deal desk, finance, sales strategy and operations, or related analytical roles, preferably in a high-growth technology environment
Possess hands-on experience building sophisticated financial models, conducting pricing optimization, and structuring complex deals
Have excellent communication and stakeholder management abilities, with proven success coordinating across multiple departments
Are detail-oriented with strong process improvement instincts and the ability to design scalable workflows
Have experience with CRM systems, CPQ tools, and contract management platforms
Can work effectively in fast-paced, ambiguous environments while maintaining accuracy and attention to detail
Have a collaborative mindset and enjoy solving complex problems through cross-functional partnership
Experience at a SaaS, cloud, or AI/ML company with complex enterprise sales cycles
Background in consulting, investment banking, or other roles requiring deep analytical and strategic thinking
Experience with deal desk operations at companies with subscription and consumption-based business models
Knowledge of enterprise software contract terms and industry-standard commercial practices
Understanding of AI business models and the unique commercial considerations they involve
Anthropic is seeking a Regulatory Counsel to lead our legal work on online content regulation globally and to interface with EU regulators on content and AI regulation. Our international footprint brings us within the scope of several material regulatory regimes, including the EU AI Act, the Digital Services Act, the UK Online Safety Act, and the rapidly evolving online-content regimes emerging across APAC and other international markets.
You will provide upfront regulatory-readiness counselling as new laws are developed and implemented, give ongoing day-to-day advice once those laws are in force, and lead the non-contentious side of Anthropic's engagement with regulators. You will sit at the intersection of novel legal questions, fast-moving product development, and an unusually engaged regulatory environment, working cross-functionally with the Legal, Compliance, Safeguards, Security, Product, and Operations teams.
Role-specific policy: For this role, we expect all staff to be able to work from our Dublin office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
We are seeking a Procurement Business Partner to serve as Anthropic's procurement lead for all EMEA purchasing operations. This generalist role will own end-to-end procurement across EMEA markets with particular depth in Real Estate & Workplace and Professional Services categories. You will be the go-to procurement partner for EMEA business stakeholders, building procurement infrastructure and vendor relationships outside the US as Anthropic scales globally.
This role requires someone who can operate independently across time zones, navigate EMEA contracting complexities, and bring strategic sourcing rigour to a high-growth environment.
As the EMEA AE Lead, Beneficial Deployments at Anthropic, you'll build and lead a foundational sales team driving Claude adoption across mission-driven organisations in Europe, the Middle East, and Africa. You'll leverage your consultative sales expertise and passion for social impact to secure strategic partnerships with nonprofits, foundations, INGOs, educational institutions, and social enterprises across the EMEA market.
This is a player-coach role requiring someone who can personally close complex deals while building a high-performing regional team. You'll operate with significant autonomy across time zones while maintaining tight alignment with global strategy. Our team and verticals are evolving rapidly—the ideal candidate thrives in ambiguity, is energised by building from scratch, and can flex across changing priorities as we learn what works.
The ideal candidate brings deep experience in the EMEA nonprofit or social impact technology landscape, established relationships with mission-driven institutions, and a proven track record of building teams that drive revenue and mission impact simultaneously.
This role will lead EMEA sales efforts across Beneficial Deployments verticals, which currently include:
Nonprofits & Foundations: INGOs, charitable trusts, foundations, and social enterprises across Europe, the Middle East, and Africa. Navigate federated organisational structures, EU/UK regulatory requirements, and diverse funding mechanisms.
Education: Educational institutions, EdTech organisations, and learning-focused nonprofits working to expand access and improve outcomes.
Emerging Markets: Partnerships in Africa and India with organisations driving social impact at scale.
Note: Verticals and priorities may evolve as the team learns and grows. We're looking for someone comfortable with a shifting remit who can help shape what this role becomes.
Win new business and drive revenue for Anthropic within EMEA mission-driven organisations. Navigate complex multi-stakeholder ecosystems to reach decision-makers, educate them about Claude, and help them succeed with Anthropic
Build and lead a regional team supporting EMEA customers, both inbound and outbound. Establish team structure, hiring priorities, and operational processes for scaling—while rolling up your sleeves to close deals yourself
Design and execute innovative sales strategies tailored to diverse EMEA contexts: nonprofit budget cycles and grant timelines, foundation giving patterns, and varying regulatory environments across jurisdictions
Navigate complex stakeholder ecosystems including INGO executive teams, foundation programme officers, university leadership, trustees, executive directors, and IT departments to build consensus
Develop and maintain relationships with key EMEA ecosystem players: nonprofit networks (Bond, NCVO, European Foundation Centre), education networks, and implementation partners
Inform product roadmaps by gathering feedback from EMEA nonprofit and education users. Provide insights on regional requirements including data sovereignty, language support, and compliance needs
Continuously refine the EMEA sales methodology by incorporating learnings into playbooks, templates, and best practices. Adapt global processes for regional contexts while contributing insights back to the global team
Ensure all sales activities comply with relevant data protection regulations (GDPR, UK GDPR) and address customer concerns about data sovereignty, AI ethics, and responsible deployment
Partner effectively with SF-based teams across time zones, maintaining regular cadence with Elizabeth Kelly and cross-functional stakeholders while operating with significant regional autonomy
Help shape team processes and culture as we scale from 1 to N
8+ years of B2B sales experience in nonprofit technology, EdTech, or social impact sectors, preferably in EMEA SaaS or emerging technologies
Track record of managing complex sales cycles within nonprofits, INGOs, foundations, or educational institutions, securing strategic deals by understanding both mission requirements and technical needs
Experience building and scaling sales teams, with proven ability to recruit, develop, and retain top talent while operating across multiple time zones and cultural contexts
Deep understanding of nonprofit or education sector operations, including INGO federated structures, European foundation giving, UK charity sector dynamics, and/or higher education procurement
Demonstrated ability to navigate diverse stakeholder ecosystems including trustees, executive directors, programme officers, and procurement committees
A scrappy mentality—comfortable wearing multiple hats, building from scratch, driving clarity in ambiguous situations, and doing whatever it takes to further the mission
Strong understanding of GDPR and UK data protection, with ability to address customer concerns about AI ethics and responsible deployment
Proven experience exceeding revenue targets while operating autonomously, managing an evolving pipeline across multiple market segments and time zones
Excellent communication skills with ability to adapt style across cultural contexts
Fluency in English required; proficiency in French valued given Francophone Africa coverage; additional European languages a plus
A genuine passion for social impact and experience with or commitment to advancing mission-driven work through technology
Active involvement in the EMEA nonprofit or education community through board service, advisory roles, or sector leadership
Existing relationships with major INGOs (Save the Children, Oxfam, IRC, MSF, CARE, World Vision), foundations, or educational institutions
Familiarity with nonprofit data privacy requirements, AI ethics frameworks, and responsible technology deployment
Track record of building strategic partnerships with foundations or philanthropic advisors
Experience presenting at nonprofit conferences (Bond Conference, NCVO Conference, Skoll World Forum) or education forums
Understanding of specific verticals: education technology, digital health, financial inclusion/economic mobility programmes
Location: London. Must be able to travel within EMEA (up to 30%) and to SF headquarters quarterly.
Time Zone Coverage: Must maintain regular overlap with SF-based teams (typically 4-5 hours daily) while covering EMEA business hours.
Travel: Regular travel within EMEA for customer meetings, conferences, and team gatherings; quarterly travel to SF for alignment and planning.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
We are seeking a detail-oriented and skilled Payroll Specialist to join our growing Finance team in Dublin, Ireland. In this role, you will be central to managing and scaling our international payroll operations, ensuring accuracy, compliance, and timely delivery of payroll services to our employees across multiple countries. This role will report to our International Payroll Lead.
This role combines operational excellence with strategic thinking. The ideal candidate brings deep experience from a large multinational environment with proven expertise in managing complex, multi-country payroll operations and third-party vendor relationships.
Execute full-cycle payroll processing for international entities including Ireland, UK, Switzerland, France, Germany, Japan and South Korea
Coordinate cross-functional inputs from HR, Benefits, and Finance to ensure accurate payroll processing
Develop and maintain payroll calendars, checklists, and processing schedules across all jurisdictions
Ensure payroll compliance with local statutory requirements across all jurisdictions, including tax filings, social security, and pension contributions
Support expansion of payroll operations to new countries across EMEA and APAC
Support Corporate Accounting with month-end payroll related accounting tasks
Manage relationships with third-party payroll providers
Respond to employee enquiries regarding international payroll matters in a timely and professional manner
Resolve international payroll tax notices from various tax authorities
Assist in preparing international payroll reports and analytics for management review
Support international equity compensation processes and reporting
Document and maintain standard operating procedures and compliance documentation
Support payroll systems integration projects and process automation initiatives
Hold a bachelor's degree in Accounting, Finance, or related field
Have 5-6+ years of experience in payroll operations, with at least 3 years in international payroll
Are proficient in payroll processing for various compensation types (regular, off-cycle, severance, taxable benefits, and equity)
Have strong knowledge of international payroll regulations and compliance requirements
Are experienced with payroll journal entries, account reconciliations, and month-end close procedures
Have experience working with Workday HRIS and global payroll providers
Possess excellent attention to detail, ability to multitask and maintain accuracy under tight deadlines
Are a strong communicator who can collaborate effectively across time zones and cultures
Care about building infrastructure that supports a rapidly growing organization
Have strong analytical and problem-solving skills
Are proficient in Microsoft Excel and Google Sheets
Have experience supporting rapid international expansion in technology or startup environments
Possess knowledge of business traveler payroll tax requirements
Have experience with payroll systems implementations
Are familiar with Workday or other major HRIS platforms
Anthropic is seeking an exceptional Commercial Counsel to join our founding Commercial Legal team in EMEA! You'll serve as a strategic legal partner to our sales teams, supporting complex deal negotiations and commercial activities that fuel our expansion with leading EMEA companies and organizations across all market segments, from SMB to large enterprises. In this highly impactful role, you'll guide sophisticated technology transactions across diverse industries and client sizes while providing strategic counsel on the unique legal and regulatory considerations for responsible AI deployment throughout the EMEA region.
Responsibilities:
You might be a good fit if you have:
Strong candidates may have:
Role-specific policy: For this role, we expect all staff to be able to work from our Dublin office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
About the Role
As an EMEA Nonprofit Account Executive at Anthropic, you'll drive adoption of safe, frontier AI by securing strategic partnerships with nonprofit organisations across Europe, the Middle East, and Africa. You'll leverage your consultative sales expertise to propel revenue growth while becoming a trusted partner to nonprofit leaders, helping them embed and deploy AI to amplify their impact across programme delivery, fundraising, research, and operations.
This role requires deep understanding of the diverse nonprofit landscape across EMEA, including international development organisations (INGOs), humanitarian agencies, foundations, and charitable trusts. You'll navigate varying regulatory frameworks, data protection requirements (including GDPR), and cultural contexts while building relationships across multiple time zones and languages.
The ideal candidate will be an exceptional salesperson with experience selling into EMEA markets — and specifically into Spanish-speaking contexts — a passion for developing new market segments, and the ability to operate autonomously while partnering closely with SF-based teams. By driving deployment of Anthropic's emerging products in the EMEA nonprofit sector, you will help organisations amplify their social impact while advancing the ethical development of AI.
Responsibilities
You May Be a Good Fit If You Have
Strong Candidates May Also Have
Logistics
Location: London or Dublin preferred.
Travel: Up to 40% travel within EMEA for customer meetings and events; quarterly travel to SF headquarters expected.
Education: Bachelor's degree or equivalent experience.
Visa Sponsorship: We sponsor visas where possible and retain immigration support for successful candidates.
We are seeking an experienced Senior Accounts Receivable Analyst to join our Finance team in Dublin, reporting directly to the International Revenue Accounting Lead with a dotted line to the Global Process Owner in the US. In this role, you will be responsible for day-to-day collections activities for all EMEA-based customers, ensuring timely cash application and dispute resolution across the region. This role plays an important part in ensuring global consistency by following Anthropic’s global credit and collections policies and maintaining key KPIs in line with the rest of the organization. You will serve as a key cross-functional partner to Sales, Legal, and Customer Success teams, while driving process improvements to scale AR operations as Anthropic continues its rapid growth.
Collections & Cash Application
Manage end-to-end collections for EMEA customers, proactively contacting accounts to ensure timely payment in accordance with agreed terms.
Monitor aging reports daily, prioritize outreach based on balance, risk, and strategic importance, and maintain DSO targets for the EMEA portfolio.
Ensure accurate and timely cash application, investigating and resolving unapplied or misapplied payments.
Prepare and distribute customer account statements and payment reminders on a regular cadence.
Dispute Management
Investigate and resolve billing disputes, short payments, and customer queries in a timely and professional manner.
Collaborate with Sales, Customer Success, and Legal to resolve complex disputes, including contract interpretation issues and pricing discrepancies.
Track dispute trends and root causes, providing insights and recommendations to reduce future occurrences.
Maintain detailed records of all dispute activity and resolutions for audit and reporting purposes.
Cross-Functional Partnership
Serve as the primary AR point of contact for EMEA Sales and Customer Success teams, providing account-level insights and offering payment-terms guidance to the Credit and Deal Desk teams in support of deal structuring.
Partner with Legal on contract terms impacting receivables, including payment clauses, late-payment provisions, and currency considerations.
Collaborate with Revenue Accounting and FP&A on cash forecasting, revenue recognition support, and month-end close activities.
Support external audit requirements by preparing AR-related schedules and responding to auditor inquiries.
Global Consistency & Compliance
Adhere to and champion Anthropic’s global credit and collections policies, ensuring EMEA operations are fully aligned with worldwide standards and procedures.
Maintain key AR performance KPIs (DSO, aging buckets, collections rates, dispute resolution time, etc.) in line with global benchmarks and targets.
Participate in regular global AR reviews with the US-based team, providing EMEA performance updates and aligning on policy changes or process enhancements.
Ensure consistent application of payment terms, escalation protocols, and customer communication standards across all EMEA markets in accordance with global guidelines as well as local practices.
Partner with the Global Process Owner on rolling out or modifying policies, tools, and initiatives for EMEA, and monitor their adoption and effectiveness.
Process Improvement & Systems
Identify opportunities to automate and streamline AR workflows, including collections outreach, cash application, and reporting.
Contribute to the evaluation, implementation, and optimization of AR tools and technologies (e.g., ERP systems, collections platforms, dunning automation).
Maintain AR policies, procedures, and SOX-compliant controls for the EMEA region, aligned with global standards.
Manage and update dashboards and reporting to provide visibility into EMEA AR performance, including DSO, aging, and collections effectiveness metrics.
7–10 years of progressive experience in accounts receivable or order-to-cash operations, preferably within a high-growth technology or SaaS company.
Deep expertise in B2B collections across multiple European jurisdictions, with strong knowledge of local payment practices, regulations, and cultural nuances.
Proficiency with ERP, billing, and collections systems (e.g., NetSuite, Stripe, Workday, Zuora, Tesorio, HighRadius) and advanced-to-expert Excel/Google Sheets skills.
Strong understanding of IFRS revenue recognition standards and their impact on AR processes.
Understanding of expected credit loss methodologies (IFRS 9 and CECL) and their application to accounts receivable, including the ability to support reserve calculations and provide data inputs for loss modelling.
Excellent communication and negotiation skills, with the ability to engage diplomatically with customers at all levels.
Demonstrated ability to operate within a global framework, adhering to centralized policies and KPI targets while adapting execution to local market requirements.
Proven ability to work independently, manage competing priorities, and meet deadlines in a fast-paced environment.
Bachelor’s degree in Finance, Accounting, Business Administration, or a related field.
Professional qualification (ACA, ACCA, CIMA) or equivalent certification.
Familiarity with Salesforce CRM and its integration with billing/AR systems.
Experience operating across multiple currencies and managing FX-related AR complexities.
Prior experience building AR processes from the ground up in a scaling organization.
Fluency in one or more additional European languages is a strong plus.
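As background on the expected-credit-loss support mentioned in the qualifications, the simplified IFRS 9 approach for trade receivables is often implemented as a provision matrix: each aging bucket's open balance multiplied by a historical loss rate. This sketch uses made-up numbers purely for illustration; real loss rates come from historical collection data adjusted for forward-looking information.

```python
def ecl_provision(aging_balances: dict[str, float],
                  loss_rates: dict[str, float]) -> float:
    """Simplified IFRS 9 provision matrix: expected loss per aging
    bucket = open balance x historical loss rate for that bucket."""
    return sum(balance * loss_rates[bucket]
               for bucket, balance in aging_balances.items())

# Illustrative inputs only (hypothetical balances and rates).
balances = {"current": 500_000.0, "31-60": 80_000.0, "90+": 20_000.0}
rates = {"current": 0.005, "31-60": 0.03, "90+": 0.25}
reserve = ecl_provision(balances, rates)  # 2500 + 2400 + 5000
```

The output of a calculation like this is the data input for the reserve and loss-modelling work the role supports.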
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
As the EMEA AE Lead, Beneficial Deployments at Anthropic, you'll build and lead a foundational sales team driving Claude adoption across mission-driven organisations in Europe, the Middle East, and Africa. You'll leverage your consultative sales expertise and passion for social impact to secure strategic partnerships with nonprofits, foundations, INGOs, educational institutions, and social enterprises across the EMEA market.
This is a player-coach role requiring someone who can personally close complex deals while building a high-performing regional team. You'll operate with significant autonomy across time zones while maintaining tight alignment with global strategy. Our team and verticals are evolving rapidly—the ideal candidate thrives in ambiguity, is energised by building from scratch, and can flex across changing priorities as we learn what works.
The ideal candidate brings deep experience in the EMEA nonprofit or social impact technology landscape, established relationships with mission-driven institutions, and a proven track record of building teams that drive revenue and mission impact simultaneously.
This role will lead EMEA sales efforts across Beneficial Deployments verticals, which currently include:
Nonprofits & Foundations: INGOs, charitable trusts, foundations, and social enterprises across Europe, the Middle East, and Africa. Navigate federated organisational structures, EU/UK regulatory requirements, and diverse funding mechanisms.
Education: Educational institutions, EdTech organisations, and learning-focused nonprofits working to expand access and improve outcomes.
Emerging Markets: Partnerships in Africa and India with organisations driving social impact at scale.
Note: Verticals and priorities may evolve as the team learns and grows. We're looking for someone comfortable with a shifting remit who can help shape what this role becomes.
Win new business and drive revenue for Anthropic within EMEA mission-driven organisations. Navigate complex multi-stakeholder ecosystems to reach decision-makers, educate them about Claude, and help them succeed with Anthropic
Build and lead a regional team supporting EMEA customers, both inbound and outbound. Establish team structure, hiring priorities, and operational processes for scaling—while rolling up your sleeves to close deals yourself
Design and execute innovative sales strategies tailored to diverse EMEA contexts: nonprofit budget cycles and grant timelines, foundation giving patterns, and varying regulatory environments across jurisdictions
Navigate complex stakeholder ecosystems including INGO executive teams, foundation programme officers, university leadership, trustees, executive directors, and IT departments to build consensus
Develop and maintain relationships with key EMEA ecosystem players: nonprofit networks (Bond, NCVO, European Foundation Centre), education networks, and implementation partners
Inform product roadmaps by gathering feedback from EMEA nonprofit and education users. Provide insights on regional requirements including data sovereignty, language support, and compliance needs
Continuously refine the EMEA sales methodology by incorporating learnings into playbooks, templates, and best practices. Adapt global processes for regional contexts while contributing insights back to the global team
Ensure all sales activities comply with relevant data protection regulations (GDPR, UK GDPR) and address customer concerns about data sovereignty, AI ethics, and responsible deployment
Partner effectively with SF-based teams across time zones, maintaining regular cadence with Elizabeth Kelly and cross-functional stakeholders while operating with significant regional autonomy
Help shape team processes and culture as we scale from 1 to N
8+ years of B2B sales experience in nonprofit technology, EdTech, or social impact sectors, preferably in EMEA SaaS or emerging technologies
Track record of managing complex sales cycles within nonprofits, INGOs, foundations, or educational institutions, securing strategic deals by understanding both mission requirements and technical needs
Experience building and scaling sales teams, with proven ability to recruit, develop, and retain top talent while operating across multiple time zones and cultural contexts
Deep understanding of nonprofit or education sector operations, including INGO federated structures, European foundation giving, UK charity sector dynamics, and/or higher education procurement
Demonstrated ability to navigate diverse stakeholder ecosystems including trustees, executive directors, programme officers, and procurement committees
A scrappy mentality—comfortable wearing multiple hats, building from scratch, driving clarity in ambiguous situations, and doing whatever it takes to further the mission
Strong understanding of GDPR and UK data protection, with ability to address customer concerns about AI ethics and responsible deployment
Proven experience exceeding revenue targets while operating autonomously, managing an evolving pipeline across multiple market segments and time zones
Excellent communication skills with ability to adapt style across cultural contexts
Fluency in English required; proficiency in French valued given Francophone Africa coverage; additional European languages a plus
A genuine passion for social impact and experience with or commitment to advancing mission-driven work through technology
Active involvement in the EMEA nonprofit or education community through board service, advisory roles, or sector leadership
Existing relationships with major INGOs (Save the Children, Oxfam, IRC, MSF, CARE, World Vision), foundations, or educational institutions
Familiarity with nonprofit data privacy requirements, AI ethics frameworks, and responsible technology deployment
Track record of building strategic partnerships with foundations or philanthropic advisors
Experience presenting at nonprofit conferences (Bond Conference, NCVO Conference, Skoll World Forum) or education forums
Understanding of specific verticals: education technology, digital health, financial inclusion/economic mobility programmes
Location: London. Must be able to travel within EMEA (up to 30%) and to SF headquarters quarterly.
Time Zone Coverage: Must maintain regular overlap with SF-based teams (typically 4-5 hours daily) while covering EMEA business hours.
Travel: Regular travel within EMEA for customer meetings, conferences, and team gatherings; quarterly travel to SF for alignment and planning.
About the Role
As an EMEA Nonprofit Account Executive at Anthropic, you'll drive adoption of safe, frontier AI by securing strategic partnerships with nonprofit organisations across Europe, the Middle East, and Africa. You'll leverage your consultative sales expertise to propel revenue growth while becoming a trusted partner to nonprofit leaders, helping them embed and deploy AI to amplify their impact across programme delivery, fundraising, research, and operations.
This role requires deep understanding of the diverse nonprofit landscape across EMEA, including international development organisations (INGOs), humanitarian agencies, foundations, and charitable trusts. You'll navigate varying regulatory frameworks, data protection requirements (including GDPR), and cultural contexts while building relationships across multiple time zones and languages.
The ideal candidate will be an exceptional salesperson with experience selling into EMEA markets — and specifically into French-speaking contexts — a passion for developing new market segments, and the ability to operate autonomously while partnering closely with SF-based teams. By driving deployment of Anthropic's emerging products in the EMEA nonprofit sector, you will help organisations amplify their social impact while advancing the ethical development of AI.
Responsibilities
You May Be a Good Fit If You Have
Strong Candidates May Also Have
Logistics
Location: London or Dublin preferred.
Travel: Up to 40% travel within EMEA for customer meetings and events; quarterly travel to SF headquarters expected.
Education: Bachelor's degree or equivalent experience.
Visa Sponsorship: We sponsor visas where possible and retain immigration support for successful candidates.
Our Inference team is responsible for building and maintaining the critical systems that serve Claude to millions of users worldwide. We bring Claude to life by serving our models via the industry's largest compute-agnostic inference deployments. We are responsible for the entire stack from intelligent request routing to fleet-wide orchestration across diverse AI accelerators.
The team has a dual mandate: maximizing compute efficiency to serve our explosive customer growth, while enabling breakthrough research by giving our scientists the high-performance inference infrastructure they need to develop next-generation models. We tackle complex, distributed systems challenges across multiple accelerator families and emerging AI hardware running in multiple cloud platforms.
As a Staff Software Engineer on our Inference team, you will work end to end, identifying and addressing key infrastructure blockers to serve Claude to millions of users while enabling breakthrough AI research. Strong candidates should have familiarity with performance optimization, distributed systems, large-scale service orchestration, and intelligent request routing. Familiarity with LLM inference optimization, batching strategies, and multi-accelerator deployments is highly encouraged but not strictly necessary.
High-performance, large-scale distributed systems
Implementing and deploying machine learning systems at scale
Load balancing, request routing, or traffic management systems
LLM inference optimization, batching, and caching strategies
Kubernetes and cloud infrastructure (AWS, GCP)
Python or Rust
Have significant software engineering experience, particularly with distributed systems
Are results-oriented, with a bias towards flexibility and impact
Pick up slack, even if it goes outside your job description
Want to learn more about machine learning systems and infrastructure
Thrive in environments where technical excellence directly drives both business results and research breakthroughs
Care about the societal impacts of your work
Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators
Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads
Building production-grade deployment pipelines for releasing new models to millions of users
Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage
Contributing to new inference features (e.g., structured sampling, prompt caching)
Supporting inference for new model architectures
Analyzing observability data to tune performance based on real-world production workloads
Managing multi-region deployments and geographic routing for global customers
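As a concrete, heavily simplified illustration of the request-routing work described above (not Anthropic's actual routing logic), a least-outstanding-requests policy fits in a few lines:

```python
class LeastOutstandingRouter:
    """Toy routing policy: send each request to the replica with the
    fewest in-flight requests. This is a common baseline for LLM
    serving, where request latencies vary too widely for plain
    round-robin to balance load well."""

    def __init__(self, replicas: list[str]):
        self.in_flight = {replica: 0 for replica in replicas}

    def acquire(self) -> str:
        # min() breaks ties by insertion order, so routing is
        # deterministic for equal loads.
        replica = min(self.in_flight, key=self.in_flight.get)
        self.in_flight[replica] += 1
        return replica

    def release(self, replica: str) -> None:
        # Called when the request completes, freeing capacity.
        self.in_flight[replica] -= 1
```

A production router would also weigh prompt-cache affinity, accelerator type, and per-replica queue depth; this sketch only captures the load-balancing core of the problem.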
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As an EMEA Nonprofit Account Executive at Anthropic, you'll drive adoption of safe, frontier AI by securing strategic partnerships with nonprofit organisations across Europe, the Middle East, and Africa. You'll leverage your consultative sales expertise to propel revenue growth while becoming a trusted partner to nonprofit leaders, helping them embed and deploy AI to amplify their impact across programme delivery, fundraising, research, and operations.
This role requires deep understanding of the diverse nonprofit landscape across EMEA, including international development organisations (INGOs), humanitarian agencies, foundations, and charitable trusts. You'll navigate varying regulatory frameworks, data protection requirements (including GDPR), and cultural contexts while building relationships across multiple time zones and languages.
The ideal candidate will be an exceptional salesperson with experience selling into EMEA markets — and specifically into Portuguese-speaking contexts — a passion for developing new market segments, and the ability to operate autonomously while partnering closely with SF-based teams. By driving deployment of Anthropic's emerging products in the EMEA nonprofit sector, you will help organisations amplify their social impact while advancing the ethical development of AI.
Location: London preferred. Remote within UK/EU considered for exceptional candidates.
Travel: Up to 40% travel within EMEA for customer meetings and events; quarterly travel to SF headquarters expected.
Time Zone Coverage: Must be able to maintain regular overlap with SF-based teams (typically 4–5 hours daily).
Education: Bachelor's degree or equivalent experience.
Visa Sponsorship: We sponsor visas where possible and retain immigration support for successful candidates.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Claude has your back. AIRE has Claude's. Help us keep Claude reliable for everyone who depends on it.
AIRE (AI Reliability Engineering) partners with teams across Anthropic to improve reliability across our most critical serving paths -- every hop from the SDK through our network, API layers, serving infrastructure, and accelerators and back. We jump into the trenches alongside partner teams to make the systems that deliver Claude more robust and resilient, whether responding to an incident or collaborating on longer-term projects.
Reliability here is an emergent phenomenon that transcends any single team's boundaries, so someone has to zoom out and look at the whole picture. That's us -- and it means few teams at Anthropic offer this kind of dynamic, cross-cutting exposure to the systems that matter most.
About the Role
As an EMEA Nonprofit Account Executive at Anthropic, you'll drive adoption of safe, frontier AI by securing strategic partnerships with nonprofit organisations across Europe, the Middle East, and Africa. You'll leverage your consultative sales expertise to propel revenue growth while becoming a trusted partner to nonprofit leaders, helping them embed and deploy AI to amplify their impact across programme delivery, fundraising, research, and operations.
This role requires deep understanding of the diverse nonprofit landscape across EMEA, including international development organisations (INGOs), humanitarian agencies, foundations, and charitable trusts. You'll navigate varying regulatory frameworks, data protection requirements (including GDPR), and cultural contexts while building relationships across multiple time zones and languages.
The ideal candidate will be an exceptional salesperson with experience selling into EMEA markets — and specifically into Spanish-speaking contexts — a passion for developing new market segments, and the ability to operate autonomously while partnering closely with SF-based teams. By driving deployment of Anthropic's emerging products in the EMEA nonprofit sector, you will help organisations amplify their social impact while advancing the ethical development of AI.
Responsibilities
You May Be a Good Fit If You Have
Strong Candidates May Also Have
Logistics
Location: London or Dublin preferred.
Travel: Up to 40% travel within EMEA for customer meetings and events; quarterly travel to SF headquarters expected.
Education: Bachelor's degree or equivalent experience.
Visa Sponsorship: We sponsor visas where possible and retain immigration support for successful candidates.
We're seeking an experienced Order Management professional to join our International Revenue Accounting Team. In this pivotal role, you'll spearhead the development and scaling of our international order management processes while solving complex, cross-functional challenges. You'll collaborate across the organization to drive critical financial infrastructure improvements. If you're passionate about making a significant impact at an innovative company at the forefront of AI development, join us in our mission to build cutting-edge, safe AI systems.
Core Operations & Financial Management
● Drive aspects of order management and billing operations, ensuring accuracy, completeness, and timeliness
● Independently resolve complex billing scenarios, including contract modifications, usage disputes, and non-standard pricing structures
● Lead comprehensive User Acceptance Testing (UAT) for new product launches and influence product introduction processes by providing expert guidance on billing and order management implications
● Support monthly accounting close activities, including contract review, usage validation, invoice verification, journal entries, and analytics
● Develop and track operational metrics to support strategic decision-making
● Support global business operations and international customer requirements
Strategic Partnerships & Collaboration
● Collaborate closely with other pillars of the Revenue Accounting and Operations Team, including Revenue Accounting, Technical Revenue Accounting, and AR & Collections
● Cultivate strategic partnerships with cross-functional teams across the Quote-to-Cash ecosystem, including GTM, Legal, Tax, Billing Engineering, and Finance Systems
Process & System Optimization
● Identify and implement process improvements to enhance efficiency, scalability, and overall customer experience
● Partner with vendors to optimize billing systems, evaluate new features, and implement innovative solutions
● Independently own the end-to-end RFP process and implementation of new systems in the order management domain, from initial requirements gathering through selection, deployment, and integration
Compliance, Controls & Documentation
● Establish and maintain robust controls and segregation of duties within order management and billing operations
● Support audit requirements by preparing documentation and addressing inquiries
● Develop and maintain documentation for team processes and procedures
● Bachelor's degree in Accounting, Finance, or related field
● 7+ years of progressive experience in Billing/Order Management within high-growth SaaS/technology companies, including 5+ years in a reviewer role
● Working knowledge of ASC 606 revenue recognition principles
● Expert understanding of Quote-to-Cash processes for SaaS, covering both subscription and consumption-based business models across B2B and B2C products
● Extensive experience with contracting systems, billing platforms, payment processors, and ERP systems (e.g., Salesforce, Stripe, NetSuite, Oracle, Workday Financial, Zuora)
● Proven track record leading large-scale strategic initiatives end-to-end in high-growth technology environments
● Outstanding communication and interpersonal skills
● Demonstrated ability to build relationships with diverse stakeholders and influence without direct authority
● Professional accounting qualification (ACA, ACCA, CIMA or equivalent)
● Exceptional organizational skills with meticulous attention to detail
● Proactive problem-solver who can identify opportunities for process optimization
● Adaptability to thrive in fast-paced, ambiguous environments
● Data-driven approach to business process development; SQL and database experience a plus
● Experience with third-party marketplace integrations (AWS, GCP, Azure)
● Proven ability to provide guidance, mentorship, and project leadership to team members and contractors
As a Sales Development Manager for EMEA at Anthropic, you will lead and scale our business development function across Europe, the Middle East, and Africa. You will build and manage a team of 6-8 BDRs primarily in Dublin. This role requires exceptional agility, cultural fluency across diverse European markets, and the ability to develop segment-specific strategies while navigating complex regulatory environments and regional nuances. You will be instrumental in establishing Anthropic's regional presence and building the foundation for long-term growth in EMEA.
Build, lead, and scale a team of 6-8 BDRs across EMEA markets including SEU, NEU, and DACH.
Develop and execute region-specific prospecting strategies that account for local market dynamics, cultural nuances, and competitive landscapes across diverse European markets
Support all sales segments (Startups, Commercial, Enterprise) with agility to shift resources based on regional opportunities
Partner with regional AEs and sales leadership to align pipeline generation with territory plans and revenue targets
Establish KPIs and tracking mechanisms that account for regional differences while maintaining global consistency
Create localized training programs and enablement materials that resonate with diverse European business cultures
Build and maintain relationships with regional marketing teams to optimize lead quality and campaign effectiveness
Own regional Pipeline Reviews with sales leadership covering market-specific insights and growth opportunities
Navigate complex hiring and employment regulations across multiple European countries, partnering with HR and Legal
Coach and develop BDRs on region-specific prospecting techniques and career progression
3-6 years of experience managing sales development or inside sales teams in EMEA
Proven track record of growing and scaling teams across multiple European countries/offices
Experience managing distributed teams across different time zones and cultures within EMEA
Strong understanding of business practices, sales cycles, and decision-making processes in key EMEA markets
Experience adapting global sales processes for European markets while maintaining consistency
Deep understanding of GDPR, EU AI Act, and other European regulatory requirements affecting enterprise sales
Strong analytical skills with ability to identify and act on regional market opportunities
Experience with Salesforce and sales technology stack
Excellent communication skills with ability to operate effectively across European cultures
Bachelor's degree or equivalent work experience
Experience at US-headquartered technology companies expanding in EMEA
Background in AI/ML, cloud infrastructure, or developer platforms
Track record of building BDR/SDR functions from scratch in new European markets
Experience managing both velocity (Startup/Commercial) and strategic (Enterprise) sales motions
Fluency in German, French, Spanish or other major European languages
Network of talent for BDR hiring across EMEA markets
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
As a Manager on the Startups team at Anthropic, you'll lead a team of 5-10 Growth Account Executives responsible for driving expansion and retention across our fastest-growing startup customers. You'll build and develop a high-performing team while establishing the frameworks, processes, and best practices that enable them to help customers harness the transformative potential of safe, frontier AI. In this role, you'll be responsible for growing team revenue, driving optimal commercial outcomes, and ensuring Anthropic builds long-term partnerships with the world's fastest-growing AI-native startups.
Responsibilities:
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
As an EMEA Nonprofit Account Executive at Anthropic, you'll drive adoption of safe, frontier AI by securing strategic partnerships with nonprofit organisations across Europe, the Middle East, and Africa. You'll leverage your consultative sales expertise to propel revenue growth while becoming a trusted partner to nonprofit leaders, helping them embed and deploy AI to amplify their impact across programme delivery, fundraising, research, and operations.
This role requires deep understanding of the diverse nonprofit landscape across EMEA, including international development organisations (INGOs), humanitarian agencies, foundations, and charitable trusts. You'll navigate varying regulatory frameworks, data protection requirements (including GDPR), and cultural contexts while building relationships across multiple time zones and languages.
The ideal candidate will be an exceptional salesperson with experience selling into EMEA markets (specifically Portuguese-speaking contexts), a passion for developing new market segments, and the ability to operate autonomously while partnering closely with SF-based teams. By driving deployment of Anthropic's emerging products in the EMEA nonprofit sector, you will help organisations amplify their social impact while advancing the ethical development of AI.
Location: London preferred. Remote within UK/EU considered for exceptional candidates.
Travel: Up to 40% travel within EMEA for customer meetings and events; quarterly travel to SF headquarters expected.
Time Zone Coverage: Must be able to maintain regular overlap with SF-based teams (typically 4–5 hours daily).
Education: Bachelor's degree or equivalent experience.
Visa Sponsorship: We sponsor visas where possible and retain immigration support for successful candidates.
You will play a critical role in scaling our revenue by managing complex deals and developing standardized processes that balance speed with control. You'll work at the intersection of finance, sales, legal, and compliance teams to ensure our products reach customers to support Anthropic's rapid growth. Day-to-day, you'll quarterback complex deals through CPQ, contract management, and internal approvals. Over time, you'll also identify and drive improvements to the systems and workflows you operate in — embedding approved deal structures into Salesforce, improving CPQ configurations, and helping build the operational infrastructure that lets a lean team support enterprise deal volume at scale.
Quarterback deals through internal processes — CPQ quote-building, approval routing, contract execution — ensuring accurate, timely completion
Own Anthropic's deal-execution processes end to end, ensuring all commercial workflows move smoothly through internal approvals and systems
Build AI-native tooling with internal teams to meaningfully scale our commercial processes
Optimize operational workflows that reduce friction in the sales cycle and accelerate time-to-close
Act as the bridge between Sales and supporting teams, maintaining contract and compliance management systems and ensuring agreement terms align with company standards
Maintain visibility on high-priority deals, providing appropriate escalation paths when commercial terms require executive input
Collaborate with Legal teams to validate that agreements reflect the correct commercial intent and approvals
Track and analyze deal patterns to identify bottlenecks and propose data-driven improvements to our sales operations
Maintain and improve deal documentation: approval policies, pricing guidelines, SKU guidance, and process playbooks
Have 3+ years of experience in deal desk, sales strategy, commercial operations, or related roles, preferably in a high-growth technology environment
Are comfortable both executing within existing systems (CPQ, SFDC, contract platforms) and improving those systems as you go
Bring a detail-oriented, process-improvement mindset: you notice when something is manual or error-prone and you want to fix it
Have excellent communication and stakeholder management abilities, with proven success coordinating across multiple departments
Have strong project management skills and experience juggling multiple critical initiatives
Have a bias toward action and are comfortable operating in ambiguity
Have experience with CRM systems, CPQ tools, and contract management platforms
Have a collaborative mindset and enjoy solving complex problems through cross-functional partnership
Experience with Salesforce, CPQ tools (Nue, Apttus, Steelbrick), or contract management platforms (Ironclad, Docusign CLM)
Background in project management, sales strategy, or commercial operations, where you’ve balanced operational excellence with building scaled systems
Experience with deal operations at companies with subscription and/or consumption-based business models
Knowledge of enterprise software contract terms and industry-standard commercial practices
Understanding of AI business models and their unique commercial considerations
Strong communication skills
Business partnership (experience working with Sales teams)
Deep operational experience
Systems and process optimization
Ability to build new AI-powered workflows