Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's model weights and training infrastructure are among the highest-value targets in the technology sector, facing nation-state, supply-chain interdiction, and insider threats that conventional enterprise security programs were not built to address. As Commercial Counsel, Infrastructure Security for Compute and Infrastructure at Anthropic, you'll be the day-to-day legal partner to the Chief Security Officer's infrastructure-security team and the Governance, Risk and Compliance team. You will own the contractual and regulatory layer of physical and facility security, hardware and supply-chain security, network security, vendor personnel and insider-risk flow-downs, and security regulatory and assurance matters.
You’ll work in close partnership with Anthropic teammates in Frontier, Product, Litigation, Employment, and Commercial Legal, in addition to specialized outside counsel. You will serve as the primary legal owner ensuring security requirements are in the contract before signature and that external work product aligns with Anthropic’s security posture and commercial objectives.
Draft and negotiate security design-basis and site-hardening specifications in build-to-suit, lease, and colo agreements (perimeter, access control, CCTV, intrusion detection); guard-force statements of work, post orders, and KPI regimes; visitor, contractor, and badging policy; and security clauses in shared-campus and multi-tenant arrangements
Own provenance, anti-tamper, and chain-of-custody warranties in silicon, ODM, and OEM agreements; trusted-supplier and country-of-origin restrictions; NDAA §889/§5949 and CHIPS Act guardrail flow-downs; BIS/EAR advanced-computing and semiconductor export-control flow-downs and end-use/end-user certifications; firmware-integrity, secure-boot, and golden-image escrow terms; secure logistics; counterfeit-part and grey-market controls; and secure decommissioning and certified media-destruction terms
Draft security schedules in carrier and fiber agreements (encryption-in-transit, route integrity, lawful-intercept handling), and security obligations in peering agreements
Set background-screening, training, and badge-revocation requirements for vendor and contractor personnel with site or hardware access, and flow Anthropic personnel-security standards into guard-force, security-integrator, and EPC vendor MSAs
Support CFIUS and outbound-investment screening on infrastructure vendors and sites; provide NIST/ISO/SOC 2 physical-control evidence for customer and auditor assurance in partnership with security teams; and partner with Commercial Legal on security representations in customer contracts that reference physical infrastructure
Work closely with specialized outside counsel, ensuring their work product aligns with Anthropic’s security and commercial objectives
Build the function: develop and maintain the security-schedule library, design-basis templates, vendor security questionnaire templates, and negotiation playbooks; train Procurement, Datacenter, and Network teams to apply them at scale
Serve as direct counsel to the CSO’s infrastructure-security organization, coordinating with Product Legal and Litigation on incident response, threat intelligence, law-enforcement and intelligence-community engagement, insider-threat governance, and model-weight security policy under Anthropic’s Responsible Scaling Policy
Escalate novel structures or terms that create downstream risk for Anthropic's security posture or operational flexibility; ensure security requirements address AI-specific threats, including hardware tampering, supply-chain interdiction, and high-value-target facility risk
Monitor and assess the evolving regulatory landscape affecting security and data protection, identifying higher-risk obligations for the business and partnering with security to operationalize them through policies, controls, and compliance programs
Advise on risk assessments, risk acceptance decisions, and reporting to leadership and the board; and review remediation commitments arising from assessments, customer audits, and regulator inquiries
JD and active membership in at least one U.S. state bar
Fluency in security design-basis specifications, guard-force and access-control contracting, and how security schedules interact with build-to-suit, colo, procurement, and carrier agreements
Experience with NDAA §889/§5949, CHIPS Act guardrails, CFIUS/outbound-investment screening, and trusted-supplier or country-of-origin programs
Comfort with NIST, ISO 27001, and SOC 2 physical-control frameworks and the evidence and attestation process that supports customer and auditor assurance
Ability to coordinate effectively with multiple internal legal teams and specialized outside counsel while maintaining strategic direction
Strong judgment about when contractual security terms create downstream risk for Anthropic’s security posture, audit position, or operational flexibility
Effective collaboration skills for working with the CSO’s organization, procurement, datacenter, and network teams
Communication skills that translate security and supply-chain-integrity concepts into clear risk assessments for business stakeholders
Genuine interest in infrastructure security and appreciation for why physical, hardware, and network security is mission-critical for frontier AI
10+ years of relevant legal experience with meaningful exposure to physical and facility security contracting, hardware and supply-chain security, network security schedules, or security regulatory and assurance work for critical infrastructure
In-house experience at cloud service providers, hyperscalers, defense and aerospace primes, telecom carriers, utilities, semiconductor companies, or datacenter operators supporting physical-security, supply-chain-security, or security-assurance programs; or U.S. government experience at DoD, DHS/CISA, or BIS, or on CFIUS staff
Experience at large technology companies with first-party datacenter or hardware programs supporting security contracting from the buy side
Law firm experience at practices with national-security, supply-chain, or critical-infrastructure specialization, particularly practices that have worked on NDAA §889, CFIUS, or trusted-supplier matters
Prior involvement in transactions requiring sophisticated understanding of secure logistics, chain-of-custody, firmware integrity, and certified media destruction
Familiarity with CCTV/biometrics privacy regimes, executive-protection contracting, and the contractual layer of insider-risk programs
Ability to obtain and maintain a U.S. security clearance
Role-specific policy: For this role, we expect staff to be able to work from our San Francisco, Seattle, Washington D.C., or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
We're looking for a Manager, Commercial Counsel, GTM to help lead our legal team at the forefront of shaping how cutting-edge AI technology reaches users, researchers, and enterprises worldwide. You'll work alongside experienced colleagues who operate as true strategic partners, moving fast, thinking creatively, and working collaboratively to support deals that matter. In this role, you'll support our enterprise technology and startup Go-to-Market teams, handling enterprise agreements and customer negotiations as well as team growth and scaling initiatives.
This is an excellent opportunity for a seasoned attorney to help lead and grow the commercial legal team while contributing directly to responsible AI development.
Draft, review, and negotiate commercial agreements, with a primary focus on GTM/Sales agreements with enterprise technology customers and startups
Partner closely with Sales and Go-to-Market leadership and teams to support deal velocity and provide practical legal guidance
Help build and maintain our commercial legal infrastructure, including contract templates, playbooks, and process improvements
Manage a team supporting the enterprise technology and startup segments
Identify and escalate legal risks appropriately while supporting business momentum
Stay current on legal developments affecting AI technologies
JD and active membership in at least one U.S. state bar
Experience drafting and negotiating commercial contracts, SaaS agreements, or technology transactions
Demonstrated people management experience, including hiring, developing, and leading attorneys or legal professionals
Clear written and verbal communication skills, including the ability to translate legal concepts for non-lawyer stakeholders
8+ years of legal experience, with substantial time spent on commercial contracts, SaaS agreements, or technology transactions
Active membership in the California state bar
Experience leading legal teams through high-growth phases at technology companies
Experience navigating multi-stakeholder agreements involving technical and policy considerations
Curiosity about AI technology and AI policy, and a commitment to responsible AI development
A collaborative, low-ego approach to working with colleagues and external partners
A "doer" attitude—willingness to roll up your sleeves and pitch in where needed-
Comfort in fast-paced environments with shifting priorities, and a willingness to pitch in across what the team needs
Role-specific policy: For this role, we expect all staff to be able to work from our San Francisco office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
Within Integrity & Compliance, the Privacy Programs pillar owns how we operationalize privacy across the company: from how we handle personal data in our products and research, to how we meet our obligations under the GDPR, CCPA, and the growing patchwork of global privacy law. We work closely with our Privacy Legal team on all privacy-related matters.
We're hiring a Privacy Governance Lead to own the governance backbone of that work. You'll set the strategy for how privacy governance operates at Anthropic, define the policies and controls that translate privacy principles into operating practice, and help manage relationships with the internal and external stakeholders who depend on that framework holding up under scrutiny.
This is a foundational role with significant scope. You'll be shaping a privacy governance function from a relatively early stage, with the autonomy to set the standard and the mandate to drive cross-functional change. You'll partner closely with Privacy Legal, Security, Product, Research, and the wider I&C team, and you'll contribute directly to reporting that reaches the Audit Committee and boards. You'll report to the Head of Integrity & Compliance.
Set the strategy and roadmap for Anthropic's privacy governance framework, including the policies, standards, and internal controls that map to GDPR, CCPA/CPRA, and other applicable global privacy regimes
Own the privacy documentation lifecycle end-to-end — Data Protection Impact Assessments, Records of Processing, Transfer Impact Assessments, and other accountability artifacts — including the methodology, the tooling, and the quality bar
Establish governance forums and approval workflows for privacy-significant product, research, and vendor decisions, and chair the forums where novel or high-risk questions are resolved
Own the privacy controls testing program: define what "good" looks like, set the testing cadence, and present results to the Head of Integrity & Compliance and other leadership forums
Partner with Privacy Legal to anticipate emerging privacy law and translate new obligations into concrete control changes ahead of enforcement
In partnership with Legal, co-lead privacy regulator engagement on governance matters, including responses to inquiries, audits, and complaints
Oversee the management of inputs for regulatory responses across the Privacy Programs pillar
Drive privacy training and awareness strategy for engineering, product, research, and go-to-market teams, calibrated to the actual decisions those teams make
Represent the privacy governance function in Internal Audit reporting, and in cross-functional risk and compliance forums
Build and develop the privacy governance team over time
Deep working knowledge of GDPR and at least one major US state privacy regime (CCPA/CPRA, or equivalent), including how their requirements translate into operational controls at scale
Demonstrated track record building, scaling, or transforming a privacy governance program end-to-end — policies, DPIAs, ROPAs, controls libraries, governance forums, and the operating model that supports them
Strong written communication, with the ability to produce clear policies, board-ready reporting, and practical guidance that engineering and product teams will actually use
Comfort owning hard cross-functional decisions and operating across legal, technical, and operational boundaries
A privacy certification such as CIPP/E, CIPP/US, or CIPM, or equivalent demonstrated expertise
Senior privacy governance leadership experience at a technology company operating under multiple privacy regimes simultaneously, ideally including one with novel data processing (AI/ML, large-scale platforms, or similar)
Direct experience engaging privacy regulators, particularly EU data protection authorities or the Irish DPC, on governance matters such as inquiries, audits, or complaints
Familiarity with AI-specific privacy considerations: training data governance, model memorization, output filtering, and the intersection with emerging AI regulation
Experience standing up governance functions in a high-growth environment, including building from a blank page
Demonstrated experience presenting to Audit Committees, boards, or equivalent senior governance bodies on privacy matters
Background that bridges privacy and broader compliance disciplines (security, regulatory, anti-bribery and anti-corruption, enterprise risk management)
Role-specific policy: For this role, we expect staff to be able to work from our Dublin office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
Regulatory Programs is a key pillar of our overall Integrity & Compliance function, covering a range of compliance domains including economic sanctions, US export controls, and regulatory compliance programs stemming from global AI safety regulation.
As a Content Moderation Specialist, you'll own day-to-day program management of Anthropic's global content moderation and online safety regulatory compliance program. Online safety regulation is one of the fastest-moving areas of technology law, and AI sits squarely in its sights. Regimes including the EU Digital Services Act, the UK Online Safety Act, Australia's Online Safety Act, and a growing set of emerging frameworks worldwide create novel obligations for how AI products are built, deployed, and governed. You will be at the forefront of translating those obligations into a defensible, well-documented compliance program, with regulatory risk assessments as the core of the work.
This is a deeply cross-functional role. You'll partner closely with internal counsel, Safeguards, and operations teams across Anthropic to build the compliance program and frameworks that demonstrate Anthropic meets its obligations under content regulation. This is a builder's role at a company that takes integrity seriously and moves fast — you'll exercise independent judgment on issues without clear precedent and help build durable programs that let Anthropic move quickly while honoring its obligations to regulators, customers, and the public.
Own the global content regulation risk assessment program, including the roadmap of required assessments across jurisdictions, a consistent and repeatable risk assessment methodology and framework, and the coordination of inputs, consultation, and approvals for each assessment
Build and maintain systems and trackers to assess, operationalize, and report on relevant regulatory requirements across Anthropic's products and jurisdictions
Partner with internal counsel, Safeguards, Policy, engineering, and operations teams to align internal practices with external commitments and legal obligations
Maintain a controls inventory and the compliance documentation library for content regulation, ensuring documentation is drafted, reviewed by the right stakeholders, and kept current
Conduct gap analyses when new or amended content regulations come into scope, and stand up the compliance readiness plan and workback schedule for each
Provide regular written program status reporting to stakeholders and leadership, proactively surfacing stalled or at-risk items with a proposed path to unblock
Take on additional related work as the program evolves; job duties and responsibilities may change from time to time at Anthropic's discretion or as required by applicable law
Experience managing regulatory or compliance programs at a technology company or in a regulated industry
Hands-on experience conducting or program-managing regulatory risk assessments, including coordinating inputs across multiple functions
Demonstrated ability to build and maintain compliance program artifacts, including policies, risk assessment documentation, controls inventories, program trackers, and readiness plans
A track record of executing cross-functionally, driving outcomes across legal, product, policy, and operations partners without direct authority
Excellent written and verbal communication skills, including producing clear program documentation and status reporting for senior stakeholders
Sound judgment and the ability to make decisions and move work forward with incomplete information in an evolving regulatory environment
5+ years of relevant experience in regulatory program management or content moderation compliance
Direct experience with online safety or content moderation regulation, such as the EU Digital Services Act, UK Online Safety Act, Australia's Online Safety Act, or comparable regimes (strongly preferred)
Experience in trust and safety, online safety, or regulatory compliance at a large consumer technology platform
Prior experience in a Big 4 or other professional services firm advising on content regulation, online safety, or platform compliance engagements
Experience designing risk assessment methodologies or compliance frameworks from first principles
Experience with multi-jurisdictional compliance programs in a rapidly scaling environment
Familiarity with how generative AI products intersect with content and online safety regulation
Role-specific policy: For this role, we expect staff to be able to work from our Washington, DC, San Francisco, or New York City office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
Regulatory Programs is a key pillar of our overall Integrity & Compliance function, covering a range of compliance domains including economic sanctions, US export controls, and regulatory compliance programs stemming from global AI safety regulation.
As a Content Moderation Specialist, you'll own day-to-day program management of Anthropic's global content moderation and online safety regulatory compliance program. Online safety regulation is one of the fastest-moving areas of technology law, and AI sits squarely in its sights. Regimes including the EU Digital Services Act, the UK Online Safety Act, Australia's Online Safety Act, and a growing set of emerging frameworks worldwide create novel obligations for how AI products are built, deployed, and governed. You will be at the forefront of translating those obligations into a defensible, well-documented compliance program, with regulatory risk assessments as the core of the work.
This is a deeply cross-functional role. You'll partner closely with internal counsel, Safeguards, and operations teams across Anthropic to build the compliance program and frameworks that demonstrate Anthropic meets its obligations under content regulation. This is a builder's role at a company that takes integrity seriously and moves fast — you'll exercise independent judgment on issues without clear precedent and help build durable programs that let Anthropic move quickly while honoring its obligations to regulators, customers, and the public.
Own the global content regulation risk assessment program, including the roadmap of required assessments across jurisdictions, a consistent and repeatable risk assessment methodology and framework, and the coordination of inputs, consultation, and approvals for each assessment
Build and maintain systems and trackers to assess, operationalize, and report on relevant regulatory requirements across Anthropic's products and jurisdictions
Partner with internal counsel, Safeguards, Policy, engineering, and operations teams to align internal practices with external commitments and legal obligations
Maintain a controls inventory and the compliance documentation library for content regulation, ensuring documentation is drafted, reviewed by the right stakeholders, and kept current
Conduct gap analyses when new or amended content regulations come into scope, and stand up the compliance readiness plan and workback schedule for each
Provide regular written program status reporting to stakeholders and leadership, proactively surfacing stalled or at-risk items with a proposed path to unblock
Take on additional related work as the program evolves; job duties and responsibilities may change from time to time at Anthropic's discretion or as required by applicable law
Experience managing regulatory or compliance programs at a technology company or in a regulated industry
Hands-on experience conducting or program-managing regulatory risk assessments, including coordinating inputs across multiple functions
Demonstrated ability to build and maintain compliance program artifacts, including policies, risk assessment documentation, controls inventories, program trackers, and readiness plans
A track record of executing cross-functionally, driving outcomes across legal, product, policy, and operations partners without direct authority
Excellent written and verbal communication skills, including producing clear program documentation and status reporting for senior stakeholders
Sound judgment and the ability to make decisions and move work forward with incomplete information in an evolving regulatory environment
5+ years of relevant experience in regulatory program management or content moderation compliance
Direct experience with online safety or content moderation regulation, such as the EU Digital Services Act, UK Online Safety Act, Australia's Online Safety Act, or comparable regimes (strongly preferred)
Experience in trust and safety, online safety, or regulatory compliance at a large consumer technology platform
Prior experience in a Big 4 or other professional services firm advising on content regulation, online safety, or platform compliance engagements
Experience designing risk assessment methodologies or compliance frameworks from first principles
Experience with multi-jurisdictional compliance programs in a rapidly scaling environment
Familiarity with how generative AI products intersect with content and online safety regulation
Role-specific policy: For this role, we expect staff to be able to work from our Dublin office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic is seeking a Policy Counsel to support our public policy and legal initiatives at the intersection of AI technology and regulation. As a member of our legal team, you'll work cross-functionally to navigate emerging policy challenges and help build and advocate for frameworks that promote responsible AI development. This is an exceptional opportunity to tackle novel legal and policy questions while contributing directly to the future of AI governance.
Role-specific policy: For this role, we expect all staff to be able to work from our London or Dublin office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
Within Integrity & Compliance, the Privacy Programs pillar owns how we operationalize privacy across the company: from how we handle personal data in our products and research, to how we meet our obligations under the GDPR, CCPA, and the growing patchwork of global privacy law. We work closely with our Privacy Legal team on all privacy-related matters.
We're hiring a Privacy Governance Lead to own the governance backbone of that work. You'll set the strategy for how privacy governance operates at Anthropic, define the policies and controls that translate privacy principles into operating practice, and help manage relationships with the internal and external stakeholders who depend on that framework holding up under scrutiny.
This is a foundational role with significant scope. You'll be shaping a privacy governance function from a relatively early stage, with the autonomy to set the standard and the mandate to drive cross-functional change. You'll partner closely with Privacy Legal, Security, Product, Research, and the wider I&C team, and you'll contribute directly to reporting that reaches the Audit Committee and boards. You'll report to the Head of Integrity & Compliance.
Set the strategy and roadmap for Anthropic's privacy governance framework, including the policies, standards, and internal controls that map to GDPR, CCPA/CPRA, and other applicable global privacy regimes
Own the privacy documentation lifecycle end-to-end — Data Protection Impact Assessments, Records of Processing, Transfer Impact Assessments, and other accountability artifacts — including the methodology, the tooling, and the quality bar
Establish governance forums and approval workflows for privacy-significant product, research, and vendor decisions, and chair the forums where novel or high-risk questions are resolved
Own the privacy controls testing program: define what "good" looks like, set the testing cadence, and present results to the Head of Integrity & Compliance and other leadership forums
Partner with Privacy Legal to anticipate emerging privacy law and translate new obligations into concrete control changes ahead of enforcement
In partnership with Legal, co-lead privacy regulator engagement on governance matters, including responses to inquiries, audits, and complaints
Oversee the management of inputs for regulatory responses across the Privacy Programs pillar
Drive privacy training and awareness strategy for engineering, product, research, and go-to-market teams, calibrated to the actual decisions those teams make
Represent the privacy governance function in Internal Audit reporting, and in cross-functional risk and compliance forums
Build and develop the privacy governance team over time
Deep working knowledge of GDPR and at least one major US state privacy regime (CCPA/CPRA, or equivalent), including how their requirements translate into operational controls at scale
Demonstrated track record building, scaling, or transforming a privacy governance program end-to-end — policies, DPIAs, ROPAs, controls libraries, governance forums, and the operating model that supports them
Strong written communication, with the ability to produce clear policies, board-ready reporting, and practical guidance that engineering and product teams will actually use
Comfort owning hard cross-functional decisions and operating across legal, technical, and operational boundaries
A privacy certification such as CIPP/E, CIPP/US, or CIPM, or equivalent demonstrated expertise
Senior privacy governance leadership experience at a technology company operating under multiple privacy regimes simultaneously, ideally including one with novel data processing (AI/ML, large-scale platforms, or similar)
Direct experience engaging privacy regulators, particularly EU data protection authorities or the Irish DPC, on governance matters such as inquiries, audits, or complaints
Familiarity with AI-specific privacy considerations: training data governance, model memorization, output filtering, and the intersection with emerging AI regulation
Experience standing up governance functions in a high-growth environment, including building from a blank page
Demonstrated experience presenting to Audit Committees, boards, or equivalent senior governance bodies on privacy matters
Background that bridges privacy and broader compliance disciplines (security, regulatory, anti-bribery and anti-corruption, enterprise risk management)
Role-specific policy: For this role, we expect staff to be able to work from our Washington, DC, San Francisco, or New York City office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
The Governance & Oversight pillar is the operational and structural backbone of Integrity & Compliance. It owns the frameworks, policies, controls testing, training, GRC tooling, and reporting that hold the rest of the function together — and that allow Anthropic to demonstrate accountability to employees, customers, regulators, and the public as the company scales.
We're hiring a Compliance Governance & Oversight Lead to set the strategy for how compliance governance works at Anthropic, own the systems and reporting that give leadership a clear line of sight into program health, and partner with the leads of our other I&C pillars — Privacy Programs, Regulatory Programs, and Corporate Compliance — to ensure their work is supported by a coherent operating model.
This is a senior leadership role in a function being built from the ground up. You'll have significant autonomy to shape the design of the pillar, and you'll build and develop the team that runs it. You'll report to the Head of Integrity & Compliance.
Set the strategy for compliance governance and oversight at Anthropic, defining how policies, controls, training, and reporting come together into a coherent operating model across the I&C function
Own the policy lifecycle end-to-end — drafting standards, approval workflows, version control, attestation, and accessibility — and ensure Anthropic's compliance policies are current, coherent, and demonstrably communicated
Lead selection, implementation, and ongoing administration of the GRC platform that serves as the central system of record for I&C, including risk tracking, control documentation, issue management, and workflow
Design and run the controls testing and monitoring program across all I&C pillars, partnering with pillar leads to verify controls are operating as intended and to surface gaps before they become issues
Own the compliance training program, working with subject matter experts across I&C to develop, deliver, and evidence training that meets regulatory requirements and embeds compliance into how the company operates
Drive issue management across the function — ensuring findings, exceptions, and remediation are documented, owned, escalated where needed, and driven to closure
Build the metrics, dashboards, and reporting cadences that give the Head of I&C, the broader leadership team, and Internal Audit a clear view of program health, key risks, and strategic priorities
Prepare and present compliance reporting to leadership, synthesizing inputs from across the pillars into clear narratives
Coordinate the function's responses to compliance-related due diligence requests from customers, partners, and investors
Manage the I&C budget and vendor portfolio, providing the administrative foundation for the function's operations
Build and develop the Governance & Oversight team as the pillar's scope and headcount grow, and embed a culture of practical, well-evidenced compliance across the company
Significant experience in compliance, risk, internal audit, or a closely related field, including leadership of a compliance operations, governance, or oversight function
Demonstrated track record building or substantially scaling a compliance program — policies, controls testing, training, GRC tooling, and reporting — end-to-end, ideally from an early stage
Deep working knowledge of compliance program fundamentals: policy management, training, controls testing, issue management, and program reporting
Experience selecting, implementing, and operating GRC technology platforms, and using technology to make compliance more effective and less burdensome for the wider organization
Track record of preparing and presenting compliance reporting to senior leadership, Internal Audit, and/or board-level governance bodies
Strong written and verbal communication, with the ability to translate substantive compliance requirements into operating processes that engineering, product, and go-to-market teams will actually adopt
Demonstrated ability to lead through influence across a senior, cross-functional stakeholder group, and to make and defend cross-functional decisions
Experience building and developing teams
10+ years of relevant experience
Prior experience at a Big 4 or comparable compliance consulting or advisory professional services firm, in addition to in-house leadership experience
Experience leading compliance governance at a high-growth technology company, ideally one operating across multiple regulatory regimes
Exposure to AI-specific compliance considerations and the emerging regulatory landscape for AI
Experience standing up or transforming a compliance function in a hyper-scaling environment, including building from a blank page
Direct experience presenting to or supporting Audit Committee or board-level reporting on compliance matters
Role-specific policy: For this role, we expect staff to be able to work from our Washington, DC, San Francisco, or New York City office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period.
Our Sanctions & Regulatory Programs pillar is a key part of our overall Integrity & Compliance function and covers a range of compliance domains, including economic sanctions, US export controls, and regulatory compliance programs stemming from global AI safety regulation.
As a member of the Sanctions & Regulatory Programs team, you'll be a core driver of Anthropic's trade compliance program, covering both economic sanctions and US export controls. The rules governing AI models and compute are still being written, and sanctions and export controls exposure runs through globally accessible APIs, downstream and embedded-partner channels, and rapid geographic expansion.
In this role you'll roll up your sleeves across the full sanctions and export controls compliance remit: partnering with internal counsel on the interpretation of applicable law, setting the policy frameworks and escalation standards that govern how Anthropic operates, and advising first-line teams (Operations, Product, Engineering, Go-to-Market, Finance, and Safeguards) on how to implement them. This is a builder's role at a company that takes integrity seriously and moves fast. You'll be at the forefront of novel trade compliance questions in artificial intelligence, exercise independent judgment on issues without clear precedent, and help build durable programs that let Anthropic move quickly while honoring its obligations to regulators, customers, and the public.
Own the policy frameworks for Anthropic's sanctions and export controls programs, and keep them current as regulations and the business evolve
Serve as the day-to-day subject-matter expert on international sanctions and US export controls (e.g., OFAC and other applicable sanctions regimes, EAR classification and advisory, BIS rules, deemed exports, and restricted-party / end-use and end-user controls), partnering with Trade Compliance Counsel on interpretation and with first-line teams on implementation
Conduct sanctions and export controls risk assessments for new products, features, partner and reseller channels, and geographic expansion, and translate findings into clear control requirements
Define screening standards, thresholds, and escalation criteria for restricted-party screening, and partner with Regulatory Operations on program design, edge-case handling, and exception review
Advise on third-party, downstream, and embedded-partner access risk, including contractual trade controls provisions and diligence standards for resellers, marketplaces, and platform partners
Review escalated matters from first-line teams, disposition complex screening and end-use cases, and determine when regulator engagement, licensing, or voluntary self-disclosure is warranted in partnership with Legal
Monitor applicable regulatory developments and enforcement actions, assess impact on Anthropic, and drive any required program changes
Conduct and coordinate testing and assurance over first-line trade compliance controls, and track findings and remediation to closure
Develop trade compliance training and awareness content for relevant employee populations
Prepare program reporting for I&C and Legal leadership on program health, key risks, and emerging issues
Substantive, hands-on experience with economic sanctions compliance (OFAC, EU, UK, and other applicable regimes), including program design, risk assessment, and screening governance, in a commercial environment
Substantive, hands-on experience with US export controls (EAR/BIS), including classification, end-use/end-user analysis, and deemed-export assessment
Experience building or materially maturing a sanctions or export controls compliance program: policies, risk frameworks, controls, testing, and training
A track record of partnering with legal counsel to interpret trade controls requirements and translating them into practical guidance for product, engineering, operations, and commercial teams
Experience reviewing and advising on trade controls provisions in commercial agreements and on third-party / downstream-customer diligence
Excellent written and verbal communication skills, including drafting policy and preparing materials for senior leadership
7+ years of relevant experience in sanctions and/or export controls compliance
Prior experience in a Big 4 or other compliance consulting or advisory professional services firm
Experience in a hyper-scaling tech company or other fast-paced environment
Exposure to trade compliance in software, cloud, or AI contexts
Sound judgment and the ability to make risk-based decisions and move work forward with imperfect information
A leadership style characterized by integrity, humility, strategic thinking, and solution-orientation
Role-specific policy: For this role, we expect staff to be able to work from our Washington, DC, San Francisco, or New York City office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period.
Our Corporate Compliance program is a key pillar of our overall Integrity & Compliance function and covers a range of compliance domains, including anti-bribery, anti-corruption, employee conduct, and third-party management.
As a member of the Corporate Compliance team, you'll be a core driver on a small, high-leverage team enhancing and executing Anthropic's Corporate Compliance program. This is a generalist seat anchored by deep anti-bribery/anti-corruption (ABAC) expertise. You'll roll up your sleeves across the full Corporate Compliance remit (third-party risk, gifts & hospitality, conflicts of interest, outside business activities, the Code of Conduct, speak-up matters, and internal investigations) and serve as the ABAC subject-matter expert the business looks to: setting the standards for third-party risk management, advising on government interactions and high-risk hospitality, and supporting Legal on compliance terms in commercial agreements. You'll also be a thought partner on how we use AI to run a modern compliance function.
This is a builder's role at a company that takes integrity seriously and moves fast. We're looking for a pragmatic, solution-oriented problem-solver who operates comfortably in grey areas, brings a beginner's mind to the job, and holds the line — clearly and credibly — where it matters. You'll work directly with legal, policy, partnerships, and go-to-market teams across multiple regions, exercise independent judgment on novel questions, and help design the frameworks that enable Anthropic to scale responsibly.
Advise business teams on government-official interactions, gifts and hospitality, third-party risk, and related FCPA / UK Bribery Act / local-law questions
Support Legal on anti-corruption, conflict-of-interest, and related compliance terms in commercial agreements
Review and disposition conflicts of interest, outside business activity requests, and gifts & hospitality matters, including escalations and edge cases
Serve as an ABAC subject-matter expert to the owner of the third-party risk management framework — advising on risk tiering, screening standards, ongoing monitoring, and escalation paths
Conduct and support internal investigations into compliance matters, including running discrete matters end-to-end as first chair
Develop policies, guidance, and training; partner across the business on cross-functional compliance topics
Help design and supervise AI-assisted compliance workflows (screening, intake triage, diligence drafting), bringing judgment to what gets automated and what stays human
Substantial depth in anti-corruption / FCPA / third-party risk, whether gained in-house, in a law firm's white-collar/compliance practice, or in government (DOJ, SEC, SFO, or similar)
Hands-on experience running or overseeing third-party due diligence end to end: scoping, screening/adverse media, questionnaires, risk analysis, and mitigation
Experience conducting internal investigations — scoping, evidence collection and review, witness interviews, findings memos, and remediation recommendations
Experience advising Legal and deal teams on anti-corruption and related compliance terms and risks in commercial agreements
Clear, practical writing and the ability to give business-usable advice
Sound, independent judgment in ambiguous, fast-moving situations — knowing when to align, when to push, and when to escalate
7+ years of relevant experience
Range across the broader corporate compliance portfolio (Gifts & Hospitality, Conflicts, Outside Business Activities, Code of Conduct, speak-up) and the curiosity to learn what you haven't done yet
High EQ — reads rooms and relationships well and earns trust quickly across stakeholders
Genuine interest in using AI tools to do compliance work better
Experience standing up or maturing a compliance program at a high-growth or technology company
International experience — advising on market entry, partner/reseller programs, or government engagement outside the U.S.
Significant first-chair internal investigations experience, including privileged investigations and presenting findings to senior leadership or a board/audit committee
Familiarity with sanctions/export, political-law, or human-rights/supply-chain risk management (adjacent areas this role will touch)
Role-specific policy: For this role, we expect staff to be able to work from our Washington, DC, San Francisco, or New York City office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period.
Anthropic is seeking an exceptional Contracts Manager to support commercial and technology transactions. We'll rely on you to help review, draft, and negotiate commercial and procurement contracts to support our cutting-edge AI systems. You'll also build scalable contracting processes and collaborate across Research, Product, Finance, Security, Business Operations, and Procurement teams.
Draft, analyze, negotiate, and complete complex commercial transactions with the company’s vendors, including SaaS and IP licensing agreements
Propose and develop scalable solutions to improve contracting efficiency
Collaborate with business units and legal team members to streamline contract management processes, identify potential risks, and drive continuous improvement in contract handling efficiency
Collaborate with the Research, Product, Finance, Security, Business Operations, and Procurement teams to achieve key business objectives and deliver strategic, business-minded, and solutions-focused counsel
We're a relatively small legal team supporting big things, so each team member practices across multiple areas. We'd love to hear where else you'd be interested in pitching in, whether as a function of your experience or your interests
Experience operating in a fast-paced technology startup where priorities shift rapidly and schedules "move to the left," and pride in your adaptability and ability to pivot with speed and grace
Initiative and autonomy in managing complex contractual matters, effectively prioritizing competing deadlines
An understanding of what's important in the context of a contract and the organization's mission, when to be flexible, and when to draw a hard line
Excitement to grow with an organization and help shape the culture of the commercial function
A preference for straightforward, concise explanations over legalese and overwrought drafting
A knack for identifying and implementing efficient processes and policies
The ability to thrive as a member of cross-functional teams building frontier technologies, with a desire to develop a deep understanding of our technical teams and what we are building
Comfort operating outside your areas of expertise and in uncharted legal territory
A "doer" mentality, willing to roll up your sleeves to get things done as a team player who doesn't hesitate to jump in to do work that needs to get done, whether glamorous or not
7+ years of related contract management experience, with at least 3 years supporting technology transactions, vendor engagements, or partnerships
Proven ability to collaborate across various organizational levels and functions to facilitate efficient contracting processes and execution while building strong relationships
Substantive knowledge of software agreements, reseller agreements, services agreements, strategic partnerships, and general contract terms
Excellent verbal and written communication, analytical, collaboration, and interpersonal skills
A motivated self-starter, able to juggle multiple priorities in a dynamic environment
Authentic integrity and a deep understanding of the importance of ethics in business
As frontier AI regulation matures globally (the EU AI Act, evolving US state and federal frameworks, the UK's approach, and emerging regimes across APAC), Anthropic needs a dedicated owner for the compliance program that translates these obligations into operational reality. We're hiring an experienced AI Compliance Officer to design, build, and run that program.
You'll sit within Anthropic's Integrity & Compliance function and partner closely with Regulatory Legal, Policy, Product, Security, Safeguards, and the Responsible Scaling team. You will own the systems, processes, controls, documentation, and cadences that make Anthropic's compliance with frontier AI regulation defensible to regulators, auditors, and our Board. This is a senior role with significant scope to shape how a leading AI lab approaches a brand-new and rapidly evolving area of regulation. You'll report to the Head of Integrity & Compliance.
Own the design, build-out, and ongoing maintenance of Anthropic's compliance program for frontier AI regulation across the EU AI Act and other in-scope global regimes
Serve as owner of compliance policies for AI governance regulation, and as an accountable reviewer of regulator-facing documentation
Partner with Regulatory Legal on regulator engagement, regulatory strategy, and external communications; maintain the regulator engagement log and ensure coordinated responses
Own the compliance controls framework, ensuring controls are appropriate, kept current, tested on a defined cadence, and mapped back to specific regulatory obligations
Own specific second-line compliance controls directly
Define the testing and monitoring program for AI governance controls, and lead audit and regulatory inspection readiness
Recommend compliance readiness plans for new or amended regulations, in alignment with Regulatory Legal's gap analysis
Present compliance program status, metrics, and material risks to leadership and the Board on a regular cadence; coordinate inputs from partner teams
Own training content for AI regulatory obligations, partnering with Legal on sign-off where materials convey legal positions
Build and lead a small team as the program scales
Experience building or running a compliance program in a regulated industry (e.g., technology, financial services or fintech, medical devices, data protection, or telecoms)
Hands-on experience operationalizing a complex regulatory regime end-to-end, including translating legal requirements into controls, documentation, training, and reporting
Demonstrated ability to work effectively in genuine ambiguity: novel regulation, evolving regulator expectations, and a fast-moving product environment
Experience engaging directly with regulators and building trusted relationships across jurisdictions
Strong written and verbal communication skills, with the ability to engage credibly with legal, technical, and executive audiences
Experience presenting to a Board, Audit Committee, or equivalent governance body
8+ years of experience in regulatory compliance, with meaningful time spent owning a program rather than supporting one
Direct experience with the EU AI Act, NIST AI RMF, ISO/IEC 42001, SB 53, or comparable AI governance frameworks
Experience standing up a second-line compliance function at a technology company
Background in regulatory engagement, including drafting filings, responding to regulator inquiries, or supporting examinations
Familiarity with how AI systems are developed, evaluated, and deployed, even from a non-technical background
Relevant certifications such as CCEP, CIPP/E, or IAPP AIGP
Genuine interest in getting AI governance right and helping build this program from the ground up
Role-specific policy: For this role, we expect staff to be able to work from our Dublin office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period.
As Commercial Counsel, Networking & Datacenter Operations for Compute and Infrastructure at Anthropic, you’ll lead legal execution for the connective tissue and steady-state operation of Anthropic’s infrastructure. This role spans dark fiber and IRU agreements, IP transit and peering, subsea-cable capacity, network-equipment procurement, and the Operations & Maintenance (O&M), facilities-management, commissioning, decommissioning, and resilience contracting that keeps production sites online once the keys are handed over.
You’ll work in close partnership with Anthropic teammates in Datacenter Legal, Strategic Technology & Procurement, Infrastructure Security, Treasury/Risk and Public Policy, in addition to specialized outside counsel. You will serve as the primary legal owner ensuring external work product aligns with Anthropic’s commercial objectives and integrates with our overall compute and availability strategy.
Negotiate dark fiber and IRU agreements (long-haul and metro), wavelength and lit-capacity services, IP transit and peering (settlement-free and paid), internet-exchange participation, subsea-cable capacity and landing-party agreements, carrier-hotel and meet-me-room agreements, and cross-connect MSAs
Support the telecom regulatory interface, including FCC Section 214 authorizations, submarine-cable landing licenses and Team Telecom (EO 13913 Committee) review, the FCC’s 2025 subsea-cable security and IRU rules, private-carrier status determinations, and rights-of-way, franchise, and pole-attachment where Anthropic builds first-party fiber — working in close coordination with Infrastructure Security Legal on the national-security overlay for subsea capacity and landing-party participation
Structure and negotiate O&M and integrated-facilities-management contracts, critical-spares and break-fix agreements, smart-hands SLAs with colo landlords, and disaster recovery and resilience contracting
Own asset disposition, decommissioning, and media-sanitization terms, and data-hall fit-out and tenant-improvement work, in coordination with Infrastructure Security and Datacenter Legal
Support property, builder’s-risk, business-interruption, and equipment-breakdown insurance placement with Treasury/Risk, and coordinate with Infrastructure Security Legal on guard-force, vendor-personnel, and physical-security flow-downs in O&M and facilities-management contracts
Work closely with specialized outside counsel, ensuring their work product aligns with Anthropic’s commercial objectives
Build and own the network and O&M contract template library, SLA-credit playbooks, and intake and approval workflows so Network and Site-Ops teams can self-serve standard transactions; train Procurement and operations partners to apply them at scale
Manage SLA-credit claims, outage and root-cause-related disputes, warranty and break-fix claims, and pre-litigation vendor escalations, in coordination with Litigation
Collaborate with the Compute team’s production-site and network-operations leads who handle operational execution, change management, and governance, providing legal and risk management expertise
Understand how networking and operations legal structures interact with the compute capacity they support — availability SLAs, route diversity, and maintenance windows
Escalate novel structures or terms that create downstream risk for Anthropic’s availability or operational flexibility; ensure vendor arrangements accommodate AI-specific requirements including high-density power, liquid cooling infrastructure, and rapid scaling capacity
JD and active membership in at least one U.S. state bar
Fluency in availability and SLA constructs, route diversity, and how network and O&M terms interact with colo leases, build-to-suit agreements, and financing arrangements
Experience with high-volume, template-driven, recurring vendor contracting and the discipline to keep it fast without losing control of risk
Comfort with telecom regulatory concepts (FCC Section 214, cable-landing licenses, Team Telecom review, rights-of-way) and the operational hand-off from construction to commissioning to steady-state
Ability to coordinate effectively with specialized outside counsel while maintaining strategic direction
Strong judgment about when vendor terms create downstream risk for availability, security, or operational flexibility
Effective collaboration skills for working with the business teams who run production sites and the network, and excellent communication skills that translate network and operations concepts into clear risk assessments for business stakeholders
Genuine interest in digital infrastructure operations and appreciation for why network and site availability is mission-critical for frontier AI
10+ years of relevant legal experience, with meaningful exposure to telecom and network transactions (dark fiber, IRUs, transit, peering, subsea), datacenter operations, facilities management, or critical-infrastructure O&M contracting
In-house experience at cloud service providers, neo-clouds, carriers, subsea-cable consortia, or datacenter operators supporting network, colo, and O&M transactions
Experience at large technology companies with first-party backbone or production-site programs supporting network and operations transactions from the buy side
Law firm experience at practices with telecom, technology transactions, or infrastructure specialization, particularly practices that have worked on fiber, peering, subsea, or datacenter operations matters
Prior involvement in transactions requiring sophisticated understanding of commissioning requirements, operational handoff, change management, and decommissioning
Familiarity with insurance and risk-transfer structures used in large-scale operating infrastructure, and with international vendor contracting for non-US production sites
Role-specific policy: For this role, we expect staff to be able to work from our San Francisco, Seattle, Washington, DC, or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period.
Anthropic is seeking a Regulatory Counsel to lead our legal work on online content regulation globally, and to interface with EU regulators on content regulation and AI regulation. Our international footprint brings Anthropic within the scope of several material regulatory regimes, including the EU AI Act, the Digital Services Act, the UK Online Safety Act, and the rapidly evolving online-content regimes emerging across APAC and other international markets.
You will provide upfront regulatory-readiness counseling as new laws are developed and implemented, provide ongoing day-to-day advice once those laws are in force, and lead the non-contentious side of Anthropic's engagement with regulators. You will sit at the intersection of novel legal questions, fast-moving product development, and an unusually engaged regulatory environment, working cross-functionally with Legal, Compliance, Safeguards, Security, Product, and Operations teams.
Role-specific policy: For this role, we expect staff to be able to work from our Dublin office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period.
We are looking for an experienced Corporate Legal Specialist to join Anthropic's Corporate Legal team. In this role, you will be a key partner to our team and to cross-functional stakeholders, supporting entity and board management, governance, and corporate transactions across Anthropic's growing portfolio of U.S. and international entities. This role is well-suited for someone who thrives in a fast-moving, high-volume environment; enjoys building and developing procedures and processes; and is comfortable navigating the complexity of a rapidly scaling company.
You will work closely with colleagues across Legal, Finance, Tax, People, and external counsel to ensure Anthropic's corporate records, filings, and governance practices are accurate, comprehensive, timely, and audit-ready.
Responsibilities:
Manage the full lifecycle of U.S. entity administration, including formations, qualifications, dissolutions, and annual reports; assist with franchise tax filings; and obtain certificates of good standing and certified copies, coordinating notarizations and apostilles.
Maintain accurate and up-to-date corporate records for Anthropic PBC and its subsidiaries, including minute books, stock ledgers, org charts, officer and director lists, and entity management databases.
Support board and shareholder governance matters, including organizing appointments and resignations of directors, officers, and authorized signatories; preparing board and shareholder meeting materials; and drafting resolutions, written consents, secretary's certificates, and incumbency certificates.
Assist with new entity incorporations and structuring, including domestic and international subsidiary formations and post-incorporation operational matters (e.g., opening bank accounts, obtaining tax registrations, and injecting capital), in coordination with outside counsel and cross-functional teams. You will also help with entity dissolutions, mergers, and post-acquisition integration of acquired entities.
Support Anthropic's international entities with board and shareholder meetings, corporate maintenance, local filings, KYC requests, apostilles, legalizations, and cross-border documentation.
Coordinate signatory authority, powers of attorney, and corporate approvals for commercial, financing, real estate, and employment matters.
Respond to diligence requests in connection with financings, acquisitions, audits, commercial transactions, and regulatory inquiries.
Assist with in-house notarizations (if licensed as a State of California Notary Public).
Own and continuously improve entity management tools, workflows, and internal knowledge resources; help build scalable processes as Anthropic's footprint grows.
Partner with cross-functional teams on matters including equity administration, tax, compliance, insurance, and trademark/IP record maintenance.
You might be a good fit if you:
Have at least 8 years of relevant experience in a paralegal role in a corporate legal department or law firm.
Have proven expertise in U.S. entity management and corporate administrative and compliance work; international entity management experience is preferred.
Have strong experience with entity structuring – incorporations, dissolutions, mergers, and restructurings – including related post-closing work.
Are highly organized, detail-oriented, and able to manage a high volume of simultaneous matters without losing accuracy.
Are a proactive self-starter who can operate independently, spot issues, and drive work to completion with minimal oversight.
Have excellent written and verbal communication skills and can work effectively with attorneys, executives, external counsel, and cross-functional partners.
Are comfortable working in a fast-paced environment where priorities shift and processes are still being built.
Have high judgment around confidentiality and sensitive corporate information.
Strong candidates may have:
Experience supporting international entity management, including formations, governance, and annual maintenance across EMEA, APAC, and LATAM jurisdictions.
Experience at a venture-backed technology company, including supporting equity financings, secondary transactions, or IPO readiness.
Familiarity with entity and contract management platforms (e.g., Diligent Entities and Ironclad) and board portals.
Experience coordinating with outside counsel and vendors across multiple jurisdictions.
An ABA Paralegal certificate.
A valid State of California Notary Public License, or an interest in obtaining one.
Role-specific policy: For this role, we expect all staff to be able to work from our San Francisco office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
As a Commercial Counsel for Compute and Infrastructure at Anthropic, you'll lead legal execution for Anthropic's infrastructure transactions. These could span the spectrum of server-rack compute, networking, semiconductor procurement and development, robotics, manufacturing and supply chain operations, direct datacenter leases, construction agreements, energy procurement, datacenter operations, and the financing structures that enable infrastructure investments at massive scale. You'll work in close partnership with Anthropic teammates in Frontier Legal, Security, Litigation, and Public Policy, in addition to specialized outside counsel, serving as the in-house coordinator who ensures external work product aligns with Anthropic's commercial objectives and integrates with our overall compute strategy.
Role-specific policy: For this role, we expect all staff to be able to work from our San Francisco, Seattle or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
Anthropic is seeking an exceptional Contracts Manager to support commercial and technology transactions. We'll rely on you to help review, draft, and negotiate commercial and procurement contracts that support our cutting-edge AI systems. You'll also build scalable contracting processes and collaborate across Product, Sales, Finance, Business Operations, Research, and Procurement teams.
Role-specific policy: For this role, we expect all staff to be able to work from our San Francisco or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
About the role:
Anthropic is seeking an exceptional Contracts Manager, Public Sector to support commercial and technology transactions. We'll rely on you to help review, draft, and negotiate commercial and procurement contracts that support our cutting-edge AI systems. You'll also build scalable contracting processes and collaborate across Product, Sales, Finance, Business Operations, Research, and Procurement teams.
Responsibilities:
You might be a good fit if you have:
Strong candidates may have:
Role-specific policy: For this role, we expect all staff to be able to work from our San Francisco, DC or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
Anthropic is seeking an exceptional Commercial Counsel to join our founding Commercial Legal team in EMEA! You'll serve as a strategic legal partner to our sales teams, supporting complex deal negotiations and commercial activities that fuel our expansion with leading EMEA companies and organizations across all market segments, from SMB to large enterprises. In this highly impactful role, you'll guide sophisticated technology transactions across diverse industries and client sizes while providing strategic counsel on the unique legal and regulatory considerations for responsible AI deployment throughout the EMEA region.
Responsibilities:
You might be a good fit if you have:
Strong candidates may have:
Role-specific policy: For this role, we expect all staff to be able to work from our Dublin or London office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
About the role:
We are seeking an experienced People Legal Specialist to join our growing legal team and serve as the operational backbone for a small, high-impact team that advises on how Anthropic interacts with the people who power our mission: employees, candidates, advisors, and contingent workers globally. You will be a trusted project manager and process-builder, working under the guidance of legal counsel to keep employment matters running efficiently so our attorneys can focus on the thorniest strategic questions. You will partner closely with colleagues on the Litigation & Regulatory team and with the People team's operations functions to help People Legal scale its support for a rapidly growing, globally distributed workforce at the frontier of AI.
Responsibilities:
You may be a good fit if you have:
Strong candidates may have:
Role-specific policy: For this role, we expect all staff to be able to work from our San Francisco, Seattle, or New York offices at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
As a Commercial Counsel for Datacenters & Construction at Anthropic, you'll lead legal execution for Anthropic's largest physical infrastructure transactions: direct datacenter leases, construction agreements, and the financing structures that enable infrastructure investments at massive scale. This role focuses on facilities where Anthropic has direct control through ownership or long-term lease, complementing the colo & networks and cloud compute legal teammates (who cover facilities where Anthropic is a tenant and our hyperscaler relationships, respectively). You'll work in close partnership with specialized outside counsel on financing and real estate matters, serving as the in-house coordinator who ensures external work product aligns with Anthropic's commercial objectives and integrates with our overall compute strategy.
Role-specific policy: For this role, we expect all staff to be able to work from our San Francisco or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
We're looking for a Commercial Counsel, GTM to join our legal team at the forefront of shaping how cutting-edge AI technology reaches users, researchers, and enterprises worldwide. You'll work alongside experienced colleagues who operate as true strategic partners—moving fast, thinking creatively, and working collaboratively to support deals that matter. In this role, you'll focus heavily on supporting our Go-to-Market teams, handling enterprise agreements and customer negotiations while gaining exposure to a range of commercial work, including research collaborations, product partnerships, and procurement. Additionally, you'll have opportunities to work on cutting-edge AI safety and research agreements, including threat-sharing frameworks with leading security organizations and collaborations that shape and advance responsible AI development standards.
This is an excellent opportunity for a motivated attorney to deepen their commercial-transactions expertise while contributing directly to responsible AI development.
Role-specific policy: For this role, we expect all staff to be able to work from our San Francisco, Seattle or New York offices at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
As a Commercial Counsel for Colo and Networks at Anthropic, you'll lead legal execution for the distributed component of Anthropic's compute strategy: colocation agreements, network connectivity and infrastructure, and datacenter services. You'll work closely with Anthropic's compute transaction managers to secure capacity through third-party facilities and emerging providers, complementing our hyperscaler relationships and direct datacenter development workstreams.
Role-specific policy: For this role, we expect all staff to be able to work from our San Francisco or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic