Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role:
As an Applied AI Security Architect, you will serve as Anthropic's trusted security expert for our most demanding enterprise customers. You'll engage directly with CISOs, security architects, compliance officers, and technical leaders at the world's largest financial institutions, insurance companies, and other highly regulated enterprises to address their most critical questions about deploying Claude safely and securely.
This is a pre-sales technical role focused on security, compliance, networking, and data architecture. Your job is to walk into a room full of security professionals and demonstrate deep expertise in enterprise security, regulatory compliance, and data protection. You'll help customers understand Claude's security architecture, data handling practices, and deployment options, and partner with them to design solutions that meet their specific regulatory and organizational requirements.
You'll bring significant experience in enterprise security, cloud architecture, and technical pre-sales within regulated industries. Whether you've been a Security Architect, Solutions Architect, Field CTO, or senior pre-sales engineer at a cloud or security vendor, what matters is that you understand how large institutions evaluate and adopt technology, especially in financial services, and can speak credibly to their security and compliance concerns.
We are looking for someone excited to help define how enterprises should think about security and compliance in the age of AI. How do the Model Context Protocol (MCP), autonomous agents, and role-based access control (RBAC) work together? If working at the intersection of AI adoption and regulated industries appeals to you, this is the role for you.
Responsibilities:
You may be a good fit if you have:
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to sponsor visas for every role and every candidate. If we make you an offer, though, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Benefits are a core expression of how we support Ants and their families, now and over the arc of their lives. In just over two years, our Benefits function has scaled to a global program covering thousands of employees across multiple regions, and we are now adding depth to match that breadth.
We are hiring a senior individual contributor to anchor a major piece of our global program. The role is shaped to fit the strongest candidate's regional expertise. You will primarily own one of two scopes:
Both versions of this role share the same level, the same reporting line, and the same core competencies. We will identify which scope fits best during the interview process based on where your hands-on experience and strongest interests lie.
This is a builder role. You will design programs where infrastructure does not yet exist, run broker selections, negotiate carrier terms, and stand up audit-ready operations across multiple jurisdictions.
You will report to the Head of Benefits and partner closely with our Senior Benefits Partner.
Regardless of regional anchor, you will:
If you anchor in Americas, you will additionally:
If you anchor in International, you will additionally:
The Discovery team is organized around the north star of building an AI scientist — a system capable of solving the long-horizon reasoning challenges and core capabilities needed to push the scientific frontier. The team trains large-scale models, runs complex multi-week experiments, and builds novel products at the intersection of AI and science.
As a Technical Program Manager on the Discovery team, you will own the systems and programs that determine how fast our research moves: compute planning, scientific RL environment health, and the vendor pipelines that supply them, with scope to incubate new programs in domains like bio R&D. Strong candidates should have an ML engineering or research background and have grown into program leadership. You'll need real technical depth: the ability to debug data pipelines, read RL transcripts to spot issues, and make allocation and quality decisions in real time when experimental or production runs hit problems. You'll need organizational effectiveness in equal measure: the ability to navigate a fast-growing organization, quickly identify the critical people and teams across research, infrastructure, product, and data operations, and coordinate across them without losing velocity.
Join us in our mission to build AI systems that push the frontiers of science and benefit humanity.
The Capacity & Efficiency team sits inside Anthropic’s Compute organization and owns the cost, utilization, and attribution story for non-accelerator infrastructure — the network, compute, and storage backbone that moves petabytes between training clusters, inference fleets, and object storage across clouds and regions. The scale is real, the spend is large, and the efficiency levers are still mostly unpulled.
We work alongside the Systems Networking team (who build and operate the fabric) and the Observability team (who own the telemetry platform). This role lives at the intersection: you’ll use deep networking knowledge and rigorous measurement to figure out where and how bandwidth, latency, and dollars are being used, then find optimization opportunities and land them.
We’re looking for a network engineer who thinks in metrics first. You understand spine-leaf fabrics, BGP, SDN overlays, and cloud interconnect products well enough to build them; here, you’ll instrument them, model their cost per bit, and squeeze out inefficiency while ensuring bits move to the right places. You’ll own the observability and efficiency surface for Anthropic’s network: from per-flow telemetry on backbone routers to cost attribution that tells a research team exactly what their checkpoint sync is costing.
This is a hands-on IC role. You’ll write code (Python, Go), build dashboards, and model capacity. You’ll also influence architecture: when the data says a traffic pattern is pathological, you’ll be in the room root-causing and fixing it.
You will work across three areas: network telemetry and observability, cost modeling, and cost attribution. We expect you to be strong in at least two and willing to grow into the third. If you're a telemetry-first engineer who's never built a chargeback model, or a traffic engineer who hasn't shipped eBPF probes, apply anyway and tell us which axis you want to grow on.
Build the network observability stack. Design and deploy telemetry pipelines — sFlow/IPFIX, gNMI streaming, eBPF host probes — that turn packet counters into per-flow, per-tenant, per-workload cost and utilization data. Own the SLIs for backbone and DCN fabric health.
Hunt for efficiency. Analyze inter-region traffic patterns, identify hot links and stranded capacity, and quantify the dollar impact. Build the models that tell us whether we should buy more capacity, or move the workload.
Own QoS and traffic engineering. Design and operate traffic classification, marking, and shaping across the backbone. Make sure bulk checkpoint transfers don’t starve latency-sensitive inference, and that we’re not paying premium cross-region rates for traffic that could take the cheap path.
Drive cost attribution. Tie network spend — egress, interconnect ports, transit, optical leases — back to the teams and workloads that generate it. Make network cost a first-class input to capacity planning and workload placement decisions.
Automate. Extend our intent-based network configuration systems and write the tooling that turns your efficiency findings into safe, reviewable, and impactful changes.
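The QoS responsibility above can be sketched in miniature: a classification table that maps workload classes to DSCP markings, so bulk transfers ride a low-priority queue while latency-sensitive inference gets expedited forwarding. The workload names and the policy table below are hypothetical illustrations, not Anthropic's actual policy; the DSCP code points themselves (EF = 46, CS1 = 8) are the standard DiffServ values.

```python
# Toy DSCP classification policy: map a workload class to the DSCP value
# its flows should be marked with. Standard code points:
#   EF   = 46  expedited forwarding (latency-sensitive inference)
#   AF31 = 26  assured forwarding (interactive / control-plane traffic)
#   CS1  =  8  lower-effort "scavenger" class (bulk checkpoint transfers)
#   BE   =  0  best effort (everything else)
DSCP = {"EF": 46, "AF31": 26, "CS1": 8, "BE": 0}

# Hypothetical workload-to-class policy table.
POLICY = {
    "inference": "EF",
    "control_plane": "AF31",
    "checkpoint_sync": "CS1",
}

def classify(workload: str) -> int:
    """Return the DSCP marking for flows from `workload`; default best effort."""
    return DSCP[POLICY.get(workload, "BE")]
```

In a real deployment this table would live in the intent-based configuration system and be rendered into device-level marking and queuing policy; the sketch only shows the shape of the mapping.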
Have 5+ years operating large-scale production networks — data center fabrics (spine-leaf, Clos), backbone/WAN, or hyperscaler-adjacent environments.
Are genuinely fluent across the stack: BGP (including policy and communities), ECMP, VXLAN/EVPN or equivalent overlays, QoS (DSCP, queuing, shaping), and L1/optical basics (DWDM, coherent, LAGs).
Know at least one major CSP’s networking model deeply — AWS (VPC, TGW, Direct Connect, Gateway Load Balancer) or GCP (Shared VPC, Interconnect, Cloud Router, Network Connectivity Center) — and understand how their overlays interact with physical underlays.
Have built or operated network telemetry at scale: streaming telemetry (gNMI/OpenConfig), flow export (sFlow, IPFIX, NetFlow), or eBPF-based host-side instrumentation. You can reason about sampling, cardinality, and storage tradeoffs.
Are comfortable writing Python or Go to build tooling you’ll ship to production: telemetry pipelines, infrastructure-as-code, and configuration management and automation for network devices.
Think quantitatively by default. You reach for a notebook or a Grafana query before you reach for an opinion, and you can turn messy counter data into a defensible cost model.
Communicate crisply. You can explain to a finance partner why a 10% egress reduction matters, and to a network engineer why a specific ECMP imbalance is costing real money.
SRE experience for large-scale network infrastructure — designing for reliability, defining SLOs/SLIs for network services, capacity planning with error budgets, and incident response for network-impacting outages at scale.
Background on a cloud provider's networking team or a cloud networking product team — building or operating the interconnect, backbone, or SDN control plane from the provider side, not just consuming it as a customer.
Familiarity with AI/ML infrastructure traffic patterns like collective communication (all-reduce, all-gather), checkpoint/weight transfer, and inference serving, and with how these stress networks differently from traditional workloads in terms of burst behavior, flow synchronization, and bandwidth symmetry.
Experience with HPC fabrics like InfiniBand, RoCE v2, lossless Ethernet, or custom high-radix topologies and an understanding of how job placement, congestion management, and adaptive routing interact at scale.
Background in traffic engineering for large backbones and the operational judgment to know when TE is worth the complexity.
Hands-on time with multi-cloud connectivity: cross-cloud peering, private interconnect products, and the billing models that come with them.
Experience building cost/chargeback systems for shared infrastructure, or FinOps exposure in a large cloud environment.
Build a per-flow cost attribution pipeline that traces every byte of cross-region egress back to the team and workload that generated it
Design QoS policy for the private backbone that prevents bulk checkpoint transfers from starving inference traffic
Model whether it's cheaper to buy an additional 1.6 Tb/s interconnect tranche or to re-route traffic through existing capacity
Instrument DCN fabric utilization with streaming telemetry and build the Grafana dashboards that become the team's source of truth for network observability
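The first of the example projects above can be sketched as a toy attribution model: aggregate per-flow byte counts into per-team dollars using a cross-region rate table. The flow fields, team names, and $/GB rates below are illustrative assumptions, not real billing data or Anthropic's actual schema.

```python
from collections import defaultdict

# Hypothetical egress rates in dollars per GB, keyed by (src_region, dst_region).
RATES = {
    ("us-east", "us-west"): 0.02,
    ("us-east", "eu-west"): 0.05,
}

def attribute_cost(flows):
    """Aggregate flow records into per-team cross-region egress cost.

    flows: iterable of dicts with keys team, src_region, dst_region, bytes.
    Returns {team: dollars}. Unknown region pairs are treated as free
    (e.g. same-region or unbilled paths in this toy model).
    """
    cost = defaultdict(float)
    for f in flows:
        rate = RATES.get((f["src_region"], f["dst_region"]), 0.0)
        cost[f["team"]] += f["bytes"] / 1e9 * rate  # bytes -> GB -> dollars
    return dict(cost)
```

For example, a team syncing a 2 TB checkpoint from us-east to eu-west at $0.05/GB would be attributed about $100; the production version would ingest sampled flow export instead of dicts and join against workload metadata to resolve the team tag.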
Anthropic’s network footprint is growing faster than our ability to reason about it. We’re turning up tens of terabits of private backbone capacity, peering across clouds, and moving model weights that keep getting larger. The efficiency opportunities are enormous and largely untouched — this is a chance to build the measurement and optimization layer from the ground up, with real budget impact and direct influence on how Anthropic’s infrastructure scales.
The Cloud Inference team scales and optimizes Claude to serve the massive audiences of developers and enterprise companies across AWS, GCP, Azure, and future cloud service providers (CSPs). We own the end-to-end product of Claude on each cloud platform, from API integration and intelligent request routing to inference execution, capacity management, and day-to-day operations.
Within Cloud Inference, the model & inference launch team owns the validation pipeline for our inference server and load balancer on these platforms. We're responsible for every inference change — model launches, performance improvements, safeguard integrations — landing on cloud platforms with correctness, performance, and reliability intact.
This is high-leverage infrastructure work: validation has to be fast and cheap enough to run on the same accelerators that serve customers, trustworthy enough to replace manual checks, and consistent enough that a change that works on Anthropic's first-party stack works everywhere. This directly determines how fast frontier models and features ship to every cloud platform, and how quickly performance wins reach production — reclaiming capacity at a time when compute is our scarcest resource.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
We are seeking a Staff Software Engineer to build and operate the safety, oversight, and intervention mechanisms that protect Claude on third-party cloud service provider (CSP) platforms. As the engineer responsible for Safeguards on those surfaces, you will ensure that every request served through our CSP partners is monitored for misuse, enforced against policy, and compliant with the data residency and privacy commitments that enterprise CSP customers expect.
You will sit at the seam between the Safeguards organization and the Cloud Inference team: taking classifiers, detection signals, and enforcement policies developed by Safeguards and making them run reliably inside a CSP partner’s infrastructure at serving-path latency and scale. You will own the architecture that lets our safeguards operate within those constraints without gaps. You will build, deploy, and operate the multi-layered defenses that catch unwanted model behavior in real time, the telemetry pipelines that give us situational awareness over CSP traffic, and the enforcement hooks that let us act quickly when something goes wrong. Your work will directly determine whether Anthropic can ship frontier models on CSP platforms at the same safety bar we hold ourselves to on our first-party API.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
We are seeking a GRC Automation Lead to join our GRC organization and build the technical foundation for how we scale our risk and compliance programs. In this role, you will lead the team that designs and implements automated workflows, data pipelines, and integrations that transform manual compliance processes into scalable engineering systems.
This is a greenfield opportunity to establish the team, architecture, and integrations that will define how we approach governance, risk, and compliance at Anthropic. The core challenge is a data problem: compliance information lives across dozens of systems—cloud infrastructure, identity providers, HR platforms, ticketing tools, code repositories—and your job is to design systems that bring it together, normalize it, and make it actionable. Success in this role comes from understanding how systems connect and how data flows between them, not from writing code yourself.
At Anthropic, you'll also have a unique advantage: the ability to design AI-powered workflows where Claude acts as an extension of your team, handling tasks that would traditionally require additional headcount or manual effort. You'll need ingenuity to identify where agentic AI can accelerate evidence collection, interpret unstructured data, triage compliance gaps, and augment human judgment in risk assessments. Working closely with Security, IT, and Engineering teams, you'll translate compliance and regulatory requirements into solutions that support audit programs including SOC 2, ISO, HIPAA, and FedRAMP, building systems that combine traditional automation with AI capabilities to achieve scale that wouldn't otherwise be possible.
Deadline to apply: None; applications will be reviewed on a rolling basis.
Anthropic is hiring a Lead for our Talent Development & Enablement team to help build the systems that help Ants (particularly managers) thrive. This is a build role: you'll partner directly with the team lead to think holistically about talent development across Anthropic, with a strong primary focus on the manager and leader audience.
You'll bring deep instructional design expertise, exceptional facilitation skills, and a strong point of view on how to grow people, and managers in particular, in an environment of unprecedented growth and change. You'll design across modalities (self-service, live virtual, in-person) and across audiences (new hires, ICs, first-time managers, experienced leaders, executives), and you'll treat AI as a chance to reimagine what talent development can be, not retrofit it onto what already exists.
This role suits someone who is energized by extreme ambiguity, ships fast, and wants to invent the playbook rather than execute someone else's. You'll be one of the first specialist hires on a new team, which means your point of view will shape what we build and how we measure success.
You’ll bring to the role a passion for delivering a seamless and engaging employee experience through operational excellence. The ideal candidate is a detail-oriented and tech-savvy HR professional with hands-on HRIS experience and a strong understanding of core HR processes. In this role, you will support key People Operations programs across the employee lifecycle, ensuring accuracy, compliance, and continuous process improvement.
Manage HRIS transactions throughout the employee lifecycle, including onboarding, job changes, promotions, compensation updates, and offboarding
Support the administration of company benefits programs, including new hire enrollment and qualifying life event changes
Ensure new hires are enrolled in required training plans; support rollout, tracking, and reporting of annual compliance training
Assist in administering performance review cycles, including assessment kick-off, progress monitoring, and completion reporting
Develop proficiency and become a subject-matter expert in HR systems, processes, and communication platforms
Support payroll processing by verifying data accuracy and compliance with internal requirements
Maintain and update process documentation and refresh HR content on the company Intranet
Utilize Google Workspace tools (Gmail, Calendar, Docs, Slides, and basic Sheets) to support daily operations and reporting
3–5 years of HR Operations experience with hands-on HRIS work
Experience supporting a range of People Operations projects and initiatives
Service-oriented mindset with strong written and verbal communication skills
Ability to manage sensitive information with discretion and professionalism
Not sure if you meet all the qualifications? Apply anyway! We value diverse experiences and encourage you to bring your unique talents to our team!
Actual compensation packages are determined by various factors unique to each candidate, including but not limited to skill set, depth of experience, certifications, specific work location, and performance during the interview process. In addition to base salary, this role may include variable compensation and be eligible for an equity grant, depending on the position and level.
By applying for this position, your data will be processed as per Rocket Lawyer Privacy Policy.
Ready to apply?
Apply to Rocket Lawyer
Our Integrated Marketing team partners closely with product, product marketing, SEO/AEO, CMS, lifecycle marketing, paid media, and web publishing to bring our brand and campaigns to life across every customer touchpoint.
As we expand our search visibility, organic social presence, and cross-channel campaign execution, we are investing in thoughtful, research-backed messaging that supports acquisition, engagement, and brand authority in the legal tech space.
Anthropic is building Treasury with scale and automation in mind from the start. We're creating modern, well-controlled processes that can grow with the business, supported by strong systems, clear governance, and practical use of AI-enabled tools. We're designing touchless, AI-native workflows from day one, with SOX-grade controls built in rather than bolted on.
This role reports to the Treasury Ops Lead and plays a central part in building and running the company's Treasury Operations foundation. You'll manage core activities — daily cash positioning, global bank account administration, intercompany funding, payment controls, and Treasury reporting — while improving processes so the function scales efficiently as the company grows.
This is a hands-on role for someone who enjoys both execution and process building. You should be comfortable running key day-to-day activities while identifying opportunities to improve, automate, and strengthen controls.
You'll be the second person in a Treasury Ops function at a company that's growing fast and building deliberately. The processes you design in your first six months are the ones the company runs on at 10x scale. The ownership is end-to-end, and the toolkit is ours - you'll design Treasury workflows that are AI-native from the first draft, the way only Anthropic can.
Core Treasury Operations
Manage daily cash positioning across entities and currencies, including tracking balances and consolidating cash activity
Lead global bank account management: account opening and closing coordination, signer updates, KYC documentation, and portal administration
Help design and manage intercompany settlement and funding processes, including documentation and ongoing improvement
Maintain short-term cash forecasting, including variance tracking and reporting
Support payment workflows and controls: approval routing, exception handling, and operational process improvements
Support Letter of Credit and Bank Guarantee issuance and post-M&A Treasury integration activities
Systems & Automation
Support TMS implementation, bank connectivity, reporting enhancements, and process standardization
Design the "Claudification" layer: identify which workflows go touchless, build the automation, keep humans in the decisions that need judgment
Partner with Finance Systems to simplify and automate Treasury processes
Own Treasury data quality - the source of truth that everything downstream depends on
Controls & Governance
Execute Treasury processes with strong focus on controls, documentation, segregation of duties, and audit readiness
Support development and maintenance of Treasury policies, procedures, and control documentation
Document Treasury controls for SOX readiness; not as a separate project, but as how the work gets done
Partner with Accounting on close activities, reporting, and audit requests
5–8 years in Treasury operations, ideally at a growth-stage or scaling company where you saw processes get built, not just inherited
You currently own the daily cash position. You log into the bank portals, you run the position, you execute the wires. This is a doing role, not a managing role
Treasury-native depth. You came up through Treasury; TMS, KYC mechanics, bank account administration, cash positioning judgment. This isn't a payments-processing seat; it's a Treasury seat
Player-coach DNA. You still do the work; run the cash position, chase the KYC doc, troubleshoot the wire - while building the system so nobody has to do it manually next quarter
Hands-on TMS experience. You've worked in a Treasury Management System (Kyriba, GTreasury, Trovata, Quantum, or similar): configured it, broken it, fixed it. Implementation experience is a plus but not required
SOX in your DNA. You think in audit trails, approval chains, and segregation of duties; not because compliance makes you, but because well-controlled processes are the ones that scale
Comfort with ambiguity. There's no playbook here. You'll write it
AI-first mindset. Not a prompt engineer; a Treasury person who wants the model handling data so you can focus on decisions.
Exposure to SOX readiness or IPO prep, even as a contributor
Multi-entity, multi-currency operations background
Worked with bank APIs or Treasury aggregators (Trovata, Kyriba, or direct connectivity)
APAC or EMEA bank account opening experience
Scale seeks a business-minded Director and Associate General Counsel, Compliance to lead Scale’s corporate compliance program. This role will set the legal standards, governance model, and escalation framework for the program, working closely with cross-functional partners to translate that guidance into practical workflows.
This is a high-impact role for a lawyer who combines strong legal judgment with practical program execution in a fast-moving, high-visibility business.
You Will:
Ideally, you’d have:
Nice to haves:
Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.
PLEASE NOTE: Our policy requires a 90-day waiting period before reconsidering candidates for the same role. This allows us to ensure a fair and thorough evaluation of all applicants.
About Us:
At Scale, our mission is to develop reliable AI systems for the world's most important decisions. Our products provide the high-quality data and full-stack technologies that power the world's leading models, and help enterprises and governments build, deploy, and oversee AI applications that deliver real impact. We work closely with industry leaders like Meta, Cisco, DLA Piper, Mayo Clinic, Time Inc., the Government of Qatar, and U.S. government agencies including the Army and Air Force. We are expanding our team to accelerate the development of AI applications.
We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity or Veteran status.
We are committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities. If you need assistance and/or a reasonable accommodation in the application or recruiting process due to a disability, please contact us at accommodations@scale.com. Please see the United States Department of Labor's Know Your Rights poster for additional information.
We comply with the United States Department of Labor's Pay Transparency provision.
PLEASE NOTE: We collect, retain and use personal data for our professional business purposes, including notifying you of job opportunities that may be of interest and sharing with our affiliates. We limit the personal data we collect to that which we believe is appropriate and necessary to manage applicants’ needs, provide our services, and comply with applicable laws. Any information we collect in connection with your application will be treated in accordance with our internal policies and programs designed to protect personal data. Please see our privacy policy for additional information.
Ready to apply?
Apply to Scale AI
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As an Analytics Data Engineering Manager focused on Product, you will build and lead the analytics engineering team responsible for creating the data foundations that enable data-driven decision making across Anthropic’s Product organization. You will oversee the development of scalable data solutions for Product pillars – including Consumer, Claude Code, Enterprise & Verticals, Growth, and Platform Product – managing a team of analytics engineers and working closely with stakeholders across Data Science, Product, and Engineering to ensure teams have access to reliable, accurate metrics that can scale with our company’s growth.
In this role, you will balance hands-on technical leadership with people management, setting the strategic vision for product data foundations while developing and mentoring team members. You will partner closely with Product Data Scientists, Product Managers, and Product Engineers to understand how users interact with Claude, how to measure product quality and growth, and how to transform raw event logs into insightful data marts that power product decisions.
Build and scale the Product Analytics Engineering team, including hiring and mentoring a team of high-performing analytics engineers embedded with Product pillars
Define and execute the strategic roadmap for product data foundations and analytics capabilities
Oversee the design and implementation of scalable data pipelines, data models, and analytics solutions that transform raw product event logs into canonical datasets and insightful data marts
Partner with Data Science, Product, and Engineering leadership to understand data needs and translate them into technical requirements
Establish and maintain high data integrity standards, SLAs, alerting, and best practices for the team
Drive the development of foundational data products, dashboards, and tools to enable self-serve analytics; partner with the Data Science team to build innovative data tools using Claude to scale data-driven decisions across Product teams
Foster a culture of technical excellence, continuous learning, and data-driven decision making
Serve as a technical thought leader for data modeling, ETL processes, and product analytics infrastructure
5+ years of experience managing analytics engineering or data engineering teams, preferably in a scaling startup environment
8+ years of total experience in analytics engineering, data engineering, or similar data-focused roles
Deep expertise in data modeling, ETL pipelines, and data warehouse architecture
Strong technical foundation with expertise in SQL, Python, dbt, and modern data stack tools
Proven track record of building and leading high-performing teams
Experience partnering with Data Science, Product, and Engineering leaders to deliver key product metrics and user behavior insights
Demonstrated ability to balance strategic thinking with hands-on technical leadership
Strong communication skills with the ability to translate complex technical concepts for diverse audiences
Experience scaling analytics functions from early stage to maturity in rapidly changing environments
Track record of establishing data governance, quality standards, and best practices
A bias for action and urgency, not letting perfect be the enemy of the effective
A “full-stack mindset”, not hesitating to do what it takes to solve a problem end-to-end
A passion for Anthropic’s mission of building helpful, honest, and harmless AI
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Anthropic manages one of the largest and fastest-growing accelerator fleets in the industry — spanning multiple accelerator families and clouds. The Accelerator Capacity Engineering (ACE) team is responsible for making sure every chip in that fleet is accounted for, well-utilized, and efficiently allocated. We own the data, tooling, and operational systems that let Anthropic plan, measure, and maximize utilization across first-party and third-party compute.
As an engineer on ACE, you will build the production systems that power this work: data pipelines that ingest and normalize telemetry from heterogeneous cloud environments, observability tooling that gives the org real-time visibility into fleet health, and performance instrumentation that measures how efficiently every major workload uses the hardware it’s running on. You will be expected to write production-quality code every day, operate alongside Kubernetes-native infrastructure at meaningful scale, and directly influence decisions around one of Anthropic’s largest areas of spend.
You’ll collaborate closely with research engineering, infrastructure, inference, and finance teams. The work requires someone who can move between data engineering, systems engineering, and observability with comfort — and who thrives in a high-autonomy, high-ambiguity environment.
The team’s work spans four functional areas. Depending on your background and interests, you’ll focus primarily in one, but the boundaries are fluid and the problems overlap:
Data infrastructure — collecting, normalizing, and serving the fleet-wide data that powers everything else. This means building pipelines that ingest occupancy and utilization telemetry from Kubernetes clusters, normalizing billing and usage data across cloud providers, and maintaining the BigQuery layer that the rest of the org queries against. Correctness, completeness, and latency matter here.
Fleet observability — making the state of the accelerator fleet legible and actionable in real time. This means building cluster health tooling, capacity planning platforms, alerting on occupancy drops and allocation problems, and driving systemic improvements to scheduling and fragmentation. The work sits at the intersection of Kubernetes operations and cross-team coordination.
Compute efficiency — measuring and improving how effectively every major workload uses the hardware it’s running on. This means instrumenting utilization metrics across training, inference, and eval systems, building benchmarking infrastructure, establishing per-config baselines, and collaborating directly with system-owning teams to close efficiency gaps.
Internal compute tooling — building the platforms and interfaces that make capacity data usable across the org. This includes capacity planning tools, workload attribution systems, cost dashboards, and self-service APIs. The consumers are research engineers, infrastructure teams, finance, and leadership — each with different needs and different levels of technical depth. The work involves product thinking as much as engineering: figuring out what people actually need, defining schema contracts, and making the data discoverable.
You will be placed on a pod based on your background and interests. We are especially focused on hiring for Data Platform, but strong candidates for any of the pods above will move forward.
Build and operate data pipelines that ingest accelerator occupancy, utilization, and cost data from multiple cloud providers into BigQuery. Own data completeness, latency SLOs, gap detection, and backfill automation.
Develop and maintain observability infrastructure — Prometheus recording rules, Grafana dashboards, and alerting systems — that surface actionable signals about fleet health, occupancy, and efficiency.
Instrument and analyze compute efficiency metrics across training, inference, and eval workloads. Build benchmarking infrastructure, establish per-config baselines, and work with system-owning teams to improve utilization.
Build internal tooling and platforms that enable capacity planning, workload attribution, and cluster debugging. The consumers are other engineering teams, finance, and leadership — not external users.
Operate Kubernetes-native systems at scale — deploying data collection agents, managing workload labeling infrastructure, and understanding how taints, reservations, and scheduling affect capacity.
Normalize and reconcile data across heterogeneous sources — including AWS, GCP, and Azure billing exports, vendor-specific telemetry formats, and internal systems with different schemas and billing arrangements.
Collaborate across organizational boundaries with research engineering, infrastructure, inference, and finance teams. Gather requirements from technical stakeholders, translate them into useful systems, and communicate trade-offs to non-technical audiences.
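As a rough illustration of the gap-detection and backfill responsibility above, consider this minimal sketch. It is a generic pattern, not Anthropic's actual pipeline code, and the sampling interval and timestamps are hypothetical:

```python
from datetime import datetime, timedelta

def find_gaps(timestamps, expected_interval=timedelta(minutes=5), tolerance=1.5):
    """Return (start, end) pairs where consecutive telemetry samples are
    farther apart than tolerance * expected_interval.
    Each pair is a candidate window for backfill automation."""
    gaps = []
    ordered = sorted(timestamps)
    for prev, curr in zip(ordered, ordered[1:]):
        if curr - prev > expected_interval * tolerance:
            gaps.append((prev, curr))
    return gaps

# Hypothetical samples arriving every 5 minutes, with one 20-minute hole.
base = datetime(2025, 1, 1)
samples = [base + timedelta(minutes=m) for m in (0, 5, 10, 30, 35)]
print(find_gaps(samples))  # one gap: (minute 10, minute 30)
```

In a real pipeline the timestamp list would come from a freshness query against the warehouse, and each detected window would feed a backfill job rather than a print statement.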
5+ years of software engineering experience with a strong track record building and operating production systems. You write code every day — this is a hands-on engineering role, not a planning or coordination role.
Kubernetes fluency at operational depth — you’ve operated production K8s at meaningful scale, not just written manifests. Comfort with scheduling, taints, labels, node management, and debugging cluster-level issues.
Data pipeline engineering experience — designing, building, and owning the full lifecycle of production data pipelines. Experience with data warehouses (BigQuery preferred), schema management, streaming ingestion, SLOs for latency and completeness, and a strong instinct for correctness.
Observability tooling experience — Prometheus, PromQL, and Grafana are in the critical path for this team. Experience writing recording rules, understanding metric semantics, and building monitoring systems that engineering teams actually rely on.
Python and SQL at production quality. Most pipeline code is Python; the presentation layer is BigQuery SQL including table-valued functions and views. Both need to be idiomatic, well-tested, and maintainable.
Familiarity with at least one major cloud provider (AWS, GCP, or Azure) at the infrastructure level — compute, billing, usage APIs, cost management tooling. Multi-cloud experience is a strong plus.
High autonomy and strong cross-team communication. You can gather your own requirements, navigate ambiguity, and work across organizational boundaries. Scrappiness and ownership matter more than polish.
Multi-cloud data ingestion experience — especially working with AWS and GCP APIs, billing exports, or vendor-specific telemetry formats. Experience normalizing data from external providers with different billing arrangements is directly applicable.
Accelerator infrastructure familiarity — GPU metrics (DCGM), TPU utilization, Trainium power and utilization metrics, or experience working with ML training/inference systems at the hardware level.
Performance engineering and benchmarking experience — building benchmark harnesses, establishing baselines, reasoning about compute efficiency (FLOPs utilization, memory bandwidth, interconnect throughput), and working with system teams to diagnose and improve performance.
Data-as-product thinking — experience building internal data products with self-service access, schema contracts, API serving, documentation, and discoverability. Not just building pipelines, but thinking about how platform data gets consumed.
Experience with capacity planning, resource management, or cost attribution systems at a hyperscaler or large-scale ML environment. FinOps, chargeback systems, or infrastructure cost modeling.
Familiarity with ClickHouse, Terraform, or Rust. ClickHouse is the team’s current streaming store; Terraform for infrastructure-as-code; Rust for high-performance data collection agents.
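To make the FLOPs-utilization reasoning mentioned above concrete, here is a hedged sketch of the standard model-FLOPs-utilization (MFU) calculation. The formula is the widely used one (achieved FLOP/s over peak FLOP/s); the numbers are invented for illustration and are not tied to any Anthropic workload or hardware:

```python
def model_flops_utilization(tokens_per_s, flops_per_token, peak_flops_per_s):
    """Fraction of an accelerator's peak FLOP/s a workload actually achieves."""
    achieved_flops_per_s = tokens_per_s * flops_per_token
    return achieved_flops_per_s / peak_flops_per_s

# Hypothetical workload: 10,000 tokens/s, 1.2e10 FLOPs per token,
# on hardware with 3e14 peak FLOP/s.
mfu = model_flops_utilization(10_000, 1.2e10, 3e14)
print(f"{mfu:.1%}")  # 40.0%
```

Establishing a per-config baseline then amounts to tracking this ratio over time for each workload/hardware pairing and alerting when it regresses.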
Ready to apply?
Apply to Anthropic
As an Analytics Engineer, you will be an early member of the Data Science & Analytics team building the foundation to scale analytics across our organization. You will collaborate with key stakeholders in Engineering, Product, GTM and other areas to build scalable solutions to transform data into key metrics reporting and insights. You will be responsible for ensuring teams have access to reliable, accurate metrics that can scale with our company’s growth. You will also lead your own projects to enable self-serve insights to help teams make data-driven decisions.
Ready to apply?
Apply to Anthropic
The Cloud Inference team scales and optimizes Claude to serve the massive audiences of developers and enterprise companies across AWS, GCP, Azure, and future cloud service providers (CSPs). We own the end-to-end product of Claude on each cloud platform, from API integration and intelligent request routing to inference execution, capacity management, and day-to-day operations.
Our engineers are extremely high leverage: we simultaneously drive multiple major revenue streams while optimizing one of Anthropic's most precious resources, compute. As we expand to more cloud platforms, the complexity of managing inference efficiently across providers with different hardware, networking stacks, and operational models grows significantly. We need product-minded backend engineers who can navigate these platform differences, design the services and abstractions that work across providers, and make architectural decisions that keep us reliable and cost-effective at massive scale.
Your work will increase the scale at which our services operate, accelerate our ability to reliably launch new frontier models and innovative features to customers across all platforms, and ensure our LLMs meet rigorous safety, performance, and security standards.
Ready to apply?
Apply to Anthropic
Building a Brand starts with a Story
FIRST is a leading global brand experience agency providing Content, Creative, Digital Technology, Management & Delivery, and Data & Insights services. For 30 years, we have utilized three flexible delivery models: Embedded teams, Corporate Campus operations, and a Full-Service Agency. Managing over 37,000 events & projects annually across 110 countries, FIRST serves 130+ brands, specializing in Fortune 500 companies within the Financial Services, Technology, Healthcare, Media, and Professional Services sectors. FIRST is part of the broader Encore family of companies. Encore is a global leader in event production and technology. Consistently recognized for excellence, recent honors include 2026 Chief Marketer Agency of the Year, the Ad Net Zero Award for sustainability, and more.
Location: Menlo Park or San Francisco, CA (Hybrid). Candidates must reside in Menlo Park or San Francisco, CA, and be able to go into the office 3 days a week, up to 5 if there is a business need.
Learn more at www.firstagency.com
What You Would Get To Do
The Project Manager is responsible for supporting clients with both internal and client-facing events and conferences, delivering exceptional service through effective project management and creative solutions. This role involves overseeing all aspects of event operations, from initial briefing to post-event reconciliation. Key responsibilities include managing project plans, collateral, registration, transportation, hotel accommodations, entertainment, food and beverage, audio/visual needs, webcasts, staging, speaker coordination, and event execution.
This role requires domestic and international travel for onsite events as needed.
Your Contributions
What We Are Looking For:
Bachelor's degree preferred
At FIRST, we believe that our success is driven by the passion, creativity, and commitment of our teams. That's why we offer a range of benefits and perks to support your personal and professional growth
United States Residents: The base pay for this position ranges from $70,000-$100,000. This role is also eligible for an annual discretionary bonus.
Please note that the national salary range listed in the job posting reflects the new hire salary range across various U.S. locations that would be applicable to the position. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.
FIRST is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind. FIRST is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. All employment decisions are based on business needs, job requirements and individual qualifications without regard to race, color, religion or belief, national, social or ethnic origin, sex (including pregnancy), age, physical, mental or sensory disability, sexual orientation, gender identity and/or expression, marital, civil union or domestic partnership status, past or present military service, family medical history or genetic information, family or parental status, or any other status protected by the laws or regulations in the locations where we operate. We stand firm: FIRST will not tolerate discrimination or harassment based on any of these characteristics. If any candidate feels that they are unable to meet the requirements of the role, please get in touch to discuss adjustments or other suitable roles.
Ready to apply?
Apply to First Global Management Services, Inc.
Wilson Elser is a leading defense litigation law firm with more than 1,400 attorneys in 46 offices throughout the United States. Founded in 1978, we rank among the top 100 law firms identified by The American Lawyer and No. 36 in the National Law Journal’s survey of the nation’s largest law firms.
Wilson Elser's San Francisco office is currently seeking a Toxic Tort Associate to join a team of talented professionals in a collaborative, collegial environment. Our firm is committed to developing professionals who value each other and the service we provide through Teamwork, Collaboration, Client Service, and Innovation. If you are a motivated professional looking for a long-term fit, where you can grow in a role, and be valued and empowered, then we invite you to apply. This attorney can work from the San Francisco office, or remotely elsewhere in California.
This position offers a flexible, hybrid working arrangement, or remote work opportunities.
The Position
This attorney will represent sophisticated clients confronted with complex legal challenges, defending talc, silicosis, asbestos, and other toxic tort matters.
Key Responsibilities
Qualifications
#LI-BG1
A variety of factors are considered in making compensation decisions, including but not limited to experience, education, licensure and/or certifications, geographic location, market demands, other business and organizational needs, and other factors permitted by law. This position is also eligible for discretionary bonuses. The final salary offered may fall outside this range based on individual circumstances. This position is considered full-time and therefore qualifies for benefits including a 401(k) retirement savings plan and medical, dental, vision, disability, and life insurance. Details of participation in these benefit plans will be provided if an employee receives an offer of employment.
Why Should You Apply?
Ready to apply?
Apply to Wilson Elser - Attorneys
Lightning AI is the company behind PyTorch Lightning. Founded in 2019, we build an end-to-end platform for developing, training, and deploying AI systems—designed to take ideas from research to production with less friction.
Through our merger with Voltage Park, a neocloud and AI Factory, Lightning AI combines developer-first software with cost-efficient, large-scale compute. Teams get the tools they need for experimentation, training, and production inference, with security, observability, and control built in.
We serve solo researchers, startups, and large enterprises. Lightning AI operates globally with offices in New York City, San Francisco, Seattle, and London, and is backed by Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.
Move Fast: We act with speed and precision, breaking down big challenges into achievable steps.
Focus: We complete one goal at a time with care, collaborating as a team to deliver features with precision.
Balance: Sustained performance comes from rest and recovery. We ensure a healthy work-life balance to keep you at your best.
Craftsmanship: Innovation through excellence. Every detail matters, and we take pride in mastering our craft.
Minimal: Simplicity drives our innovation. We eliminate complexity through discipline and focus on what truly matters.
As we continue to expand our footprint, we are looking for a sharp, agile, and business-minded lawyer to join our lean legal team. You will be the primary engine behind our commercial contracting process. This is not a "siloed" role; you will be the cross-functional glue between Finance, Engineering, Sales, and Security, ensuring our legal strategy moves at the speed of our innovation.
This is an ideal opportunity for a high-performing lawyer with a strong grasp of transactional fundamentals who is ready to "level up." While your primary focus will be owning the commercial contract lifecycle, you will have a front-row seat to the broader legal challenges of a hyper-growth AI company.
You will be joining the Legal Team and report to our VP of Legal. This is a hybrid role based in our San Francisco or New York office with an in-office requirement of 2 days per week.
We are committed to offering competitive compensation that reflects the value each team member brings to our mission. Final offers are based on factors such as experience, skills, geographic location, and role expectations. In addition to base salary, our total rewards package for eligible roles includes a discretionary bonus, a meaningful equity component, and comprehensive benefits.
We offer a comprehensive and competitive benefits package designed to support our employees’ health, well-being, and long-term success. Benefits may vary by location, team, and role.
Benefits include:
At Lightning AI, we are committed to fostering an inclusive and diverse workplace. We believe that diverse teams drive innovation and create better products. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other protected characteristic. We are dedicated to building a culture where everyone can thrive and contribute to their fullest potential.
Ready to apply?
Apply to Lightning AI
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Anthropic’s model weights and training infrastructure are among the highest-value targets in the technology sector, facing nation-state, supply-chain interdiction, and insider threats that conventional enterprise security programs were not built to address. As Commercial Counsel, Infrastructure Security for Compute and Infrastructure at Anthropic, you’ll be the day-to-day legal partner to the Chief Security Officer’s infrastructure-security team and the Governance, Risk, and Compliance teams. You will own the contractual and regulatory layer of physical and facility security, hardware and supply-chain security, network security, vendor personnel and insider-risk flow-downs, and security regulatory and assurance.
You’ll work in close partnership with Anthropic teammates in Frontier, Product, Litigation, Employment, and Commercial Legal, in addition to specialized outside counsel. You will serve as the primary legal owner ensuring security requirements are in the contract before signature and that external work product aligns with Anthropic’s security posture and commercial objectives.
Draft and negotiate security design-basis and site-hardening specifications in build-to-suit, lease, and colo agreements (perimeter, access control, CCTV, intrusion detection); guard-force statements of work, post orders, and KPI regimes; visitor, contractor, and badging policy; and security clauses in shared-campus and multi-tenant arrangements
Own provenance, anti-tamper, and chain-of-custody warranties in silicon, ODM, and OEM paper; trusted-supplier and country-of-origin restrictions; NDAA §889/§5949 and CHIPS-Act guardrail flow-downs; BIS/EAR advanced-computing and semiconductor export-control flow-downs and end-use/end-user certifications; firmware integrity, secure-boot, and golden-image escrow terms; secure logistics; counterfeit-part and grey-market controls; and secure decommissioning and certified media-destruction terms
Draft security schedules in carrier and fiber agreements (encryption-in-transit, route integrity, lawful-intercept handling), and security obligations in peering agreements
Set background-screening, training, and badge-revocation requirements for vendor and contractor personnel with site or hardware access, and flow Anthropic personnel-security standards into guard-force, security-integrator, and EPC vendor MSAs
Support CFIUS and outbound-investment screening on infrastructure vendors and sites; provide NIST/ISO/SOC 2 physical-control evidence for customer and auditor assurance in partnership with security teams; and support security representations in customer contracts that reference physical infrastructure with Commercial Legal
Work closely with specialized outside counsel, ensuring their work product aligns with Anthropic’s security and commercial objectives
Build the function: develop and maintain the security-schedule library, design-basis templates, and negotiation playbooks; advise on vendor security questionnaire templates; and train Procurement, Datacenter, and Network teams to apply them at scale
Serve as direct counsel to the CSO’s infrastructure-security organization, coordinating with Product Legal and Litigation on incident response, threat intelligence, law-enforcement and intelligence-community engagement, insider-threat governance, and model-weight security policy under Anthropic’s Responsible Scaling Policy
Escalate novel structures or terms that create downstream risk for Anthropic’s security posture or operational flexibility; ensure security requirements accommodate AI-specific threats including hardware tamper, supply-chain interdiction, and high-value-target facility risk
Monitor and assess the evolving regulatory landscape affecting security and data protection, identifying higher-risk obligations for the business and partnering with security to operationalize them through policies, controls, and compliance programs
Advise on risk assessments, risk acceptance decisions, and reporting to leadership and the board; and review remediation commitments arising from assessments, customer audits, and regulator inquiries
JD and active membership in at least one U.S. state bar
Fluency in security design-basis specifications, guard-force and access-control contracting, and how security schedules interact with build-to-suit, colo, procurement, and carrier agreements
Experience with NDAA §889/§5949, CHIPS-Act guardrails, CFIUS/outbound-investment screening, and trusted-supplier or country-of-origin programs
Comfort with NIST, ISO 27001, and SOC 2 physical-control frameworks and the evidence and attestation process that supports customer and auditor assurance
Ability to coordinate effectively with multiple internal legal teams, and specialized outside counsel while maintaining strategic direction
Strong judgment about when contractual security terms create downstream risk for Anthropic’s security posture, audit position, or operational flexibility
Effective collaboration skills for working with the CSO’s organization, procurement, datacenter, and network teams
Communication skills that translate security and supply-chain-integrity concepts into clear risk assessments for business stakeholders
Genuine interest in infrastructure security and appreciation for why physical, hardware, and network security is mission-critical for frontier AI
10-12 years of relevant legal experience with meaningful exposure to physical and facility security contracting, hardware and supply-chain security, network security schedules, or security regulatory and assurance work for critical infrastructure
In-house experience at cloud service providers, hyperscalers, defense and aerospace primes, telecom carriers, utilities, semiconductor companies, or datacenter operators supporting physical-security, supply-chain-security, or security-assurance programs; or U.S. government experience at DoD, DHS/CISA, BIS, or CFIUS staff
Experience at large technology companies with first-party datacenter or hardware programs supporting security contracting from the buy side
Law firm experience at practices with national-security, supply-chain, or critical-infrastructure specialization, particularly those who have worked on NDAA §889, CFIUS, or trusted-supplier matters
Prior involvement in transactions requiring sophisticated understanding of secure logistics, chain-of-custody, firmware integrity, and certified media destruction
Familiarity with CCTV/biometrics privacy regimes, executive-protection contracting, and the contractual layer of insider-risk programs
Ability to obtain and maintain a U.S. security clearance
Role-specific policy: For this role, we expect staff to be able to work from our San Francisco, Seattle, Washington D.C., or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
Ready to apply?
Apply to Anthropic
Anthropic's Infrastructure organization is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users — demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.
Cluster Infra owns the full lifecycle of compute clusters at Anthropic. We build agent-driven automation for cluster provisioning and lifecycle management across all major cloud providers and our own datacenters. Our systems stand up clusters that are interconnected with high bandwidth, secure-by-default, and able to automatically drain and recover in response to failure. As a Staff engineer on this team, you'll set the technical direction for how Anthropic brings compute online - at a moment when the scale of that compute is growing faster than at almost any company in the world.
Ready to apply?
Apply to Anthropic
Anthropic is seeking a Program Manager to own and drive our student outreach strategy, establishing Claude as the AI platform of choice for the next generation of builders, researchers, and creators at universities worldwide.
You'll lead flagship programs including the Claude Campus Ambassador Program and Claude Builder Clubs, while developing new initiatives that embody Anthropic's mission of helping humanity navigate the AI transition.
This is a high-ownership role where you'll define the vision, build the infrastructure, and personally drive execution for how Anthropic engages with student communities globally. Beyond keeping programs running smoothly, you'll make real strategic decisions and own the student narrative company-wide.
We're looking for someone who thrives with significant autonomy, can operate effectively with ambiguity, and has the judgment to make consequential calls about where we invest our resources.
Ready to apply?
Apply to Anthropic
We are seeking an experienced Senior Business Systems Analyst to join our Finance Systems team at Anthropic. Following our recent go-live on Workday Financials, you will serve as the internal functional lead for the platform, owning the design and configuration of the Financial Data Model (FDM), Chart of Accounts, and dimensional structures that will serve as the source of truth for financial reporting. You will develop Prism Analytics and Accounting Center solutions, gather requirements and build reporting capabilities, and collaborate closely with cross-functional teams to drive the successful adoption of the Workday platform.
This is a critical role that will directly shape how Anthropic's finance organization operates as we scale toward public company readiness. You will work at the intersection of finance domain expertise and technical implementation, partnering with the implementation partner, engineering teams, and finance stakeholders to build a world-class financial systems foundation.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
About the Role
Anthropic is seeking a Certification Development Lead to build and scale a best-in-class certification program for our partner ecosystem. We’ve launched our first certification, the Claude Certified Architect, Foundations (CCA-F), and now we need someone to turn that first credential into a comprehensive, multi-tiered program that our systems integrator, consulting, and technology partners rely on to demonstrate real competence with Claude.
This is a 0-to-1 build. You'll define the certification strategy, design the credential architecture, and build the program infrastructure from the ground up, all with our partner audience at the center. You'll work at the intersection of program design and subject matter expertise: identifying what partners need to know, structuring the path to get them there, and collaborating with internal SMEs to ensure the content reflects how Claude actually works and where it's headed. The systems and tooling that deliver certifications are being built by a separate team; your focus is the program itself: the credentialing framework, the partner experience, and the cross-functional relationships that keep it all connected.
The broader program strategy, including audience planning, partner relationships, and go-to-market alignment, lives within the Partner Enablement team. You’ll sit at the center of that work, collaborating closely with Partnerships, Applied AI, and GTM Productivity to ensure certifications serve both the partners earning them and the Anthropic teams relying on certified partners to extend our reach.
Responsibilities
You May Be a Good Fit If You Have
Strong Candidates May Also Have
We are seeking an Operations Manager to build and lead a new pillar of our Platform Operations function: Community Operations & Executive Escalations. This role will stand up the team, processes, and infrastructure that protect Anthropic's reputation when high-stakes user issues surface on social media or arrive through executive channels. You'll own end-to-end escalation management (detection, triage, incident coordination, and resolution) across two distinct but related workstreams: brand-impacting conversations on public social channels and high-sensitivity inbound from Anthropic employees on behalf of users. The ideal candidate combines calm-under-fire incident judgment with the operational rigor to build durable systems.
Apply using this link. We are accepting applications on a rolling basis for the next cohort of Anthropic Fellows, which is expected to start in late September. In some circumstances, we can accommodate fellows starting outside the usual cohort timelines — please note in your application if the September start date doesn't work for you.
This page is specific to one of the Anthropic Fellows Workstreams, see also the main Anthropic Fellows posting.
The Anthropic Fellows Program is designed to foster AI research and engineering talent. We provide funding and mentorship to promising technical talent, regardless of previous experience.
Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In one of our earlier cohorts, over 80% of fellows produced papers.
We run multiple cohorts of Fellows each year and review applications on a rolling basis. This application is for cohorts starting in July 2026 and beyond.
The interview process will include an initial application & reference check, technical assessments & interviews, and a research discussion.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
The expected base stipend for this role is 3,850 USD / 2,310 GBP / 4,300 CAD per week, with an expectation of 40 hours per week for 4 months (with possible extension).
Due to the success of the Anthropic Fellows for AI Safety Research program, we are now expanding it across teams at Anthropic. We expect there to be significant overlap in the types of skills and responsibilities across the roles and will by default consider candidates for all the workstreams.
Some of the workstreams may include unique assessment steps; we therefore ask you for workstream preferences in the application. You can see an overview of the current workstreams below:
This page is specific to one of the Anthropic Fellows Workstreams, see also the main Anthropic Fellows posting.
Fellows will undergo a project selection & mentor matching process. Potential research areas and mentors include:
Projects in this workstream may include:
For past project examples, see:
You might be a particularly great fit for this workstream if you:
Logistics Requirements: To participate in the Fellows program, you must have work authorization in the US, UK, or Canada and be located in that country during the program.
Workspace Locations: We have designated shared workspaces in London and Berkeley where fellows will work from and mentors will visit. We are also open to remote fellows in the UK, US, or Canada. We will ask you about your availability to work from Berkeley or London (full- or part-time) during the program.
Visa Sponsorship: We are not currently able to sponsor visas for fellows. To participate in the Fellows program, you need to have or independently obtain full-time work authorization in the UK, the US, or Canada.
Program Duration: The program runs for 4 months, full-time. If you can't commit to the full duration, please still apply and note your constraints in the application. We review these requests on a case-by-case basis.
Please note: We do not guarantee that we will make any full-time offers to fellows. However, strong performance during the program may indicate that a Fellow would be a good fit for full-time roles at Anthropic. In previous cohorts, 25-50% of fellows received a full-time offer, and we’ve supported many more to go on to do great work on AI safety and security at other organizations.
Applications and interviews are managed by Constellation, our recruiting partner. Clicking "Apply here" will take you to their portal, and updates will come from a Constellation address. Constellation also runs the Berkeley workspace and provides program support for fellows working on AI safety and security; fellows on capabilities-focused projects are supported directly by Anthropic. All applicants currently use the same application portal, but we are working to separate applications for safety/security- and capabilities-focused projects in future rounds.
Note: Anthropic's standard policies for full-time roles (minimum education and experience, location-based hybrid policy, and visa sponsorship) do NOT apply to the Fellows Program.
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Apply using this link. We are accepting applications on a rolling basis for the next cohort of Anthropic Fellows, which is expected to start in late September. In some circumstances, we can accommodate fellows starting outside the usual cohort timelines — please note in your application if the September start date doesn't work for you.
This page is specific to one of the Anthropic Fellows Workstreams, see also the main Anthropic Fellows posting.
The Anthropic Fellows Program is designed to foster AI research and engineering talent. We provide funding and mentorship to promising technical talent - regardless of previous experience.
Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In one of our earlier cohorts, over 80% of fellows produced papers.
We run multiple cohorts of Fellows each year and review applications on a rolling basis. This application is for cohorts starting in July 2026 and beyond.
The interview process will include an initial application & reference check, technical assessments & interviews, and a research discussion.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
The expected base stipend for this role is 3,850 USD / 2,310 GBP / 4,300 CAD per week, with an expectation of 40 hours per week for 4 months (with possible extension).
Due to the success of the Anthropic Fellows for AI Safety Research program, we are now expanding it across teams at Anthropic. We expect there to be significant overlap in the types of skills and responsibilities across the roles and will by default consider candidates for all the workstreams.
Some of the workstreams may include unique assessment steps; we therefore ask you for workstream preferences in the application. You can see an overview of the current workstreams below:
This page is specific to one of the Anthropic Fellows Workstreams, see also the main Anthropic Fellows posting.
Fellows will undergo a project selection & mentor matching process. Potential research areas and mentors include:
Projects in this workstream may include:
You might be a particularly great fit for this workstream if you:
Logistics Requirements: To participate in the Fellows program, you must have work authorization in the US, UK, or Canada and be located in that country during the program.
Workspace Locations: We have designated shared workspaces in London and Berkeley where fellows will work from and mentors will visit. We are also open to remote fellows in the UK, US, or Canada. We will ask you about your availability to work from Berkeley or London (full- or part-time) during the program.
Visa Sponsorship: We are not currently able to sponsor visas for fellows. To participate in the Fellows program, you need to have or independently obtain full-time work authorization in the UK, the US, or Canada.
Program Duration: The program runs for 4 months, full-time. If you can't commit to the full duration, please still apply and note your constraints in the application. We review these requests on a case-by-case basis.
Please note: We do not guarantee that we will make any full-time offers to fellows. However, strong performance during the program may indicate that a Fellow would be a good fit for full-time roles at Anthropic. In previous cohorts, 25-50% of fellows received a full-time offer, and we’ve supported many more to go on to do great work on AI safety and security at other organizations.
Applications and interviews are managed by Constellation, our recruiting partner. Clicking "Apply here" will take you to their portal, and updates will come from a Constellation address. Constellation also runs the Berkeley workspace and provides program support for fellows working on AI safety and security; fellows on capabilities-focused projects are supported directly by Anthropic. All applicants currently use the same application portal but we are working to separate applications for safety/security and capabilities focused projects in future rounds.
The below are Anthropic's policies for full time roles. These do NOT apply to the Fellows Program.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Apply using this link. We are accepting applications on a rolling basis for the next cohort of Anthropic Fellows, which is expected to start in late September. In some circumstances, we can accommodate fellows starting outside the usual cohort timelines — please note in your application if the September start date doesn't work for you.
This page is specific to one of the Anthropic Fellows Workstreams, see also the main Anthropic Fellows posting.
The Anthropic Fellows Program is designed to foster AI research and engineering talent. We provide funding and mentorship to promising candidates, regardless of previous experience.
Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In one of our earlier cohorts, over 80% of fellows produced papers.
We run multiple cohorts of Fellows each year and review applications on a rolling basis. This application is for cohorts starting in July 2026 and beyond.
The interview process will include an initial application & reference check, technical assessments & interviews, and a research discussion.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
The expected base stipend for this role is 3,850 USD / 2,310 GBP / 4,300 CAD per week, with an expectation of 40 hours per week for 4 months (with possible extension).
Due to the success of the Anthropic Fellows for AI Safety Research program, we are now expanding it across teams at Anthropic. We expect there to be significant overlap in the types of skills and responsibilities across the roles and will by default consider candidates for all the workstreams.
Some of the workstreams may include unique assessment steps; we therefore ask you for workstream preferences in the application. You can see an overview of the current workstreams below:
This page is specific to one of the Anthropic Fellows Workstreams, see also the main Anthropic Fellows posting.
Fellows will undergo a project selection & mentor matching process. Potential mentors include:
Note: You are welcome to research mentors' prior work, but all applications must be submitted through the official form; please do not contact mentors directly.
For a past example of an engineering-heavy project, see:
Projects in this workstream may include:
You might be a particularly great fit for this workstream if you:
Logistics Requirements: To participate in the Fellows program, you must have work authorization in the US, UK, or Canada and be located in that country during the program.
Workspace Locations: We have designated shared workspaces in London and Berkeley where fellows will work from and mentors will visit. We are also open to remote fellows in the UK, US, or Canada. We will ask you about your availability to work from Berkeley or London (full- or part-time) during the program.
Visa Sponsorship: We are not currently able to sponsor visas for fellows. To participate in the Fellows program, you need to have or independently obtain full-time work authorization in the UK, the US, or Canada.
Program Duration: The program runs for 4 months, full-time. If you can't commit to the full duration, please still apply and note your constraints in the application. We review these requests on a case-by-case basis.
Please note: We do not guarantee that we will make any full-time offers to fellows. However, strong performance during the program may indicate that a Fellow would be a good fit for full-time roles at Anthropic. In previous cohorts, 25-50% of fellows received a full-time offer, and we’ve supported many more to go on to do great work on AI safety and security at other organizations.
Applications and interviews are managed by Constellation, our recruiting partner. Clicking "Apply here" will take you to their portal, and updates will come from a Constellation address. Constellation also runs the Berkeley workspace and provides program support for fellows working on AI safety and security; fellows on capabilities-focused projects are supported directly by Anthropic. All applicants currently use the same application portal, but we are working to separate applications for safety/security-focused and capabilities-focused projects in future rounds.
The policies below apply to Anthropic's full-time roles. They do NOT apply to the Fellows Program.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
Some of the workstreams may include unique assessment steps; we therefore ask you for workstream preferences in the application. You can see an overview of the current workstreams below:
Fellows will undergo a project selection & mentor matching process. Potential mentors include:
Note: You are welcome to research mentors' prior work, but all applications must be submitted through the official form; please do not contact mentors directly.
Our mentors will lead projects in select AI safety research areas, such as:
On our Alignment Science and Frontier Red Team blogs, you can read about past projects, including:
For a full list of representative projects for each area, please see these blog posts: Introducing the Anthropic Fellows Program for AI Safety Research, Recommendations for Technical AI Safety Research Directions.
You might be a particularly great fit for this workstream if you:
Fellows will undergo a project selection & mentor matching process. Potential mentors include:
On our Alignment Science and Frontier Red Team blogs, you can read about some past Fellows projects, including:
You might be a particularly great fit for this workstream if you:
Fellows will undergo a project selection & mentor matching process. Potential mentors include:
For a past example of an engineering-heavy project, see:
Projects in this workstream may include:
You might be a particularly great fit for this workstream if you:
Fellows will undergo a project selection & mentor matching process. Potential research areas and mentors include:
Projects in this workstream may include:
You might be a particularly great fit for this workstream if you:
Fellows will undergo a project selection & mentor matching process. Potential research areas and mentors include:
Projects in this workstream may include:
For past project examples, see:
You might be a particularly great fit for this workstream if you:
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As a Product Support Specialist, you’ll be at the front lines of safely delivering AI to the world by responding to, investigating, and tracking user needs in your day-to-day work. Additionally, you’ll help us identify – and close – gaps in our team’s technical knowledge, provide high-touch support to strategic customers, and demonstrate deep care for how we systematically support customers at scale.
While not required, we're particularly excited about candidates with one or more of these specializations:
We're hiring across a range of experience levels to build depth in both technical capabilities and enterprise support.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Ready to apply?
Apply to Anthropic
Nextdoor (NYSE: NXDR) is the essential neighborhood network. Neighbors, public agencies, and businesses use Nextdoor to connect around local information that matters in more than 350,000 neighborhoods across 11 countries. Nextdoor builds innovative technology to foster local community, share important news, and create neighborhood connections at scale. Download the app and join the neighborhood at nextdoor.com.
The Counsel team is a close-knit and collaborative team that supports the entire Nextdoor community. We operate with the mindset that a small, high-leverage team can outperform a larger one by investing in the right systems, workflows, and tooling. Each member brings individual expertise and a builder's orientation: we don't just deliver legal advice, we design the infrastructure that makes great legal support scalable. We are reliable, trustworthy, and innovative in our support of Nextdoor’s business goals and our purpose of building stronger local communities.
At Nextdoor, we offer a warm and inclusive work environment that embraces a hybrid employment model, blending an in-office presence with a work-from-home experience for our valued employees. The hiring team will go over these expectations with you if you are being considered for a role near one of our offices in San Francisco, Los Angeles, Chicago, Dallas, New York, and London.
As Commercial Counsel at Nextdoor, you’ll be both a trusted commercial lawyer and a systems builder. You'll support our global revenue team on advertising deals and partner with business teams on a range of commercial matters — but equally important, you'll design, build, and continuously improve the workflows, playbooks, templates, and AI-enabled tooling that allow a lean legal team to move at the speed of the business. You will also advise internal clients on the various legal risks, business strategies, and other issues related to commercial transactions and general operations.
Your responsibilities will include:
Compensation, benefits, perks, and recognition programs at Nextdoor come together to create our total rewards package. Compensation will vary depending on your relevant skills, experience, and qualifications.
The starting salary for this role is expected to range from $220,000 to $240,000 on an annualized basis, or potentially greater in the event that your 'level' of proficiency exceeds the level expected for the role. The salary range will be determined by the candidate’s geographic location.
We expect to award a meaningful equity grant for this role. With quarterly vesting, your first vest date will take place within 3 months of your start date.
When it comes to benefits, we have you covered! Nextdoor employees can choose between a variety of health plans, including a 100%-covered employee-only plan option, and we also provide a One Medical membership for concierge care.
At Nextdoor, we empower our employees to build stronger local communities. To create a platform where all feel welcome, we want our workforce to reflect the diversity of the neighbors we serve. We encourage everyone interested in our mission to apply. We do not discriminate on the basis of race, gender, religion, sexual orientation, age, or any other trait that unfairly targets a group of people. In accordance with the San Francisco Fair Chance Ordinance, we always consider qualified applicants with arrest and conviction records.
For information about our collection and use of applicants’ personal information, please see Nextdoor's Personnel Privacy Notice, found here.
Ready to apply?
Apply to Nextdoor
The Knowledge Work team builds the training environments and evaluations that make Claude effective at real-world professional workflows — searching, analyzing, and creating across the tools and documents knowledge workers use every day. As that work scales, the systems behind it need to be as rigorous as the research itself.
We are looking for a Research Engineer to own the reliability, observability, and infrastructure foundation that the team's research depends on. You will be responsible for ensuring our training and evaluation runs remain stable, well-instrumented, and high-quality as they grow in scale and complexity.
A core part of this role is shifting reliability work from reactive to proactive: hardening systems, stress-testing at realistic scale, and building the observability and tooling that surface problems early — so researchers can stay focused on research rather than incident response. You will be the team's stable, context-rich owner for environment health and evaluation integrity, and the primary point of contact for partner teams when issues arise.
Where this role focuses: While you'll work closely with researchers building new training environments, the priority for this role is the reliability those environments depend on. It's best suited to an engineer who finds real ownership and impact in making critical systems dependable, and in being the person behind trustworthy evaluation results the entire organization relies on.
Serve as the dedicated reliability owner for the Knowledge Work training environments, providing continuity of context and reducing the operational overhead of rotating ownership
Own a clean, canonical set of evaluation tools and processes for Knowledge Work capabilities, including the process used for model releases
Build and automate observability, dashboards, and operational tooling for our training environments and evaluation systems, with an emphasis on high signal-to-noise: a small set of trusted metrics and alerts rather than sprawling instrumentation
Proactively harden environments and evaluation systems through load testing, fault injection, and stress testing at realistic scale, so failures surface early rather than during critical training work
Act as the primary point of contact for partner training and infrastructure teams when issues in our environments arise, and drive incidents to resolution
Reduce the operational burden on researchers so they can stay focused on research
Highly experienced Python engineer who ships reliable, well-instrumented code that teammates trust in production
Demonstrated experience operating ML or distributed systems at scale, including significant on-call and incident-response experience
Strong SRE or production-engineering mindset — reaching for SLOs, load tests, and failure injection before reaching for more dashboards
Foundational ML knowledge sufficient to understand what a training environment or evaluation is actually measuring, and recognize when an evaluation has become stale or gameable
Experience building or operating RL environments, agent harnesses, or LLM evaluation frameworks
Familiarity with reward modeling, evaluation design, or detecting and mitigating reward hacking
Experience with observability stacks (metrics, tracing, structured logging) and operational dashboard tooling
Background in chaos engineering, fault injection, or large-scale load testing
Experience with data quality pipelines, drift detection, or evaluation-set curation and versioning
Familiarity with large-scale training or inference infrastructure (schedulers, multi-agent orchestration, sandboxed execution)
Prior experience as a dedicated reliability or operations owner embedded within a research team
Ready to apply?
Apply to Anthropic
As Anthropic continues to grow, the rigor and sophistication of how we plan our workforce needs to grow with us, and our People Analytics team plays a central role in that work.
We are looking for a Workforce Planning Lead to own how Anthropic plans, forecasts, and tracks its workforce, from the underlying data and tooling, to forecasting models, to the cadence and partnership that connects workforce decisions to business strategy. The person in this seat will shape the next chapter of how workforce planning runs at Anthropic.
The role is highly cross-functional. You will partner closely with Recruiting, Finance, People Partners (HRBPs), talent leaders, and business leaders across the company to understand headcount needs, model tradeoffs, and translate strategy into workforce plans. Strong grounding in people analytics methods, judgment, and a generalist orientation are essential. Scope will at times extend beyond traditional workforce planning into broader People Analytics work.
Workforce planning process and strategic partnership
Lead recurring company-wide headcount planning cycles and the ongoing rhythms of review, monitoring, and adjustment that keep plans connected to the business
Partner with Finance, Recruiting, and talent leaders to translate business strategy into workforce implications, including hiring plans, capacity needs, location considerations, and workforce mix
Help inform where and how Anthropic grows over time
Strengthen the underlying infrastructure that supports planning, including position management, governance of core workforce data, and consistency across our people systems
Be a trusted thought partner when growth plans are set, organizational design is evaluated, or new initiatives are resourced, providing the data and modeling that lets leaders make decisions
Forecasting and scenario modeling
Build and maintain quantitative models for headcount forecasting, attrition projection, hiring plan scenario analysis, and capacity planning. Develop scenario analyses that quantify tradeoffs across growth plans, hiring assumptions, attrition rates, and budget considerations. Identify supply and demand gaps before they become bottlenecks.
Analytics, dashboards, and insight
Build self-service dashboards for executives, functional leaders, and managers showing current state, plan versus actual, pipeline health, and key workforce metrics. Develop predictive analytics including attrition risk, hiring velocity forecasting, and supply/demand gap identification. Translate analysis into clear narratives that drive decisions.
Cross-functional execution
Provide hands-on operational support during structural change, including reorganizations, new team formation, and rapid scaling of a function. Support broader People Analytics workstreams as needed. This role will frequently touch attrition, recruiting funnel, location analytics, and other people data.
Ready to apply?
Apply to Anthropic
Wilson Elser is a leading defense litigation law firm with more than 1,400 attorneys in 46 offices throughout the United States. Founded in 1978, we rank among the top 100 law firms identified by The American Lawyer and 36th in the National Law Journal’s survey of the nation’s largest law firms.
Wilson Elser's San Francisco office is currently seeking a Life Sciences Associate to join a team of talented professionals in a collaborative, collegial environment. Our firm is committed to developing professionals who value each other and the service we provide through Teamwork, Collaboration, Client Service, and Innovation. If you are a motivated professional looking for a long-term fit, where you can grow in a role, and be valued and empowered, then we invite you to apply.
This position offers a flexible, hybrid working arrangement, or remote work opportunities.
The Position
This is an excellent opportunity for attorneys to grow their careers across a broad spectrum of life sciences and healthcare practices, including medical malpractice, senior living and long-term care litigation, medical device product liability, and broader healthcare litigation matters.
Discover our newly renovated San Francisco office! Take a virtual tour and explore our building amenities.
Qualifications
Responsibilities
A variety of factors are considered in making compensation decisions, including but not limited to experience, education, licensure and/or certifications, geographic location, market demands, other business and organizational needs, and other factors permitted by law. This position is also eligible for discretionary bonuses. The final salary offered may fall outside this range based on individual circumstances. This position is considered full-time and therefore qualifies for benefits including a 401(k) retirement savings plan and medical, dental, vision, disability, and life insurance. Details of participation in these benefit plans will be provided if an employee receives an offer of employment.
Why Should You Apply?
Ready to apply?
Apply to Wilson Elser - Attorneys

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As a Compute Accounting Manager at Anthropic, you will own the end-to-end accounting for our compute expenses, ensuring accurate financial reporting and robust controls as we scale. You'll be responsible for the complete lifecycle of compute accounting—from contract review and compliance through accruals, prepaids, accounts payable reconciliation, expense classification, and commitment tracking. This role requires deep expertise in accounting for compute usage, strong data and analytical capabilities, and the ability to build scalable processes in a high-growth environment. You'll partner directly with the relevant Engineering, Legal, and Procurement teams to ensure our CSP contracts are properly reflected in our financial records and that we're capturing costs accurately and in compliance with our agreements. As Anthropic continues to grow rapidly, you'll play a critical role in establishing the financial controls and processes that enable us to manage significant compute spend with precision and confidence.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Ready to apply?
Apply to Anthropic
We are looking for Research Engineers to help us redesign how Claude interacts with external data sources. Many of the paradigms for how data and knowledge bases are organized assume human consumers and constraints. This is no longer true in a world of LLMs! Your job will be to design new architectures for how information is organized, and train language models to optimally use those architectures.
Ready to apply?
Apply to Anthropic
Anthropic launches products at lightning speed, and many of them grow at an unprecedented pace, too. For no product is this more true than Claude Code, the fastest-growing software product in history. That said, speed of adoption is not the same as depth of adoption. We’re building a team of Technical Specialists to drive meaningful adoption breadth and depth in our most strategic customers. We’ll do this through delivery of high-quality technical engagement and enablement in the customer’s ~90 days before and after contract signature.
As a Technical Specialist, you’ll largely engage with customers post-sale. Once an account’s implementation setup is complete, you’ll convert bottom-up developer passion into meaningful org-wide adoption through bespoke enablement tailored to the customer's stack, repos, and workflows and focused on deep adoption of proprietary Claude capabilities, which we know drive stickiness.
You'll also engage in strategic pilots before signature — partnering with Sales, Applied AI, and the customer's engineering leads to scope the pilot, run the enablement, and instrument the success criteria that close the deal. Carrying that context from pilot into post-sale is what makes adoption stick.
This is not a technical implementation role — that work sits with our Implementation Specialists. You'll spend your time in front of developers launching Claude Code, department leaders adopting Cowork, on stage at customer events and Anthropic builder summits, and in Claude Code and Cowork, building the demo apps and reference implementations that prove what's possible.
You’re the kind of person engineers want to grab coffee with after your workshops — credible because you can open a terminal in the meeting and actually drive the tool, exciting because you've thought hard about how agentic AI changes how software gets built, and useful because you can answer the next three questions a senior platform engineer is about to ask about security, scale, or agent behavior.
Drive end-user excitement inside enterprise accounts (primary focus)
Design and deliver customer-specific enablement programs — workshops, office hours, "build your first agent" labs, role-based curricula for engineering, data, platform, security, and knowledge-worker audiences
Drive deep adoption of proprietary Claude Code capabilities that make our tools sticky (subagents, hooks, MCP servers, headless mode, managed settings) tailored to the customer's stack, repos, and actual workflows (CI/CD, IDE integration, source control, secure coding, agentic pipelines)
Enable and excite champions and AI Center of Excellence leads inside customer orgs, arming them to scale the motion without you in the room
Support strategic pilots with pre-sales technical enablement
Partner with Account Executives, Applied AI, and customer engineering leads on pilots for strategic accounts
Carry pilot context into post-sale engagement so the customer’s experience is seamless and focused on their unique needs
Run hackathons and builder events that create lasting momentum
Design, scope, and run on-site and virtual hackathons inside customer organizations
Bring the playbook: themes, judging rubrics, starter kits, prize structures, post-event follow-through that turns hackathon prototypes into shipped products
Partner with Anthropic Marketing and Events to amplify the best work from customer hackathons externally where appropriate
Build the demos, content, and reference implementations that sell themselves
Build production-quality demo apps, reference architectures, and code samples on the Claude Developer Platform, Claude Code, and Claude Enterprise (including Cowork) — using the products daily yourself as core infrastructure
Create technical content — tutorials, blog posts, recorded walkthroughs, internal "cookbooks" — tailored to enterprise developer audiences and their real workflows (CI/CD, IDE integration, secure coding, agentic workflows, multi-agent orchestration)
Translate new product capabilities into field-ready content within days of release
Be the trusted technical voice in front of customers
Deliver keynotes, technical talks, and live-coded demos at customer all-hands, internal AI summits, and Anthropic-hosted builder events
Run "ask me anything" sessions with customer engineering organizations and engage credibly on architecture, prompting, agent design, evaluation, developer workflow, and the honest limitations of where AI coding tools are today
Represent Anthropic with our customers' engineering communities and, where appropriate, the broader developer ecosystem
Partner across GTM and back to product
Partner closely with Sales, Applied AI, and Customer Success to sequence engagements that compound (pilot → activation → adoption → value)
Bring back signal from the field: what enterprise developers are asking for, where they're getting stuck, what would unlock the next leap in adoption, and feed it to Product and Engineering
Contribute to the playbook: capture what works in pilots and post-sale enablement, share with peers, raise the floor across the team
3–7+ years in a customer-facing technical role — Solutions Architecture, sales engineering, developer relations, or technical consulting.
You build, and you build with our products. You use Claude Code, the Claude Developer Platform, and Cowork as core infrastructure in your own work. You can ship a working demo over a weekend, steer an agent through a non-trivial task live, and have opinions about what's actually good.
You can hold a room of engineers. You've delivered technical talks, workshops, or training to engineering audiences — platform engineers, ML/AI teams, security-minded enterprise architects — and you know the difference between a session that creates awareness and one that creates champions.
Builder credibility. A career that includes meaningful time as a software engineer, founder, solutions engineer, developer advocate, or Solutions Architect. You've shipped code that other people have depended on.
Pilot and evaluation experience. You've supported technical evaluations or pilots in an enterprise setting — scoping, enablement, handling the security/admin/scale conversations without escalating everything to product.
Strong written and visual communication. Tutorials, decks, code comments, and Slack threads that people actually read.
Project management instincts. You can run a multi-week activation plan — or a pilot against a defined success metric — without dropping the ball.
Willingness to travel. Regular travel for on-site enablement, pilots, hackathons, and customer events. Some evenings and weekends around major events.
Deep enthusiasm for AI with hands-on experience building with LLMs and an instinct for how this technology should be used responsibly inside large organizations.
Ready to apply?
Apply to Anthropic
Anthropic runs some of the largest Kubernetes clusters in the industry. We have fleets of hundreds of thousands of nodes across multiple cloud providers and datacenters to train, research, and serve frontier AI models. The Kubernetes Platform team owns the Kubernetes control plane that makes those clusters work.
We are operating at a scale where the defaults stop working. We own the scheduler and extend it to place topology-sensitive ML workloads across thousands of accelerators at once. We scale the control plane itself — apiserver, etcd, controllers — so it stays responsive as object counts and node counts grow by orders of magnitude. And we build the core cluster services every workload depends on, like service discovery, so they hold up under the same pressure.
We make sure the control plane is fast, correct, and always available. Your work will directly determine whether Anthropic can keep reliably and safely training frontier models as our compute footprint continues to grow.
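To make the topology-sensitive placement problem concrete, here is a deliberately simplified, illustrative sketch of the kind of decision such a scheduler makes: place an all-or-nothing "gang" of workers into a single topology block so accelerator traffic stays local. All names and the data model here are hypothetical; a real extension would implement the Kubernetes scheduler framework's plugin interfaces in Go rather than anything like this.

```python
# Toy illustration only: gang placement within one topology block.
# A production scheduler plugin would hook the Kubernetes scheduler
# framework (Filter/Score extension points); this just shows the idea.

from collections import defaultdict

def place_gang(nodes, gang_size):
    """Pick a topology block with enough free nodes to host the whole
    gang at once, so the workload's accelerators share one block.

    nodes: list of (node_name, topology_block, free_slots) tuples.
    Returns (block, chosen_node_names), or None if no block fits.
    """
    by_block = defaultdict(list)
    for name, block, free in nodes:
        if free > 0:
            by_block[block].append(name)

    # Prefer the tightest block that still fits, to limit fragmentation.
    candidates = [(len(names), block, names)
                  for block, names in by_block.items()
                  if len(names) >= gang_size]
    if not candidates:
        # All-or-nothing: a partial placement would strand accelerators.
        return None
    _, block, names = min(candidates)
    return block, sorted(names)[:gang_size]
```

Run against a small hypothetical fleet, a gang of 2 lands entirely in the smaller rack that fits it, and a gang larger than any single block is refused rather than split.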
Ready to apply?
Apply to Anthropic
Anthropic is bringing Claude to democracies worldwide, and we’re looking for startup-minded engineers to build out our public sector engineering team. Engineers in this role will be working across the stack to scale our products from 0 to 1 and beyond.
While we’re looking for candidates with prior public sector experience, we are also excited about hiring product-minded engineers with a track record of building and scaling products. There’s no shortage of things to build, so startup experience will help in maintaining a sharp focus and plotting an optimal path forward. This role is full-stack and entrepreneurial: you’ll own projects such as building out deployments for new government customers, implementing products to better surface critical information, and working directly with customers to understand how we can best leverage and adapt Claude to power their workflows.
You’d be joining at an exciting inflection point where we are evolving past deployments of basic Claude access for the public sector and toward cutting-edge AI capabilities and a comprehensive public sector experience. The projects the team tackles enable governments to deploy AI applications for their most critical operations—from national security to public services. As the team continues to grow, our focus includes delivering solutions that directly influence how democratic institutions deploy and use advanced AI systems.
Build specialized AI applications and products for governments by developing deep understanding of their processes and workflows
Take ownership of architecting new deployments and designing public sector specific features
Collaborate closely with research on AI models for government applications and workflows
Partner directly with government customers and internal go-to-market teams to translate requirements into technical and product roadmaps
Design and build scalable systems for deployment, user management, and administrative controls
Create clarity and technical direction in a fast-moving environment with unique constraints
8+ years of experience as a full-stack software engineer
Experience integrating and working with AI/ML models and an understanding of their capabilities
Strong technical background with proven success building and shipping enterprise- or government-grade products
Excellent collaboration skills, with the ability to work effectively across functions
Startup experience, particularly in scaling products from zero to one
Experience partnering with sales, customer success, and professional services teams to drive product adoption
Experience delivering software to government networks
An active federal security clearance (Secret or above)
This position requires verification of U.S. citizenship due to citizenship-based legal restrictions. Specifically, this position supports a United States federal, state, and/or local government agency customer and is subject to certain citizenship-based restrictions where required or permitted by applicable law. To meet this legal requirement, citizenship will be verified via a valid passport, or other approved documents, or verified US government clearance.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Infrastructure organization is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users — demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.
Developer Productivity owns the end-to-end experience of how engineers and researchers at Anthropic develop, build, test, and ship code at scale — from the source control and language ecosystems that underpin our monorepo, to the build and CI infrastructure that keeps thousands of daily builds running reliably across multiple cloud providers, to the developer acceleration tooling that deeply integrates Claude into engineering workflows.
Team Matching: Team matching is determined after the interview process based on interview performance, interests, and business priorities. Please note we may also consider you for different Infrastructure teams.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
Anthropic is seeking an exceptional Productivity Engineer to build intelligent systems that amplify our Sales and Sales Development organizations’ effectiveness. In this role, you'll develop Claude-powered automations, alerts, and scalable tooling focused on top-of-funnel activities like lead process automation and prospecting intelligence. This position sits at the intersection of enablement, operations, and AI innovation—creating tools that support our sales teams and reduce friction in the sales process.
As the Productivity Engineer for top-of-funnel activities, you'll be responsible for building the infrastructure that accelerates lead response times, personalizes outbound prospecting, and creates intelligent routing and prioritization systems. This role offers the unique opportunity to architect the future of AI-powered sales processes by creating scalable systems that will define how frontier AI technology enhances early-stage pipeline generation in enterprise sales environments.
Responsibilities
Lead the development for Claude-powered productivity tooling focused on prospecting, lead qualification, and early-stage opportunity creation
Architect intelligent alert systems that notify sales teams of high-intent signals, inbound leads, website activity, and optimal engagements
Build sophisticated automations for lead routing, account research, personalized outreach generation, and follow-up sequencing that drive consistent top-of-funnel behaviors
Design and implement a centralized data architecture that aggregates prospect intelligence from multiple sources (web traffic, intent data, firmographics, technographics) to create unified views for targeting and prioritization
Leverage conversational intelligence and email engagement data to identify high-performing prospecting patterns and surface best practices for personalization, messaging, and timing
Partner cross-functionally with Sales Operations, Sales Development Leadership, and Marketing to establish feedback loops that continuously improve lead quality scoring, routing accuracy, and the effectiveness of prospecting tools and enablement reinforcement
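For illustration, the lead scoring and routing described above can be sketched in a few lines. The signals, weights, threshold, and queue names below are hypothetical placeholders chosen for the example, not Anthropic's actual model:

```python
# Hypothetical weights for intent signals; a real scoring model would be
# tuned against historical conversion data.
SIGNAL_WEIGHTS = {
    "pricing_page_visit": 40,
    "docs_visit": 15,
    "demo_request": 60,
    "employee_count_over_1000": 25,
}

def score_lead(signals: set) -> int:
    """Sum the weights of the intent signals observed on a lead."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def route_lead(score: int, threshold: int = 50) -> str:
    """Send high-scoring leads straight to sales; nurture the rest."""
    return "sales_queue" if score >= threshold else "nurture_queue"
```

In practice a system like this would also feed routing outcomes back into the weights, closing the feedback loop with Sales Operations and Marketing described above.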
Engineering experience working with complex technologies and varied datasets, and building data-driven productivity solutions with AI
Proven track record of developing and deploying automations focused on lead management, prospecting workflows, and early-stage sales processes
Strong technical proficiency with APIs and experience integrating lead sources, GitHub, CRM systems, sales engagement platforms, and marketing automation tools
Deep expertise with sales development tools
Experience building scalable lead routing, scoring, and prioritization systems in fast-growth B2B environments
Strong understanding of prospecting methodologies, inbound/outbound sales motions, and the behaviors that drive top-of-funnel conversion
Ability to translate sales development requirements into technical solutions and work effectively with both BDR teams and engineering stakeholders
Excellence in program management with demonstrated ability to build initiatives from the ground up in ambiguous, fast-paced environments
Experience in sales and/or sales development roles is highly valued
Anthropic is seeking an exceptional GTM Engineer to build intelligent systems that amplify our Sales and Sales Development organizations’ effectiveness. In this role, you'll develop Claude-powered automations, alerts, and scalable tooling focused on top-of-funnel activities like lead process automation and prospecting intelligence. This position sits at the intersection of enablement, operations, and AI innovation—creating tools that support our sales teams and reduce friction in the sales process.
As the GTM Engineer for top-of-funnel activities, you'll be responsible for building the infrastructure that accelerates lead response times, personalizes outbound prospecting, and creates intelligent routing and prioritization systems. This role offers the unique opportunity to architect the future of AI-powered sales processes by creating scalable systems that will define how frontier AI technology enhances early-stage pipeline generation in enterprise sales environments.
Responsibilities
Lead the development for Claude-powered productivity tooling focused on prospecting, lead qualification, and early-stage opportunity creation
Architect intelligent alert systems that notify sales teams of high-intent signals, inbound leads, website activity, and optimal engagements
Build sophisticated automations for lead routing, account research, personalized outreach generation, and follow-up sequencing that drive consistent top-of-funnel behaviors
Design and implement a centralized data architecture that aggregates prospect intelligence from multiple sources (web traffic, intent data, firmographics, technographics) to create unified views for targeting and prioritization
Leverage conversational intelligence and email engagement data to identify high-performing prospecting patterns and surface best practices for personalization, messaging, and timing
Partner cross-functionally with Sales Operations, Sales Development Leadership, and Marketing to establish feedback loops that continuously improve lead quality scoring, routing accuracy, and the effectiveness of prospecting tools and enablement reinforcement
Engineering experience working with complex technologies and varied datasets, and building data-driven productivity solutions with AI
Proven track record of developing and deploying automations focused on lead management, prospecting workflows, and early-stage sales processes
Strong technical proficiency with APIs and experience integrating lead sources, GitHub, CRM systems, sales engagement platforms, and marketing automation tools
Deep expertise with sales development tools
Experience building scalable lead routing, scoring, and prioritization systems in fast-growth B2B environments
Strong understanding of prospecting methodologies, inbound/outbound sales motions, and the behaviors that drive top-of-funnel conversion
Ability to translate sales development requirements into technical solutions and work effectively with both BDR teams and engineering stakeholders
Excellence in program management with demonstrated ability to build initiatives from the ground up in ambiguous, fast-paced environments
Experience in sales and/or sales development roles is highly valued
Anthropic’s mission requires the ability to defend our compute resources against all types of adversaries. Meeting that bar requires more than trustworthy hardware; it requires a software stack to consume the trust primitives the platform produces: device identities, attestation evidence, integrity manifests, DICE chains, topology claims, debug-state assertions, encryption posture, and more.
This role ensures that all security-relevant system state that needs to be attested is accounted for in a robust, comprehensive manner and used appropriately for authorization decisions in production.
You will be the technical owner of Anthropic's platform trust integration surface, the host-side stack that ingests trust primitives produced by hardware and converts them into production gates for workloads. Concretely:
This is an IC engineering role, not a PM role. You will write code, own services, and be accountable for the correctness of Anthropic's platform trust pipeline in production. You'll operate at the interface between Platform Security Engineering and the rest of Anthropic.
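In outline, such a stack ingests attestation evidence from the platform and compares it against reference values before admitting a workload. The data shapes and field names below are hypothetical, a minimal sketch of the pattern rather than Anthropic's actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class AttestationEvidence:
    """Trust primitives a platform might produce (hypothetical shape)."""
    device_id: str
    measurements: dict   # component name -> measured digest
    debug_enabled: bool  # debug-state assertion
    memory_encrypted: bool  # encryption posture

def authorize_workload(evidence: AttestationEvidence,
                       reference_manifest: dict) -> bool:
    """Gate a workload on attested platform state.

    Admit only if debug interfaces are closed, memory encryption is
    active, and every measured component matches the expected digest
    in the integrity manifest.
    """
    if evidence.debug_enabled or not evidence.memory_encrypted:
        return False
    return all(
        evidence.measurements.get(component) == expected
        for component, expected in reference_manifest.items()
    )
```

A production gate of this kind would additionally verify the signatures and freshness of the evidence (e.g. along a DICE certificate chain) before trusting any of its claims.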
As the Enterprise Events Lead at Anthropic, you will own the development and execution of enterprise-focused events that accelerate pipeline, deepen customer relationships, and elevate Anthropic's brand presence with key enterprise audiences. You will be responsible for designing and delivering high-impact event experiences—from major industry conferences to hosted summits and partner activations.
In this role, you'll collaborate closely with Sales, Partnerships, Product Marketing, and Creative teams to ensure every event touchpoint reinforces Anthropic's position as a leader in safe, frontier AI. You'll bring a strategic mindset grounded in data, a talent for flawless execution under pressure, and the ability to translate complex enterprise goals into compelling in-person and hybrid experiences. This is a high-visibility role with significant cross-functional impact, ideal for someone who thrives at the intersection of strategy and logistics in a fast-moving environment.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As a GTM Strategy & Operations Business Partner supporting the AMER Enterprise organization, you will work closely with sales leadership to be a deep expert in a critical segment of our fastest-growing business. This role supports the AMER Enterprise team, one of Anthropic's largest and most strategically significant commercial organizations, and will partner directly with executives to drive strategic analysis, operational efficiency, and growth initiatives across emerging and developing verticals while supporting critical global strategic initiatives.
This includes analyzing segments, facilitating operating cadences, powering business reviews, improving operational efficiency, driving strategic initiatives, and supporting the rapid evolution of our sales motions and organization.
We're building for a future where AI dramatically amplifies what strategy teams can accomplish. We're hiring people with the technical depth, business judgment, and executive potential to operate at that frontier — leveraging AI to eliminate routine work and redirect energy toward the strategic and creative challenges that move the business. This role will evolve rapidly, and we need someone with the ceiling and ambition to evolve with it.
Partner with Global and AMER Enterprise segment leaders to analyze and refine segment-specific strategy including territory planning, resource allocation, performance to plan, and growth initiatives
Support development of targeted strategies across different customer segments and verticals based on their unique characteristics, product-market fit trajectories, and needs
Maintain excellence through business reviews and operating cadences to help achieve company revenue targets and strategic objectives
Be audible-ready about GTM efforts and a deep expert in your segment's book of business and trends with the ability to cut right to what matters for varying audiences from IC to executive
Analyze sales performance metrics and develop insights to optimize segment performance and improve efficiency using SQL, R, or Python to query, model, and visualize data at scale
Leverage Claude to its full capability
Partner to create, evolve and maintain reporting mechanisms, using data to go from "what" to "why"
Synthesize tangible, strategic recommendations based on go-to-market motions, deal outcomes, partner and customer feedback, and product opportunities
Build and maintain dashboards, models, and ad hoc analyses in spreadsheets and BI tools with mastery-level proficiency
Develop and maintain use case frameworks to capture and standardize how customers derive value
Support multiple teams and executives in leveraging insights for new deals, expansions, and retention strategies
Partner with GTM teams to evolve and refine sales motions and best practices across segments and regions
Serve as a trusted strategic advisor to senior and executive stakeholders across the GTM organization, delivering insights and recommendations with clarity, conviction, and composure
Navigate high-stakes conversations with intense, demanding executives — maintaining poise under pressure while delivering candid, well-supported perspectives
Build deep credibility through consistent follow-through, proactive communication, and the ability to challenge assumptions respectfully when the data warrants it
Translate complex analyses into executive-ready narratives that drive decisions, managing up effectively across multiple senior leaders with competing priorities
Work closely with Revenue Operations, Sales Enablement, Strategic Finance, and Product teams to drive initiatives and raise the bar for excellence in the course of business
Support coordination between AMER Enterprise sales segments and partnership teams
Help prepare materials for executive reviews and planning sessions
Deep alignment to Anthropic’s mission
5+ years of experience in sales strategy, revenue operations, management consulting, or GTM operations
Proficiency in SQL, R, or Python — you can independently write queries, build models, and extract insights from large datasets without relying on others
Mastery of spreadsheet tools (Google Sheets / Excel) — pivot tables, complex formulas, data modeling, and scenario analysis are second nature to you
Strong analytical skills with the ability to translate data into actionable insights and compelling executive narratives
Deep experience with GTM systems like Salesforce and business intelligence tools (Looker, Tableau, etc.)
Proven executive stakeholder management skills — you have a track record of presenting to and influencing C-suite and VP-level leaders in high-intensity environments.
An ability to pull signal through noise and an internal drive to create clarity. You strive for simplicity as the most elegant solution to complex situations.
A demonstrated ability to build trust with and influence a variety of stakeholders, including colleagues and other members of the GTM Strategy and Revenue Operations team
A healthy balance of idealism with pragmatism and the highest internal bar for excellence. You have been accountable for the results of your recommendations.
Track record of driving forward high-growth sales organizations
Disciplined project management and exceptional communication and interpersonal abilities
A desire and readiness to do the best work of your career
Bachelor's degree required; MBA or advanced degree preferred
Direct experience in a commercial role (pre-sales, customer-facing, partnerships, founder, etc.)
Experience both at large enterprises in technology, finance, or professional services and in fast-paced startup building
Experience with AI/ML companies and consumption business models
Prior experience building frameworks for sales use cases or value propositions
Background in scaling sales organizations through 2-3x+ growth periods