Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role:
As an Applied AI Security Architect, you will serve as Anthropic's trusted security expert for our most demanding enterprise customers. You'll engage directly with CISOs, security architects, compliance officers, and technical leaders at the world's largest financial institutions, insurance companies, and other highly regulated enterprises to address their most critical questions about deploying Claude safely and securely.
This is a pre-sales technical role focused on security, compliance, networking, and data architecture. Your job is to walk into a room full of security professionals and demonstrate deep expertise in enterprise security, regulatory compliance, and data protection. You'll help customers understand Claude's security architecture, data handling practices, and deployment options, and partner with them to design solutions that meet their specific regulatory and organizational requirements.
You'll bring significant experience in enterprise security, cloud architecture, and technical pre-sales within regulated industries. Whether you've been a Security Architect, Solutions Architect, Field CTO, or senior pre-sales engineer at a cloud or security vendor, what matters is that you understand how large institutions evaluate and adopt technology, especially in financial services, and can speak credibly to their security and compliance concerns.
We are looking for someone excited to help define how enterprises should think about security and compliance in the age of AI. How do MCP, autonomous agents, and RBAC work together? If working at the intersection of AI adoption and regulated industries excites you, this is the role for you.
Responsibilities:
You may be a good fit if you have:
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Our benefits program is a core expression of how we support Ants (Anthropic employees) and their families, now and over the arc of their lives. In just over two years, our Benefits function has scaled to a global program covering thousands of employees across multiple regions, and we are now adding depth to match that breadth.
We are hiring a senior individual contributor to anchor a major piece of our global program. The role is shaped to fit the strongest candidate's regional expertise, and you will primarily own one of two scopes: the Americas program or the International program.
Both versions of this role share the same level, the same reporting line, and the same core competencies. We will identify which scope fits best during the interview process based on where your hands-on experience and strongest interests lie.
This is a builder role. You will design programs where infrastructure does not yet exist, run broker selections, negotiate carrier terms, and stand up audit-ready operations across multiple jurisdictions.
You will report to the Head of Benefits and partner closely with our Senior Benefits Partner.
Regardless of regional anchor, you will:
If you anchor in Americas, you will additionally:
If you anchor in International, you will additionally:
The Discovery team is organized around the north star of building an AI scientist — a system capable of solving the long-horizon reasoning challenges and core capabilities needed to push the scientific frontier. The team trains large-scale models, runs complex multi-week experiments, and builds novel products at the intersection of AI and science.
As a Technical Program Manager on the Discovery team, you will own the systems and programs that determine how fast our research moves: compute planning, scientific RL environment health, and the vendor pipelines that supply them, with scope to incubate new programs in domains like bio R&D. Strong candidates should have an ML engineering or research background and have grown into program leadership. You'll need real technical depth: the ability to debug data pipelines, read RL transcripts to spot issues, and make allocation and quality decisions in real time when experimental or production runs hit problems. You'll need organizational effectiveness in equal measure: the ability to navigate a fast-growing organization, quickly identify the critical people and teams across research, infrastructure, product, and data operations, and coordinate across them without losing velocity.
Join us in our mission to build AI systems that push the frontiers of science and benefit humanity.
The Capacity & Efficiency team sits inside Anthropic’s Compute organization and owns the cost, utilization, and attribution story for non-accelerator infrastructure — the network, compute, and storage backbone that moves petabytes between training clusters, inference fleets, and object storage across clouds and regions. The scale is real, the spend is large, and the efficiency levers are still mostly unpulled.
We work alongside the Systems Networking team (who build and operate the fabric) and the Observability team (who own the telemetry platform). This role lives at the intersection: you'll use deep networking knowledge and rigorous measurement to figure out where and how bandwidth, latency, and dollars are being spent, then find optimization opportunities and land them.
We're looking for a network engineer who thinks in metrics first. You understand spine-leaf fabrics, BGP, SDN overlays, and cloud interconnect products well enough to build them; in this role, you will instrument them, model their cost-per-bit, and squeeze out inefficiency while making sure bits reach the right places as cheaply as possible. You'll own the observability and efficiency surface for Anthropic's network, from per-flow telemetry on backbone routers to cost attribution that tells a research team exactly what their checkpoint sync is costing.
This is a hands-on IC role. You'll write code (Python, Go), build dashboards, and model capacity. You'll also influence architecture: when the data says a traffic pattern is pathological, you'll be in the room root-causing and fixing it.
You will work across three areas: network telemetry and observability, traffic engineering, and cost modeling and attribution. We expect you to be strong in at least two and willing to grow into the third. If you're a telemetry-first engineer who's never built a chargeback model, or a traffic engineer who hasn't shipped eBPF probes, apply anyway and tell us which axis you want to grow along.
Build the network observability stack. Design and deploy telemetry pipelines — sFlow/IPFIX, gNMI streaming, eBPF host probes — that turn packet counters into per-flow, per-tenant, per-workload cost and utilization data. Own the SLIs for backbone and DCN fabric health.
Hunt for efficiency. Analyze inter-region traffic patterns, identify hot links and stranded capacity, and quantify the dollar impact. Build the models that tell us whether we should buy more capacity, or move the workload.
Own QoS and traffic engineering. Design and operate traffic classification, marking, and shaping across the backbone. Make sure bulk checkpoint transfers don’t starve latency-sensitive inference, and that we’re not paying premium cross-region rates for traffic that could take the cheap path.
Drive cost attribution. Tie network spend — egress, interconnect ports, transit, optical leases — back to the teams and workloads that generate it. Make network cost a first-class input to capacity planning and workload placement decisions.
Automate. Extend our intent-based network configuration systems and write the tooling that turns your efficiency findings into safe, reviewable, and impactful changes.
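As an illustration of the cost-attribution responsibility above, here is a minimal Python sketch of rolling per-flow records up into dollars per team. The region pairs, egress rates, and workload-to-team mapping are invented for the example; real inputs would come from flow telemetry and cloud billing data.

```python
from collections import defaultdict

# Hypothetical per-GB egress rates by (src_region, dst_region) pair.
# Real rates would be loaded from cloud billing data.
EGRESS_RATE_PER_GB = {
    ("us-east", "us-west"): 0.02,
    ("us-east", "eu-west"): 0.05,
}

# Hypothetical mapping from workload tag to owning team.
WORKLOAD_OWNER = {
    "checkpoint-sync": "pretraining",
    "inference-serving": "inference",
}

def attribute_costs(flow_records):
    """Roll flow records up into dollars per team.

    Each record is a dict with src_region, dst_region, workload,
    and bytes (already scaled for any packet sampling).
    """
    costs = defaultdict(float)
    for rec in flow_records:
        rate = EGRESS_RATE_PER_GB.get((rec["src_region"], rec["dst_region"]), 0.0)
        team = WORKLOAD_OWNER.get(rec["workload"], "unattributed")
        costs[team] += rec["bytes"] / 1e9 * rate  # bytes -> GB -> dollars
    return dict(costs)
```

The interesting engineering is upstream of this loop: producing flow records that are sampled correctly and tagged with a workload identity you can trust.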
Have 5+ years operating large-scale production networks — data center fabrics (spine-leaf, Clos), backbone/WAN, or hyperscaler-adjacent environments.
Are genuinely fluent across the stack: BGP (including policy and communities), ECMP, VXLAN/EVPN or equivalent overlays, QoS (DSCP, queuing, shaping), and L1/optical basics (DWDM, coherent, LAGs).
Know at least one major CSP’s networking model deeply — AWS (VPC, TGW, Direct Connect, Gateway Load Balancer) or GCP (Shared VPC, Interconnect, Cloud Router, Network Connectivity Center) — and understand how their overlays interact with physical underlays.
Have built or operated network telemetry at scale: streaming telemetry (gNMI/OpenConfig), flow export (sFlow, IPFIX, NetFlow), or eBPF-based host-side instrumentation. You can reason about sampling, cardinality, and storage tradeoffs.
Are comfortable writing Python or Go to build production tooling: telemetry pipelines, infrastructure-as-code, and configuration management and automation for network devices.
Think quantitatively by default. You reach for a notebook or a Grafana query before you reach for an opinion, and you can turn messy counter data into a defensible cost model.
Communicate crisply. You can explain to a finance partner why a 10% egress reduction matters, and to a network engineer why a specific ECMP imbalance is costing real money.
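On the sampling and cardinality point above: a minimal sketch of the arithmetic behind 1-in-N packet sampling, using the commonly cited sFlow rule of thumb that the relative error of a sampled count is roughly 196 * sqrt(1/c) percent at 95% confidence for c samples. The numbers here are purely illustrative.

```python
import math

def scale_up(sampled_packets, sampling_rate, avg_packet_bytes):
    """Scale a 1-in-N packet sample back to an estimate of true bytes.

    Each sampled packet stands in for roughly `sampling_rate` real packets.
    """
    return sampled_packets * sampling_rate * avg_packet_bytes

def percent_error_95(samples):
    """Approximate 95%-confidence relative error (percent) of a sampled
    flow count, per the standard sFlow accuracy rule of thumb."""
    return 196.0 * math.sqrt(1.0 / samples)
```

This is why sampling rate is a tradeoff: a higher N cuts collector load and storage cardinality, but small flows then need longer windows before their byte estimates are defensible.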
SRE experience for large-scale network infrastructure — designing for reliability, defining SLOs/SLIs for network services, capacity planning with error budgets, and incident response for network-impacting outages at scale.
Background on a cloud provider's networking team or a cloud networking product team — building or operating the interconnect, backbone, or SDN control plane from the provider side, not just consuming it as a customer.
Familiarity with AI/ML infrastructure traffic patterns such as collective communication (all-reduce, all-gather), checkpoint/weight transfer, and inference serving, and an understanding of how these workloads stress networks differently from traditional traffic in terms of burst behavior, flow synchronization, and bandwidth symmetry.
Experience with HPC fabrics like InfiniBand, RoCE v2, lossless Ethernet, or custom high-radix topologies and an understanding of how job placement, congestion management, and adaptive routing interact at scale.
Background in traffic engineering for large backbones and the operational judgment to know when TE is worth the complexity.
Hands-on time with multi-cloud connectivity: cross-cloud peering, private interconnect products, and the billing models that come with them.
Experience building cost/chargeback systems for shared infrastructure, or FinOps exposure in a large cloud environment.
Build a per-flow cost attribution pipeline that traces every byte of cross-region egress back to the team and workload that generated it
Design QoS policy for the private backbone that prevents bulk checkpoint transfers from starving inference traffic
Model whether it's cheaper to buy an additional 1.6Tb interconnect tranche or to re-route traffic through existing capacity
Instrument DCN fabric utilization with streaming telemetry and build the Grafana dashboards that become the team's source of truth for network observability
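The buy-versus-reroute question above reduces, in its simplest form, to comparing a fixed port fee plus usage on the new tranche against usage charges on existing (typically pricier) paths. A toy Python sketch with invented prices, ignoring real-world factors like latency penalties and commit discounts:

```python
def cheaper_option(port_fee, buy_rate, alt_rate, gb_per_month):
    """Compare monthly cost of a new interconnect tranche vs. re-routing.

    port_fee: fixed monthly fee for the new port (dollars, hypothetical)
    buy_rate: per-GB rate on the new tranche
    alt_rate: per-GB rate on the existing alternative path
    gb_per_month: traffic volume to carry
    Returns (choice, monthly_cost).
    """
    buy = port_fee + buy_rate * gb_per_month
    reroute = alt_rate * gb_per_month
    return ("buy", buy) if buy < reroute else ("reroute", reroute)
```

The fixed fee creates a break-even volume: below it, re-routing over existing capacity wins; above it, the new tranche pays for itself.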
Anthropic’s network footprint is growing faster than our ability to reason about it. We’re turning up tens of terabits of private backbone capacity, peering across clouds, and moving model weights that keep getting larger. The efficiency opportunities are enormous and largely untouched — this is a chance to build the measurement and optimization layer from the ground up, with real budget impact and direct influence on how Anthropic’s infrastructure scales.
The Cloud Inference team scales and optimizes Claude to serve the massive audiences of developers and enterprise companies across AWS, GCP, Azure, and future cloud service providers (CSPs). We own the end-to-end product of Claude on each cloud platform, from API integration and intelligent request routing to inference execution, capacity management, and day-to-day operations.
Within Cloud Inference, the model & inference launch team owns the validation pipeline for our inference server and load balancer on these platforms. We're responsible for every inference change — model launches, performance improvements, safeguard integrations — landing on cloud platforms with correctness, performance, and reliability intact.
This is high-leverage infrastructure work: validation has to be fast and cheap enough to run on the same accelerators that serve customers, trustworthy enough to replace manual checks, and consistent enough that a change that works on Anthropic's first-party infrastructure works everywhere. It directly determines how fast frontier models and features ship to every cloud platform, and how quickly performance wins reach production, reclaiming capacity at a time when compute is our scarcest resource.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
We are seeking a GRC Automation Lead to join our GRC organization and build the technical foundation for how we scale our risk and compliance programs. In this role, you will lead the team that designs and implements automated workflows, data pipelines, and integrations that transform manual compliance processes into scalable engineering systems.
This is a greenfield opportunity to establish the team, architecture, and integrations that will define how we approach governance, risk, and compliance at Anthropic. The core challenge is a data problem: compliance information lives across dozens of systems—cloud infrastructure, identity providers, HR platforms, ticketing tools, code repositories—and your job is to design systems that bring it together, normalize it, and make it actionable. Success in this role comes from understanding how systems connect and how data flows between them, not from writing code yourself.
At Anthropic, you'll also have a unique advantage: the ability to design AI-powered workflows where Claude acts as an extension of your team, handling tasks that would traditionally require additional headcount or manual effort. You'll need ingenuity to identify where agentic AI can accelerate evidence collection, interpret unstructured data, triage compliance gaps, and augment human judgment in risk assessments. Working closely with Security, IT, and Engineering teams, you'll translate compliance and regulatory requirements into solutions that support audit programs including SOC 2, ISO, HIPAA, and FedRAMP. You'll build systems that combine traditional automation with AI capabilities to achieve scale that wouldn't otherwise be possible.
Deadline to apply: None, applications will be received on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Ready to apply?
Apply to Anthropic
Building a Brand starts with a Story
FIRST is a leading global brand experience agency providing Content, Creative, Digital Technology, Management & Delivery, and Data & Insights services. For 30 years, we have utilized three flexible delivery models: Embedded teams, Corporate Campus operations, and a Full-Service Agency.
Managing over 37,000 events & projects annually across 110 countries, FIRST serves 130+ brands, specializing in Fortune 500 companies within the Financial Services, Technology, Healthcare, Media, and Professional Services sectors. FIRST is part of the broader Encore family of companies. Encore is a global leader in event production and technology.
Consistently recognized for excellence, recent honors include 2026 Chief Marketer Agency of the Year, the Ad Net Zero Award for sustainability, and more.
Learn more at www.firstagency.com
What You Would Get To Do
The Staff Accountant will play a critical role in ensuring the integrity of financial data and supporting the day-to-day operations of the Finance department. This role bridges meticulous transactional oversight, specifically in Accounts Payable and Cash Management, with essential month-end reporting and internal stakeholder support.
This role is an integral part of the overall business and will require a great deal of attention to detail, excellent time management, and the ability to work individually and as part of a team.
Your Contributions
Accounts Payable and Vendor Management
Cash Receipt Management and Banking
Month-End Closing and Financial Reporting
Finance Operations and Administration
Ad Hoc and Strategic Support
What We Are Looking For
At FIRST, we believe that our success is driven by the passion, creativity, and commitment of our teams. That's why we offer a range of benefits and perks to support your personal and professional growth.
United States Residents: The base pay for this position ranges from $77,700 - $89,400. This role is also eligible for an annual discretionary bonus.
Please note that the national salary range listed in the job posting reflects the new hire salary range across various U.S. locations that would be applicable to the position. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.
#LI-hybrid #LI-MV1
FIRST is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind. FIRST is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. All employment decisions are based on business needs, job requirements and individual qualifications without regard to race, color, religion or belief, national, social or ethnic origin, sex (including pregnancy), age, physical, mental or sensory disability, sexual orientation, gender identity and/or expression, marital, civil union or domestic partnership status, past or present military service, family medical history or genetic information, family or parental status, or any other status protected by the laws or regulations in the locations where we operate. We stand firm: FIRST will not tolerate discrimination or harassment based on any of these characteristics. If any candidate feels that they are unable to meet the requirements of the role, please get in touch to discuss adjustments or other suitable roles.
Ready to apply?
Apply to First Global Management Services, Inc.
Anthropic is hiring a Lead for our Talent Development & Enablement team to help build the systems that help Ants (particularly managers) thrive. This is a build role: you'll partner directly with the team lead to think holistically about talent development across Anthropic, with a strong primary focus on the manager and leader audience.
You'll bring deep instructional design expertise, exceptional facilitation skills, and a strong point of view on how to grow people, and managers in particular, in an environment of unprecedented growth and change. You'll design across modalities (self-service, live virtual, in-person) and across audiences (new hires, ICs, first-time managers, experienced leaders, executives), and you'll treat AI as a chance to reimagine what talent development can be, not retrofit it onto what already exists.
This role suits someone who is energized by extreme ambiguity, ships fast, and wants to invent the playbook rather than execute someone else's. You'll be one of the first specialist hires on a new team, which means your point of view will shape what we build and how we measure success.
Ready to apply?
Apply to Anthropic
M0 is the universal stablecoin platform. With M0, builders can create their own application-specific stablecoins, and customize attributes such as branding, transfer & compliance behaviors, and reward distribution mechanics. By leveraging M0, businesses can monetize their digital dollar use cases and maintain control over their money tech stack, while not being burdened by issuance and operational heavy-lifting.
M0 is seeking an experienced and exceptionally sharp corporate lawyer to be a key player in our growth. Beyond being a pragmatic lawyer with a proven track record of executing complex deals, this team member should be a creative legal strategist and a true business partner. You will lead the legal function in a deeply cross-functional environment and help shape M0’s long-term legal architecture.
Reporting to the Deputy COO, you will serve as M0’s senior-most legal advisor and operational owner of the legal function – responsible for corporate governance, commercial matters, regulatory navigation, product counseling, risk management, and strategic initiatives. You will also play a critical role in hiring and developing a team as M0 scales. This role requires a leader who can operate at every altitude, providing strategic counsel to the executive team in the morning, and diving deep into the details of a critical transaction or process improvement that same afternoon.
Ready to apply?
Apply to M0
As an Analytics Data Engineering Manager focused on Product, you will build and lead the analytics engineering team responsible for creating the data foundations that enable data-driven decision making across Anthropic’s Product organization. You will oversee the development of scalable data solutions for Product pillars – including Consumer, Claude Code, Enterprise & Verticals, Growth, and Platform Product – managing a team of analytics engineers and working closely with stakeholders across Data Science, Product, and Engineering to ensure teams have access to reliable, accurate metrics that can scale with our company’s growth.
In this role, you will balance hands-on technical leadership with people management, setting the strategic vision for product data foundations while developing and mentoring team members. You will partner closely with Product Data Scientists, Product Managers, and Product Engineers to understand how users interact with Claude, how to measure product quality and growth, and how to transform raw event logs into insightful data marts that power product decisions.
Build and scale the Product Analytics Engineering team, including hiring and mentoring a team of high-performing analytics engineers embedded with Product pillars
Define and execute the strategic roadmap for product data foundations and analytics capabilities
Oversee the design and implementation of scalable data pipelines, data models, and analytics solutions that transform raw product event logs into canonical datasets and insightful data marts
Partner with Data Science, Product, and Engineering leadership to understand data needs and translate them into technical requirements
Establish and maintain high data integrity standards, SLAs, alerting, and best practices for the team
Drive the development of foundational data products, dashboards, and tools to enable self-serve analytics; partner with the Data Science team to build innovative data tools using Claude to scale data-driven decisions across Product teams
Foster a culture of technical excellence, continuous learning, and data-driven decision making
Serve as a technical thought leader for data modeling, ETL processes, and product analytics infrastructure
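The core transformation these responsibilities describe, turning raw product event logs into a canonical dataset, can be sketched in miniature. The event shape and the daily-active-users metric below are illustrative stand-ins, not Anthropic's actual schemas:

```python
from collections import defaultdict

def daily_active_users(events):
    """Roll raw event logs up into a per-day active-user count,
    the shape of a minimal 'data mart' table."""
    users_by_day = defaultdict(set)
    for e in events:
        users_by_day[e["date"]].add(e["user_id"])
    return {day: len(users) for day, users in sorted(users_by_day.items())}

events = [
    {"date": "2025-01-01", "user_id": "a"},
    {"date": "2025-01-01", "user_id": "b"},
    {"date": "2025-01-01", "user_id": "a"},  # duplicate events collapse into one user
    {"date": "2025-01-02", "user_id": "a"},
]
print(daily_active_users(events))  # {'2025-01-01': 2, '2025-01-02': 1}
```

In practice this aggregation would live in a dbt model or warehouse SQL rather than application code; the point is the same canonical roll-up.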
5+ years of experience managing analytics engineering or data engineering teams, preferably in a scaling startup environment
8+ years of total experience in analytics engineering, data engineering, or similar data-focused roles
Deep expertise in data modeling, ETL pipelines, and data warehouse architecture
Strong technical foundation with expertise in SQL, Python, dbt, and modern data stack tools
Proven track record of building and leading high-performing teams
Experience partnering with Data Science, Product, and Engineering leaders to deliver key product metrics and user behavior insights
Demonstrated ability to balance strategic thinking with hands-on technical leadership
Strong communication skills with the ability to translate complex technical concepts for diverse audiences
Experience scaling analytics functions from early stage to maturity in rapidly changing environments
Track record of establishing data governance, quality standards, and best practices
A bias for action and urgency, not letting perfect be the enemy of the effective
A “full-stack mindset”, not hesitating to do what it takes to solve a problem end-to-end
A passion for Anthropic’s mission of building helpful, honest, and harmless AI
Ready to apply?
Apply to Anthropic
Anthropic manages one of the largest and fastest-growing accelerator fleets in the industry — spanning multiple accelerator families and clouds. The Accelerator Capacity Engineering (ACE) team is responsible for making sure every chip in that fleet is accounted for, well-utilized, and efficiently allocated. We own the data, tooling, and operational systems that let Anthropic plan, measure, and maximize utilization across first-party and third-party compute.
As an engineer on ACE, you will build the production systems that power this work: data pipelines that ingest and normalize telemetry from heterogeneous cloud environments, observability tooling that gives the org real-time visibility into fleet health, and performance instrumentation that measures how efficiently every major workload uses the hardware it’s running on. You will be expected to write production-quality code every day, operate alongside Kubernetes-native infrastructure at meaningful scale, and directly influence decisions around one of Anthropic’s largest areas of spend.
You’ll collaborate closely with research engineering, infrastructure, inference, and finance teams. The work requires someone who can move comfortably between data engineering, systems engineering, and observability, and who thrives in a high-autonomy, high-ambiguity environment.
The team’s work spans four functional areas. Depending on your background and interests, you’ll focus primarily in one, but the boundaries are fluid and the problems overlap:
Data infrastructure — collecting, normalizing, and serving the fleet-wide data that powers everything else. This means building pipelines that ingest occupancy and utilization telemetry from Kubernetes clusters, normalizing billing and usage data across cloud providers, and maintaining the BigQuery layer that the rest of the org queries against. Correctness, completeness, and latency matter here.
Fleet observability — making the state of the accelerator fleet legible and actionable in real time. This means building cluster health tooling, capacity planning platforms, alerting on occupancy drops and allocation problems, and driving systemic improvements to scheduling and fragmentation. The work sits at the intersection of Kubernetes operations and cross-team coordination.
Compute efficiency — measuring and improving how effectively every major workload uses the hardware it’s running on. This means instrumenting utilization metrics across training, inference, and eval systems, building benchmarking infrastructure, establishing per-config baselines, and collaborating directly with system-owning teams to close efficiency gaps.
Internal compute tooling — building the platforms and interfaces that make capacity data usable across the org. This includes capacity planning tools, workload attribution systems, cost dashboards, and self-service APIs. The consumers are research engineers, infrastructure teams, finance, and leadership — each with different needs and different levels of technical depth. The work involves product thinking as much as engineering: figuring out what people actually need, defining schema contracts, and making the data discoverable.
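A minimal sketch of the normalization problem the data-infrastructure area describes: each provider reports usage in its own shape, and the pipeline maps every payload onto one canonical record. The payload fields, units, and providers below are hypothetical stand-ins, not real export formats:

```python
from dataclasses import dataclass

@dataclass
class UtilizationRecord:
    # Canonical shape the rest of the org queries against (illustrative).
    provider: str
    cluster: str
    accelerator_hours: float
    utilization_pct: float  # 0-100

def normalize(provider: str, raw: dict) -> UtilizationRecord:
    """Map a provider-specific telemetry payload onto the canonical record.

    Both payload shapes here are invented for illustration.
    """
    if provider == "gcp":
        return UtilizationRecord(
            provider="gcp",
            cluster=raw["resource"]["cluster_name"],
            accelerator_hours=raw["usage"]["amount"] / 3600.0,   # seconds -> hours
            utilization_pct=raw["metrics"]["duty_cycle"] * 100.0,  # fraction -> pct
        )
    if provider == "aws":
        return UtilizationRecord(
            provider="aws",
            cluster=raw["ClusterArn"].split("/")[-1],
            accelerator_hours=raw["UsageQuantity"],  # already in hours
            utilization_pct=raw["AvgUtilization"],   # already a percentage
        )
    raise ValueError(f"unknown provider: {provider}")

rec = normalize("gcp", {
    "resource": {"cluster_name": "train-us-east"},
    "usage": {"amount": 7200},
    "metrics": {"duty_cycle": 0.85},
})
print(rec.cluster, rec.accelerator_hours)  # train-us-east 2.0
```

The real versions of these mappings have to handle missing fields, unit drift, and provider-specific billing quirks; the canonical record is what makes the downstream BigQuery layer queryable with one schema.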
You will be placed on a pod based on your background and interests. We are especially focused on hiring for Data Platform, but strong candidates for any of the active pods will move forward.
Build and operate data pipelines that ingest accelerator occupancy, utilization, and cost data from multiple cloud providers into BigQuery. Own data completeness, latency SLOs, gap detection, and backfill automation.
Develop and maintain observability infrastructure — Prometheus recording rules, Grafana dashboards, and alerting systems — that surface actionable signals about fleet health, occupancy, and efficiency.
Instrument and analyze compute efficiency metrics across training, inference, and eval workloads. Build benchmarking infrastructure, establish per-config baselines, and work with system-owning teams to improve utilization.
Build internal tooling and platforms that enable capacity planning, workload attribution, and cluster debugging. The consumers are other engineering teams, finance, and leadership — not external users.
Operate Kubernetes-native systems at scale — deploying data collection agents, managing workload labeling infrastructure, and understanding how taints, reservations, and scheduling affect capacity.
Normalize and reconcile data across heterogeneous sources — including AWS, GCP, and Azure billing exports, vendor-specific telemetry formats, and internal systems with different schemas and billing arrangements.
Collaborate across organizational boundaries with research engineering, infrastructure, inference, and finance teams. Gather requirements from technical stakeholders, translate them into useful systems, and communicate trade-offs to non-technical audiences.
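The gap-detection and backfill responsibility above reduces, at its core, to finding missing samples in a timestamped series. A toy version, assuming a fixed collection interval (the 60-second cadence and the timestamps are invented for illustration):

```python
def find_gaps(timestamps, expected_interval):
    """Return (start, end) pairs where consecutive samples are further apart
    than the expected collection interval -- candidates for backfill."""
    gaps = []
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > expected_interval:
            gaps.append((prev + expected_interval, cur))
    return gaps

# Samples every 60s, with one dropped window between t=120 and t=300.
samples = [0, 60, 120, 300, 360]
print(find_gaps(samples, 60))  # [(180, 300)]
```

A production pipeline would run a check like this per cluster and per metric, alert when gaps exceed the latency SLO, and feed the resulting windows to backfill jobs.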
5+ years of software engineering experience with a strong track record building and operating production systems. You write code every day — this is a hands-on engineering role, not a planning or coordination role.
Kubernetes fluency at operational depth — you’ve operated production K8s at meaningful scale, not just written manifests. Comfort with scheduling, taints, labels, node management, and debugging cluster-level issues.
Data pipeline engineering experience — designing, building, and owning the full lifecycle of production data pipelines. Experience with data warehouses (BigQuery preferred), schema management, streaming ingestion, SLOs for latency and completeness, and a strong instinct for correctness.
Observability tooling experience — Prometheus, PromQL, and Grafana are in the critical path for this team. Experience writing recording rules, understanding metric semantics, and building monitoring systems that engineering teams actually rely on.
Python and SQL at production quality. Most pipeline code is Python; the presentation layer is BigQuery SQL including table-valued functions and views. Both need to be idiomatic, well-tested, and maintainable.
Familiarity with at least one major cloud provider (AWS, GCP, or Azure) at the infrastructure level — compute, billing, usage APIs, cost management tooling. Multi-cloud experience is a strong plus.
High autonomy and strong cross-team communication. You can gather your own requirements, navigate ambiguity, and work across organizational boundaries. Scrappiness and ownership matter more than polish.
Multi-cloud data ingestion experience — especially working with AWS and GCP APIs, billing exports, or vendor-specific telemetry formats. Experience normalizing data from external providers with different billing arrangements is directly applicable.
Accelerator infrastructure familiarity — GPU metrics (DCGM), TPU utilization, Trainium power and utilization metrics, or experience working with ML training/inference systems at the hardware level.
Performance engineering and benchmarking experience — building benchmark harnesses, establishing baselines, reasoning about compute efficiency (FLOPs utilization, memory bandwidth, interconnect throughput), and working with system teams to diagnose and improve performance.
Data-as-product thinking — experience building internal data products with self-service access, schema contracts, API serving, documentation, and discoverability. Not just building pipelines, but thinking about how platform data gets consumed.
Experience with capacity planning, resource management, or cost attribution systems at a hyperscaler or large-scale ML environment. FinOps, chargeback systems, or infrastructure cost modeling.
Familiarity with ClickHouse, Terraform, or Rust. ClickHouse is the team’s current streaming store; Terraform for infrastructure-as-code; Rust for high-performance data collection agents.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As an Analytics Engineer, you will be an early member of the Data Science & Analytics team building the foundation to scale analytics across our organization. You will collaborate with key stakeholders in Engineering, Product, GTM and other areas to build scalable solutions to transform data into key metrics reporting and insights. You will be responsible for ensuring teams have access to reliable, accurate metrics that can scale with our company’s growth. You will also lead your own projects to enable self-serve insights to help teams make data-driven decisions.
Ready to apply?
Apply to Anthropic
Wilson Elser is a leading defense litigation law firm with more than 1,400 attorneys in 46 offices throughout the United States. Founded in 1978, we rank among the top 100 law firms identified by The American Lawyer and 36th in the National Law Journal’s survey of the nation’s largest law firms.
The Position
Our DevOps Engineer must have a strong technical background in Azure, development, and operations processes, as our firm pursues an aggressive and exciting cloud strategy. This role will develop streamlined processes and work collaboratively with cross-functional teams of software engineers, data engineers, product and project managers, business analysts, and infrastructure architects. This individual must thrive in a fast-paced environment and be self-motivated, with a passion for problem-solving and innovation. Our IT department is project-focused and is leading a complete digital business transformation for the firm. This position will open new opportunities for teams to learn cutting-edge technologies while ensuring current systems remain well supported.
Key Responsibilities
Qualifications
Education
A variety of factors are considered in making compensation decisions, including but not limited to experience, education, licensure and/or certifications, geographic location, market demands, other business and organizational needs, and other factors permitted by law. Final salary wages offered may be outside of this range based on other reasons and individual circumstances. This position is considered full-time and therefore qualifies for benefits including 401(k) retirement savings plan, medical, dental, vision, disability, and life insurance. Details of participation in these benefit plans will be provided if an employee receives an offer of employment.
Why Should You Apply?
Ready to apply?
Apply to Wilson Elser - Business & Legal Professionals
The Cloud Inference team scales and optimizes Claude to serve the massive audiences of developers and enterprise companies across AWS, GCP, Azure, and future cloud service providers (CSPs). We own the end-to-end product of Claude on each cloud platform, from API integration and intelligent request routing to inference execution, capacity management, and day-to-day operations.
Our engineers are extremely high leverage: we simultaneously drive multiple major revenue streams while optimizing one of Anthropic's most precious resources: compute. As we expand to more cloud platforms, the complexity of managing inference efficiently across providers with different hardware, networking stacks, and operational models grows significantly. We need product-minded backend engineers who can navigate these platform differences, design the services and abstractions that work across providers, and make architectural decisions that keep us reliable and cost-effective at massive scale.
Your work will increase the scale at which our services operate, accelerate our ability to reliably launch new frontier models and innovative features to customers across all platforms, and ensure our LLMs meet rigorous safety, performance, and security standards.
Ready to apply?
Apply to Anthropic
Lightning AI is the company behind PyTorch Lightning. Founded in 2019, we build an end-to-end platform for developing, training, and deploying AI systems—designed to take ideas from research to production with less friction.
Through our merger with Voltage Park, a neocloud and AI Factory, Lightning AI combines developer-first software with cost-efficient, large-scale compute. Teams get the tools they need for experimentation, training, and production inference, with security, observability, and control built in.
We serve solo researchers, startups, and large enterprises. Lightning AI operates globally with offices in New York City, San Francisco, Seattle, and London, and is backed by Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.
Move Fast: We act with speed and precision, breaking down big challenges into achievable steps.
Focus: We complete one goal at a time with care, collaborating as a team to deliver features with precision.
Balance: Sustained performance comes from rest and recovery. We ensure a healthy work-life balance to keep you at your best.
Craftsmanship: Innovation through excellence. Every detail matters, and we take pride in mastering our craft.
Minimal: Simplicity drives our innovation. We eliminate complexity through discipline and focus on what truly matters.
As we continue to expand our footprint, we are looking for a sharp, agile, and business-minded lawyer to join our lean legal team. You will be the primary engine behind our commercial contracting process. This is not a "siloed" role; you will be the cross-functional glue between Finance, Engineering, Sales, and Security, ensuring our legal strategy moves at the speed of our innovation.
This is an ideal opportunity for a high-performing lawyer with a strong grasp of transactional fundamentals who is ready to "level up." While your primary focus will be owning the commercial contract lifecycle, you will have a front-row seat to the broader legal challenges of a hyper-growth AI company.
You will be joining the Legal Team and report to our VP of Legal. This is a hybrid role based in our San Francisco or New York office with an in-office requirement of 2 days per week.
We are committed to offering competitive compensation that reflects the value each team member brings to our mission. Final offers are based on factors such as experience, skills, geographic location, and role expectations. In addition to base salary, our total rewards package for eligible roles includes a discretionary bonus, a meaningful equity component, and comprehensive benefits.
We offer a comprehensive and competitive benefits package designed to support our employees’ health, well-being, and long-term success. Benefits may vary by location, team, and role.
Benefits include:
At Lightning AI, we are committed to fostering an inclusive and diverse workplace. We believe that diverse teams drive innovation and create better products. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other protected characteristic. We are dedicated to building a culture where everyone can thrive and contribute to their fullest potential.
Ready to apply?
Apply to Lightning AI
Associate Counsel / Full Time / NY
(Comp: $200,000 + $50,000 bonus)
Position: Financial Technology Partners is seeking an Associate Counsel to join our small, high-leverage legal team. Reporting directly to the General Counsel, the Associate Counsel will touch nearly every part of the FT Partners legal platform: engagement letters for FinTech clients, employment matters, vendor and SaaS contracts, corporate policies, and the playbooks and AI tooling that scale our practice. The ideal candidate will have demonstrated experience in commercial contracts at a top law firm and a desire to broaden their legal skill set to become a well-rounded in-house lawyer at a boutique investment bank.
Essential Functions:
Basic Qualifications:
Preferred Qualifications:
FT Partners is an equal opportunity employer and is committed to fostering a diverse and inclusive workplace. We welcome applications from all qualified individuals and do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other legally protected characteristic.
Ready to apply?
Apply to Financial Technology Partners
Deadline to apply: None. Applications will be reviewed on a rolling basis.
Ready to apply?
Apply to Anthropic
Anthropic’s model weights and training infrastructure are among the highest-value targets in the technology sector, facing nation-state, supply-chain-interdiction, and insider threats that conventional enterprise security programs were not built to address. As Commercial Counsel, Infrastructure Security for Compute and Infrastructure at Anthropic, you’ll be the day-to-day legal partner to the Chief Security Officer’s infrastructure-security team and the Governance, Risk and Compliance teams. You will own the contractual and regulatory layer of physical and facility security, hardware and supply-chain security, network security, vendor-personnel and insider-risk flow-downs, and security regulatory and assurance work.
You’ll work in close partnership with Anthropic teammates in Frontier, Product, Litigation, Employment, and Commercial Legal, in addition to specialized outside counsel. You will serve as the primary legal owner ensuring security requirements are in the contract before signature and that external work product aligns with Anthropic’s security posture and commercial objectives.
Draft and negotiate security design-basis and site-hardening specifications in build-to-suit, lease, and colo agreements (perimeter, access control, CCTV, intrusion detection); guard-force statements of work, post orders, and KPI regimes; visitor, contractor, and badging policy; and security clauses in shared-campus and multi-tenant arrangements
Own provenance, anti-tamper, and chain-of-custody warranties in silicon, ODM, and OEM paper; trusted-supplier and country-of-origin restrictions; NDAA §889/§5949 and CHIPS-Act guardrail flow-downs; BIS/EAR advanced-computing and semiconductor export-control flow-downs and end-use/end-user certifications; firmware integrity, secure-boot, and golden-image escrow terms; secure logistics; counterfeit-part and grey-market controls; and secure decommissioning and certified media-destruction terms
Draft security schedules in carrier and fiber agreements (encryption-in-transit, route integrity, lawful-intercept handling), and security obligations in peering agreements
Set background-screening, training, and badge-revocation requirements for vendor and contractor personnel with site or hardware access, and flow Anthropic personnel-security standards into guard-force, security-integrator, and EPC vendor MSAs
Support CFIUS and outbound-investment screening on infrastructure vendors and sites, provide NIST/ISO/SOC 2 physical-control evidence for customer and auditor assurance in partnership with security teams; and support security representations in customer contracts that reference physical infrastructure with Commercial Legal
Work closely with specialized outside counsel, ensuring their work product aligns with Anthropic’s security and commercial objectives
Build the function: develop and maintain the security-schedule library, design-basis templates, vendor security questionnaire templates, and negotiation playbooks; train Procurement, Datacenter, and Network teams to apply them at scale
Serve as direct counsel to the CSO’s infrastructure-security organization, coordinating with Product Legal and Litigation on incident response, threat intelligence, law-enforcement and intelligence-community engagement, insider-threat governance, and model-weight security policy under Anthropic’s Responsible Scaling Policy
Escalate novel structures or terms that create downstream risk for Anthropic’s security posture or operational flexibility; ensure security requirements accommodate AI-specific threats including hardware tamper, supply-chain interdiction, and high-value-target facility risk
Monitor and assess the evolving regulatory landscape affecting security and data protection, identifying higher-risk obligations for the business and partnering with security to operationalize them through policies, controls, and compliance programs
Advise on risk assessments, risk acceptance decisions, and reporting to leadership and the board; and review remediation commitments arising from assessments, customer audits, and regulator inquiries
JD and active membership in at least one U.S. state bar
Fluency in security design-basis specifications, guard-force and access-control contracting, and how security schedules interact with build-to-suit, colo, procurement, and carrier agreements
Experience with NDAA §889/§5949, CHIPS-Act guardrails, CFIUS/outbound-investment screening, and trusted-supplier or country-of-origin programs
Comfort with NIST, ISO 27001, and SOC 2 physical-control frameworks and the evidence and attestation process that supports customer and auditor assurance
Ability to coordinate effectively with multiple internal legal teams and specialized outside counsel while maintaining strategic direction
Strong judgment about when contractual security terms create downstream risk for Anthropic’s security posture, audit position, or operational flexibility
Effective collaboration skills for working with the CSO’s organization, procurement, datacenter, and network teams
Communication skills that translate security and supply-chain-integrity concepts into clear risk assessments for business stakeholders
Genuine interest in infrastructure security and appreciation for why physical, hardware, and network security is mission-critical for frontier AI
10-12 years of relevant legal experience with meaningful exposure to physical and facility security contracting, hardware and supply-chain security, network security schedules, or security regulatory and assurance work for critical infrastructure
In-house experience at cloud service providers, hyperscalers, defense and aerospace primes, telecom carriers, utilities, semiconductor companies, or datacenter operators supporting physical-security, supply-chain-security, or security-assurance programs; or U.S. government experience at DoD, DHS/CISA, or BIS, or on CFIUS staff
Experience at large technology companies with first-party datacenter or hardware programs supporting security contracting from the buy side
Law firm experience at practices with national-security, supply-chain, or critical-infrastructure specialization, particularly practices that have worked on NDAA §889, CFIUS, or trusted-supplier matters
Prior involvement in transactions requiring sophisticated understanding of secure logistics, chain-of-custody, firmware integrity, and certified media destruction
Familiarity with CCTV/biometrics privacy regimes, executive-protection contracting, and the contractual layer of insider-risk programs
Ability to obtain and maintain a U.S. security clearance
Role-specific policy: For this role, we expect staff to be able to work from our San Francisco, Seattle, Washington D.C., or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Infrastructure organization is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users — demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.
Cluster Infra owns the full lifecycle of compute clusters at Anthropic. We build agent-driven automation for cluster provisioning and lifecycle management across all major cloud providers and our own datacenters. Our systems stand up clusters that are interconnected with high bandwidth, secure by default, and able to automatically drain and recover in response to failure. As a Staff Engineer on this team, you'll set the technical direction for how Anthropic brings compute online, at a moment when the scale of that compute is growing faster than at almost any other company in the world.
Anthropic is seeking a Program Manager to own and drive our student outreach strategy, establishing Claude as the AI platform of choice for the next generation of builders, researchers, and creators at universities worldwide.
You'll lead flagship programs including the Claude Campus Ambassador Program and Claude Builder Clubs, while developing new initiatives that embody Anthropic's mission of helping humanity navigate the AI transition.
This is a high-ownership role where you'll define the vision, build the infrastructure, and personally drive execution for how Anthropic engages with student communities globally. Beyond keeping programs running smoothly, you'll make real strategic decisions and own the student narrative company-wide.
We're looking for someone who thrives with significant autonomy, can operate effectively with ambiguity, and has the judgment to make consequential calls about where we invest our resources.
At Morgan & Morgan, the work we do matters. For millions of Americans, we’re their last line of defense against insurance companies, large corporations, and defective goods. From attorneys in all 50 states to client support staff, creative marketing, and operations teams, every member of our firm has a key role to play in the winning fight for consumer rights. Our more than 6,000 employees are all united by one mission: For the People.
We're seeking a paralegal to be an integral part of our Mass Tort Law and Briefing Team. This role works closely with a lead attorney and team members. The ability to be self-sufficient and work independently on a day-to-day basis is essential. Successful candidates will be responsible for areas including, but not limited to, the following.
Responsibilities
Qualifications
Benefits
Morgan & Morgan is a leading personal injury law firm dedicated to protecting the people, not the powerful. This success starts with our staff. For full-time employees, we offer an excellent benefits package including medical and dental insurance, 401(k) plan, paid time off and paid holidays.
Equal Opportunity Statement
Morgan & Morgan provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
E-Verify
This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S. If E-Verify cannot confirm that you are authorized to work, this employer is required to give you written instructions and an opportunity to contact Department of Homeland Security (DHS) or Social Security Administration (SSA) so you can begin to resolve the issue before the employer can take any action against you, including terminating your employment. Employers can only use E-Verify once you have accepted a job offer and completed the I-9 Form.
Privacy Policy
Here is a link to Morgan & Morgan's privacy policy.
Ready to apply?
Apply to Morgan & Morgan, P.A.
About the Role
Anthropic is seeking a Certification Development Lead to build and scale a best-in-class certification program for our partner ecosystem. We’ve launched our first certification, the Claude Certified Architect, Foundations (CCA-F), and now we need someone to turn that first credential into a comprehensive, multi-tiered program that our systems integrator, consulting, and technology partners rely on to demonstrate real competence with Claude.
This is a 0-to-1 build. You’ll define the certification strategy, design the credential architecture, and build the program infrastructure from the ground up, all with our partner audience at the center. You’ll work at the intersection of program design and subject matter expertise: identifying what partners need to know, structuring the path to get them there, and collaborating with internal SMEs to ensure the content reflects how Claude actually works and where it’s headed. The systems and tooling that deliver certifications are being built by a separate team; your focus is the program itself, the credentialing framework, the partner experience, and the cross-functional relationships that keep it all connected.
The broader program strategy, including audience planning, partner relationships, and go-to-market alignment, lives within the Partner Enablement team. You’ll sit at the center of that work, collaborating closely with Partnerships, Applied AI, and GTM Productivity to ensure certifications serve both the partners earning them and the Anthropic teams relying on certified partners to extend our reach.
Responsibilities
You May Be a Good Fit If You Have
Strong Candidates May Also Have
We are seeking an Operations Manager to build and lead a new pillar of our Platform Operations function: Community Operations & Executive Escalations. This role will stand up the team, processes, and infrastructure that protect Anthropic's reputation when high-stakes user issues surface on social media or arrive through executive channels. You'll own end-to-end escalation management (detection, triage, incident coordination, and resolution) across two distinct but related workstreams: brand-impacting conversations on public social channels and high-sensitivity inbound from Anthropic employees on behalf of users. The ideal candidate combines calm-under-fire incident judgment with the operational rigor to build durable systems.
Building a Brand starts with a Story
FIRST is a leading global brand experience agency providing Content, Creative, Digital Technology, Management & Delivery, and Data & Insights services. For 30 years, we have utilized three flexible delivery models: Embedded teams, Corporate Campus operations, and a Full-Service Agency. Managing over 37,000 events & projects annually across 110 countries, FIRST serves 130+ brands, specializing in Fortune 500 companies within the Financial Services, Technology, Healthcare, Media, and Professional Services sectors. FIRST is part of the broader Encore family of companies. Encore is a global leader in event production and technology. Consistently recognized for excellence, recent honors include 2026 Chief Marketer Agency of the Year, the Ad Net Zero Award for sustainability, and more.
Learn more at www.firstagency.com
Location: NYC - hybrid 3 days a week, subject to change to 5 days at any time.
What You Would Get To Do
The Associate Project Manager, Portfolio Specialist provides essential support in delivering high-touch hospitality experiences for the Investment Banking Division of a major financial institution. This role assists with executive-level dining and event arrangements, helping to coordinate high-end dining experiences, manage reservations, and liaise with top-tier restaurants to ensure seamless execution for senior banking executives and VIP clients.
Working closely with the broader events team, this position requires strong attention to detail, organizational skills, and a commitment to delivering exceptional service. The ideal candidate has a passion for luxury hospitality, thrives in a fast-paced environment, and is eager to contribute to a world-class client experience.
Your Contributions
Concierge & Event Support
What We Are Looking For
Please be aware that we know experience does not always look like we describe it. If you believe you would be an excellent fit for this role, please apply - we would love to see how you could fit at FIRST. Please contact recruiting@firstagency.com if you need accommodation at any stage of the recruitment process.
At FIRST, we believe that our success is driven by the passion, creativity, and commitment of our teams. That's why we offer a range of benefits and perks to support your personal and professional growth:
United States Residents: The base pay for this position ranges from $70,000 - $91,000. This role is also eligible for an annual discretionary bonus.
FIRST is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind. FIRST is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. All employment decisions are based on business needs, job requirements and individual qualifications without regard to race, color, religion or belief, national, social or ethnic origin, sex (including pregnancy), age, physical, mental or sensory disability, sexual orientation, gender identity and/or expression, marital, civil union or domestic partnership status, past or present military service, family medical history or genetic information, family or parental status, or any other status protected by the laws or regulations in the locations where we operate. We stand firm: FIRST will not tolerate discrimination or harassment based on any of these characteristics. If any candidate feels that they are unable to meet the requirements of the role, please get in touch to discuss adjustments or other suitable roles.
Ready to apply?
Apply to First Global Management Services, Inc.
As a Product Support Specialist, you’ll be at the front lines of safely delivering AI to the world by responding to, investigating, and tracking user needs in your day to day. Additionally, you’ll help us identify – and close – gaps in our team’s technical knowledge, provide high-touch support to strategic customers, and demonstrate deep care for how we systematically support customers at scale.
While not required, we're particularly excited about candidates with one or more of these specializations:
We're hiring across a range of experience levels to build depth in both technical capabilities and enterprise support.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Ready to apply?
Apply to Anthropic
Secure Every Identity, from AI to Human
Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organizations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.
This is an opportunity to do career-defining work. We're all in on this mission. If you are too, let's talk.
Okta is the World’s Identity Company. As a cloud-first SaaS provider, our mission is to enable any organization to use any technology by providing world-class security and identity solutions. The Okta Legal team is a high-impact group of professionals dedicated to navigating the complex landscape of technology law while fueling innovation. We are looking for a sharp, collaborative IP lawyer with a software engineering, computer science, or electrical engineering background to help protect our innovations and brand, and continuously shape and refine our global IP strategy.
More specifically, we seek to hire a Patent Attorney with a software and patent prosecution background and relevant technical degree (CS, EE, or ECE) to work with Okta’s legal and business teams to further develop Okta’s global patent portfolio and corresponding strategy. The scope of the role includes identifying patentable inventions and securing corresponding patent assets, providing expert technical and legal analysis, handling patent challenges, providing guidance on patent transactions and licensing matters, and supporting M&A and litigation.
The ideal candidate will also have experience with, or a strong interest in, handling and supporting all other aspects of Okta’s intellectual property work, including trademark, copyright, and trade secrets, and will support other legal projects as required.
#LI-KC4
#LI-Hybrid
(P19817_3383227)
Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.
The Okta Experience
We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.
Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws.
If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding, please use this Form to request an accommodation.
Notice for New York City Applicants & Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process. In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.
Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/.
Ready to apply?
Apply to Okta
Nextdoor (NYSE: NXDR) is the essential neighborhood network. Neighbors, public agencies, and businesses use Nextdoor to connect around local information that matters in more than 350,000 neighborhoods across 11 countries. Nextdoor builds innovative technology to foster local community, share important news, and create neighborhood connections at scale. Download the app and join the neighborhood at nextdoor.com.
The Counsel team is a close-knit and collaborative team that supports the entire Nextdoor community. We operate with the mindset that a small, high-leverage team can outperform a larger one by investing in the right systems, workflows, and tooling. Each member brings individual expertise and a builder's orientation: we don't just deliver legal advice, we design the infrastructure that makes great legal support scalable. We are reliable, trustworthy, and innovative in our support of Nextdoor’s business goals and our purpose of building stronger local communities.
At Nextdoor, we offer a warm and inclusive work environment that embraces a hybrid employment model, blending an in-office presence and work-from-home experience for our valued employees. The hiring team will go over these expectations with you if you are being considered for a role near one of our offices in San Francisco, Los Angeles, Chicago, Dallas, New York, and London.
As Commercial Counsel at Nextdoor, you’ll be both a trusted commercial lawyer and a systems builder. You'll support our global revenue team on advertising deals and partner with business teams on a range of commercial matters — but equally important, you'll design, build, and continuously improve the workflows, playbooks, templates, and AI-enabled tooling that allow a lean legal team to move at the speed of the business. You will also advise internal clients on the various legal risks, business strategies, and other issues related to commercial transactions and general operations.
Your responsibilities will include:
Compensation, benefits, perks, and recognition programs at Nextdoor come together to create our total rewards package. Compensation will vary depending on your relevant skills, experience, and qualifications.
The starting salary for this role is expected to range from $220,000 to $240,000 on an annualized basis, or potentially higher if your level of proficiency exceeds what is expected for the role. The salary range will be determined by the candidate’s geographic location.
We expect to award a meaningful equity grant for this role. With quarterly vesting, your first vest date will take place within 3 months of your start date.
When it comes to benefits, we have you covered! Nextdoor employees can choose between a variety of health plans, including a 100%-covered employee-only plan option, and we also provide a One Medical membership for concierge care.
At Nextdoor, we empower our employees to build stronger local communities. To create a platform where all feel welcome, we want our workforce to reflect the diversity of the neighbors we serve. We encourage everyone interested in our mission to apply. We do not discriminate on the basis of race, gender, religion, sexual orientation, age, or any other trait that unfairly targets a group of people. In accordance with the San Francisco Fair Chance Ordinance, we always consider qualified applicants with arrest and conviction records.
For information about our collection and use of applicants’ personal information, please see Nextdoor's Personnel Privacy Notice, found here.
Ready to apply?
Apply to Nextdoor
Betterment is a leading, technology-driven financial services company that offers investing, savings, and retirement solutions for retail investors and investment advisors, as well as financial wellness solutions, including a 401(k) for small and medium-sized businesses. Our team is passionate about our mission: to empower people to build wealth with confidence and ease. We’re headquartered in NYC and offer hybrid NY-based positions (four days/week in-office, with no required office days during the summer and winter holidays).
We’re looking for a versatile lawyer to join Betterment’s high-impact legal team. Betterment is making investing smarter, easier, and more accessible for everyone — and this role sits at the center of the action, with responsibilities covering disputes and arbitration, regulatory inquiries, and policy advocacy. The ideal candidate is a talented litigator looking to expand their focus and expertise.
You are:
In this role, you will build and own legal processes to manage and respond to claims and threatened claims. You will help to identify and mitigate legal risk; defend Betterment in litigations, arbitrations, and regulatory proceedings (both directly and by managing outside counsel); analyze legislative and regulatory changes relevant to Betterment’s business; and advise business partners to improve Betterment’s products and services. You will report directly to Betterment’s Head of Regulatory and Policy, and sit alongside Betterment’s product counseling team.
This role is based out of our NYC office. Below we've reflected the base salary range for this position. Actual salaries may vary depending on factors including but not limited to location, experience, and performance. The range listed is just one component of Betterment’s total compensation package for employees.
This job may also be eligible for variable compensation in the form of a company incentive bonus.
We change lives
Join a community of innovators working to transform financial outcomes for real people. Your work will make an impact, always laddering up to our mission: making people's lives better.
We set audacious goals
We set them for the company, our customers, and ourselves—and we won’t stop until we reach them. We don’t just show up; we give our all, then celebrate our wins.
We value all perspectives
When we collaborate, we're at our best. We believe diverse perspectives lead to better outcomes and strive to uphold our supportive and inclusive community.
We invest in you with:
We’ll take a few weeks to review all applications. If we’d like to spend more time with you, we’ll reach out to arrange next steps, which will include 3-4 sets of meetings with your future colleagues.
In the interview process, we’ll look to learn more about your skills, experiences, capabilities, and motivators. Many of our questions will be aimed at understanding how you might operate here at Betterment. Depending on the role, we may ask you to complete a case study exercise or technical assessments, as we want to collect a robust set of data points to better inform our decisions.
On average, it takes us around 3-5 weeks to make a hiring decision, depending on your availability and sense of urgency. As a best practice, we aim to interview at least 2-3 final round candidates before making a hiring decision. Please note that, as we usually receive an overwhelming number of applications for open positions, we’re unable to offer individual feedback during the interview process.
We recognize that interviewing for a new role is a big deal. We appreciate you considering Betterment as the next step in your career, and our Recruiting Team is here to support and advocate for you through the interview process!
Betterment is dedicated to providing accommodations to candidates upon request. If you need accommodations at any point throughout the interview process, please reach out to your recruiter.
Please note that in any materials you submit, you may redact or remove age-identifying information such as age, date of birth, or dates of school attendance or graduation. You will not be penalized for redacting or removing this information.
Come join us!
We’re an equal opportunity employer and comply with all applicable federal, state, and local fair employment practices laws. We strictly prohibit and do not tolerate discrimination against employees, applicants, or any other covered persons because of race, color, religion, creed, national origin or ancestry, ethnicity, sex, gender (including gender nonconformity and status as a transgender or transsexual individual), sexual orientation, marital status, age, physical or mental disability, citizenship, past, current or prospective service in the uniformed services, predisposing genetic characteristic, domestic violence victim status, arrest records, or any other characteristic protected under applicable federal, state or local law.
E-Verify Statement
Betterment participates in E-Verify. We will provide the Social Security Administration, and if necessary, the Department of Homeland Security, with information from each new employee’s Form I-9 to confirm work authorization. Please note that we do not use this information to pre-screen job applicants.
E-Verify Notice
E-Verify Notice (Spanish)
Right to Work Notice
Right to Work Notice (Spanish)
CPRA Language - CA Applicants ONLY
We collect and store personal information for the purposes of candidate tracking (for this role and future opportunities). The information you provide in the course of completing your application will be kept for up to 24 months. We use tools provided by third-party service providers but do not share candidate information for any purpose other than for recruiting. To access the data collected, request deletion, or update your candidate profile, please email team.hire@betterment.com.
Ready to apply?
Apply to Betterment
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As Anthropic continues to grow, the rigor and sophistication of how we plan our workforce needs to grow with us, and our People Analytics team plays a central role in that work.
We are looking for a Workforce Planning Lead to own how Anthropic plans, forecasts, and tracks its workforce, from the underlying data and tooling, to forecasting models, to the cadence and partnership that connects workforce decisions to business strategy. The person in this seat will shape the next chapter of how workforce planning runs at Anthropic.
The role is highly cross-functional. You will partner closely with Recruiting, Finance, People Partners (HRBPs), talent leaders, and business leaders across the company to understand headcount needs, model tradeoffs, and translate strategy into workforce plans. Strong grounding in people analytics methods, judgment, and a generalist orientation are essential. Scope will at times extend beyond traditional workforce planning into broader People Analytics work.
Workforce planning process and strategic partnership
Lead recurring company-wide headcount planning cycles and the ongoing rhythms of review, monitoring, and adjustment that keep plans connected to the business. Partner with Finance, Recruiting, and talent leaders to translate business strategy into workforce implications, including hiring plans, capacity needs, location considerations, and workforce mix. Help inform where and how Anthropic grows over time. Strengthen the underlying infrastructure that supports planning, including position management, governance of core workforce data, and consistency across our people systems. Be a trusted thought partner when growth plans are set, organizational design is evaluated, or new initiatives are resourced, providing the data and modeling that let leaders make decisions.
Forecasting and scenario modeling
Build and maintain quantitative models for headcount forecasting, attrition projection, hiring plan scenario analysis, and capacity planning. Develop scenario analyses that quantify tradeoffs across growth plans, hiring assumptions, attrition rates, and budget considerations. Identify supply and demand gaps before they become bottlenecks.
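The scenario modeling described above can be sketched in a few lines. The following is a hypothetical illustration with made-up figures and function names, not an Anthropic model: it projects end-of-month headcount from a hiring plan and an assumed monthly attrition rate, so two hiring scenarios can be compared against the same attrition assumption.

```python
# Illustrative headcount scenario model (all figures hypothetical):
# each month, attrition is applied to the existing base, then planned hires are added.

def project_headcount(start, monthly_hires, monthly_attrition_rate):
    """Project headcount month by month.

    start: current headcount
    monthly_hires: planned hires for each month, in order
    monthly_attrition_rate: fraction of staff assumed to leave each month
    Returns the projected end-of-month headcount for each month.
    """
    headcount = start
    projection = []
    for hires in monthly_hires:
        headcount = round(headcount * (1 - monthly_attrition_rate)) + hires
        projection.append(headcount)
    return projection

# Compare a base plan against a more aggressive one under 1% monthly attrition.
base_plan = project_headcount(1000, [40, 40, 40, 40], 0.01)
aggressive = project_headcount(1000, [80, 80, 80, 80], 0.01)
print(base_plan)
print(aggressive)
```

A real model would layer on role mix, location, recruiting capacity, and confidence intervals around the attrition assumption; the structure stays the same: explicit assumptions in, comparable scenarios out.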
Analytics, dashboards, and insight
Build self-service dashboards for executives, functional leaders, and managers showing current state, plan versus actual, pipeline health, and key workforce metrics. Develop predictive analytics including attrition risk, hiring velocity forecasting, and supply/demand gap identification. Translate analysis into clear narratives that drive decisions.
Cross-functional execution
Provide hands-on operational support during structural change, including reorganizations, new team formation, and rapid scaling of a function. Support broader People Analytics workstreams as needed. This role will frequently touch attrition, recruiting funnel, location analytics, and other people data.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
At Curaleaf, we’re redefining the cannabis industry with a strong commitment to quality, expertise, and innovation. As a leading global cannabis provider, our brands—including Curaleaf, Select, and Grassroots—offer premium products and services in both medical and adult-use markets.
Join us at Curaleaf to be part of a high-growth, purpose-driven company that champions corporate social responsibility through our Rooted in Good initiative, supporting community outreach and positive change. Here, you’ll have the opportunity to make a meaningful impact, drive innovation, and help shape the future of cannabis.
Title: Regional Activation Manager
Location: New York, NY / Stamford, CT
Job Type: Full Time | Exempt
About the Role: The Regional Activation Manager drives localized marketing initiatives in support of Curaleaf’s sales teams within an assigned region. This remote role partners closely with national brand teams, providing strategic input on promotions and activation plans while ensuring consistent brand execution across the route-to-market (RTM).
The ideal candidate is a creative, detail-oriented marketer who thrives on bringing brands to life through regional activations, events, and partnerships—all while balancing strategy, execution, and compliance in a highly regulated industry.
What You’ll Do:
Regional Marketing Execution
Channel, Outlet & Event Activation
Cross-Functional Collaboration
Promotions & Product Launches
Customer Engagement & Brand Advocacy
Travel Requirement
What You’ll Bring:
Even Better If:
What We Offer:
Benefits vary by state, role type, and eligibility.
Follow us on Social Media:
Instagram: @curaleaf.usa
Twitter: @Curaleaf_Inc
LinkedIn: Curaleaf LinkedIn
Curaleaf Holdings, Inc. (TSX: CURA) (OTCQX: CURLF) ("Curaleaf") is a leading international provider of consumer products in cannabis with a mission to enhance lives by cultivating, sharing, and celebrating the power of the plant. As a high-growth cannabis company known for quality, expertise and reliability, the Company, and its brands, including Curaleaf, Select, Grassroots, Find, and Anthem provide industry-leading service, product selection and accessibility across the medical and adult-use markets. Curaleaf International is powered by a strong presence in all stages of the supply chain. Its unique distribution network throughout Europe, Canada and Australasia brings together pioneering science and research with cutting-edge cultivation, extraction and production.
Our Vision: To be the world's leading cannabis company by consistently delivering superior products and services and driving the global acceptance of cannabis.
Our Values:
Curaleaf is an equal opportunity employer. Curaleaf recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Individuals adversely impacted by the war on drugs are encouraged to apply.
Current Curaleaf employees should apply for open positions through our Internal Job Board, which can be accessed via the link on The Leaf.
***Please note that U.S. immigration rules and regulations are governed by federal law; not state law. Although Curaleaf operates in states that have legalized and decriminalized marijuana, it is still considered a controlled substance under federal regulations and thus illegal under federal law. For this reason, unless and until federal law changes, Curaleaf is prohibited from offering sponsorship for any non-immigrant or immigrant visas.
***Please also note that while candidates who are Lawful Permanent Residents (such as those with Green Cards) are eligible for employment with, and are hired by Curaleaf, working for a company engaged in a business that grows and sells a product that is illegal under federal law might compromise the candidate’s immigration status. Before a Lawful Permanent Resident accepts an offer of employment, Curaleaf recommends that the candidate seek the advice of an immigration lawyer.
Ready to apply?
Apply to Curaleaf
Wilson Elser is a leading defense litigation law firm with more than 1,400 attorneys in 46 offices throughout the United States. Founded in 1978, we rank among the top 100 law firms identified by The American Lawyer and 36th in the National Law Journal’s survey of the nation’s largest law firms.
Our firm is committed to attracting and retaining professionals who value each other and the service we provide by embracing Teamwork, Collaboration, Client Service, and Innovation. If you are a motivated professional looking for a long-term fit where you can grow in a role and be valued and empowered, we invite you to apply to our Billing Coordinator position in our White Plains, NY office.
This position offers a flexible, hybrid working arrangement.
Key Responsibilities:
Qualifications
A variety of factors are considered in making compensation decisions, including but not limited to experience, education, licensure and/or certifications, geographic location, market demands, other business and organizational needs, and other factors permitted by law. The final salary offered may fall outside this range based on individual circumstances. This position is considered full-time and therefore qualifies for benefits including a 401(k) retirement savings plan and medical, dental, vision, disability, and life insurance. Details of participation in these benefit plans will be provided if an employee receives an offer of employment.
Why Should You Apply?
Ready to apply?
Apply to Wilson Elser - Business & Legal Professionals
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
We are looking for Research Engineers to help us redesign how Claude interacts with external data sources. Many of the paradigms for how data and knowledge bases are organized assume human consumers and constraints. This is no longer true in a world of LLMs! Your job will be to design new architectures for how information is organized, and train language models to optimally use those architectures.
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic launches products at lightning speed, and many of them grow at an unprecedented pace, too. For no product is this more true than Claude Code, the fastest-growing software product in history. That said, speed of adoption is not the same as depth of adoption. We’re building a team of Technical Specialists to drive meaningful adoption breadth and depth in our most strategic customers. We’ll do this through delivery of high-quality technical engagement and enablement in the roughly 90 days before and after contract signature.
As a Technical Specialist, you’ll largely engage with customers post-sale. Once an account’s implementation set-up is complete, you’ll convert bottom-up developer passion into org-wide meaningful adoption through bespoke enablement tailored to the customer's stack, repos, and workflows and focused upon deep adoption of proprietary Claude capabilities, which we know drive stickiness.
You'll also engage in strategic pilots before signature — partnering with Sales, Applied AI, and the customer's engineering leads to scope the pilot, run the enablement, and instrument the success criteria that close the deal. Carrying that context from pilot into post-sale is what makes adoption stick.
This is not a technical implementation role — that work sits with our Implementation Specialists. You'll spend your time in front of developers launching Claude Code and department leaders adopting Cowork, on stage at customer events and Anthropic builder summits, and inside Claude Code and Cowork, building the demo apps and reference implementations that prove what's possible.
You’re the kind of person engineers want to grab coffee with after your workshops — credible because you can open a terminal in the meeting and actually drive the tool, exciting because you've thought hard about how agentic AI changes how software gets built, and useful because you can answer the next three questions a senior platform engineer is about to ask about security, scale, or agent behavior.
Drive end-user excitement inside enterprise accounts (primary focus)
Design and deliver customer-specific enablement programs — workshops, office hours, "build your first agent" labs, role-based curricula for engineering, data, platform, security, and knowledge-worker audiences
Drive deep adoption of proprietary Claude Code capabilities that make our tools sticky (subagents, hooks, MCP servers, headless mode, managed settings) tailored to the customer's stack, repos, and actual workflows (CI/CD, IDE integration, source control, secure coding, agentic pipelines)
Enable and excite champions and AI Center of Excellence leads inside customer orgs, arming them to scale the motion without you in the room
Support strategic pilots with pre-sales technical enablement
Partner with Account Executives, Applied AI, and customer engineering leads on pilots for strategic accounts
Carry pilot context into post-sale engagement so the customer’s experience is seamless and deeply focused upon their unique needs
Run hackathons and builder events that create lasting momentum
Design, scope, and run on-site and virtual hackathons inside customer organizations
Bring the playbook: themes, judging rubrics, starter kits, prize structures, and post-event follow-through that turns hackathon prototypes into shipped products
Partner with Anthropic Marketing and Events to amplify the best work from customer hackathons externally where appropriate
Build the demos, content, and reference implementations that sell themselves
Build production-quality demo apps, reference architectures, and code samples on the Claude Developer Platform, Claude Code, and Claude Enterprise (including Cowork) — using the products daily yourself as core infrastructure
Create technical content — tutorials, blog posts, recorded walkthroughs, internal "cookbooks" — tailored to enterprise developer audiences and their real workflows (CI/CD, IDE integration, secure coding, agentic workflows, multi-agent orchestration)
Translate new product capabilities into field-ready content within days of release
Be the trusted technical voice in front of customers
Deliver keynotes, technical talks, and live-coded demos at customer all-hands, internal AI summits, and Anthropic-hosted builder events
Run "ask me anything" sessions with customer engineering organizations and engage credibly on architecture, prompting, agent design, evaluation, developer workflow, and the honest limitations of where AI coding tools are today
Represent Anthropic with our customers' engineering communities and, where appropriate, the broader developer ecosystem
Partner across GTM and back to product
Partner closely with Sales, Applied AI, and Customer Success to sequence engagements that compound (pilot → activation → adoption → value)
Bring back signal from the field: what enterprise developers are asking for, where they're getting stuck, what would unlock the next leap in adoption, and feed it to Product and Engineering
Contribute to the playbook: capture what works in pilots and post-sale enablement, share with peers, raise the floor across the team
3–7+ years in a customer-facing technical role — Solutions Architecture, sales engineering, developer relations, or technical consulting.
You build, and you build with our products. You use Claude Code, the Claude Developer Platform, and Cowork as core infrastructure in your own work. You can ship a working demo over a weekend, steer an agent through a non-trivial task live, and have opinions about what's actually good.
You can hold a room of engineers. You've delivered technical talks, workshops, or training to engineering audiences — platform engineers, ML/AI teams, security-minded enterprise architects — and you know the difference between a session that creates awareness and one that creates champions.
Builder credibility. A career that includes meaningful time as a software engineer, founder, solutions engineer, developer advocate, or Solutions Architect. You've shipped code that other people have depended on.
Pilot and evaluation experience. You've supported technical evaluations or pilots in an enterprise setting — scoping, enablement, handling the security/admin/scale conversations without escalating everything to product.
Strong written and visual communication. Tutorials, decks, code comments, and Slack threads that people actually read.
Project management instincts. You can run a multi-week activation plan — or a pilot against a defined success metric — without dropping the ball.
Willingness to travel. Regular travel for on-site enablement, pilots, hackathons, and customer events. Some evenings and weekends around major events.
Deep enthusiasm for AI with hands-on experience building with LLMs and an instinct for how this technology should be used responsibly inside large organizations.
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic runs some of the largest Kubernetes clusters in the industry. We have fleets of hundreds of thousands of nodes across multiple cloud providers and datacenters to train, research, and serve frontier AI models. The Kubernetes Platform team owns the Kubernetes control plane that makes those clusters work.
We are operating at a scale where the defaults stop working. We own the scheduler and extend it to place topology-sensitive ML workloads across thousands of accelerators at once. We scale the control plane itself — apiserver, etcd, controllers — so it stays responsive as object counts and node counts grow by orders of magnitude. And we build the core cluster services every workload depends on, like service discovery, so they hold up under the same pressure.
We make sure the control plane is fast, correct, and always available. Your work will directly determine whether Anthropic can keep reliably and safely training frontier models as our compute footprint continues to grow.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
Wilson Elser is a leading defense litigation law firm with more than 1,400 attorneys in 46 offices throughout the United States. Founded in 1978, we rank among the top 100 law firms identified by The American Lawyer and 36th in the National Law Journal’s survey of the nation’s largest law firms.
Our firm is committed to attracting and retaining professionals who value each other and the service we provide by embracing Teamwork, Collaboration, Client Service, and Innovation. If you are a motivated professional looking for a long-term fit where you can grow in a role and will be valued and empowered, then we invite you to apply to our Medical Malpractice/Nursing Home Defense Attorney position, reporting to our White Plains or New York City office.
This position offers a flexible, hybrid working arrangement.
The Position
We are seeking a dynamic and highly motivated Medical Malpractice/Nursing Home Defense Attorney to join a leading regional defense team specializing in medical malpractice and healthcare provider litigation. This is an excellent opportunity for an attorney with experience in nursing home defense litigation to continue their career development and professional advancement with a large, national firm.
Key Responsibilities:
Qualifications:
A variety of factors are considered in making compensation decisions, including but not limited to experience, education, licensure and/or certifications, geographic location, market demands, other business and organizational needs, and other factors permitted by law. Final salary wages offered may be outside of this range based on other reasons and individual circumstances. This position is considered full-time and therefore qualifies for benefits including 401(k) retirement savings plan, medical, dental, vision, disability, and life insurance. Details of participation in these benefit plans will be provided if an employee receives an offer of employment.
Why Should You Apply?
Ready to apply?
Apply to Wilson Elser - Attorneys
Anthropic is bringing Claude to democracies worldwide, and we’re looking for startup-minded engineers to build out our public sector engineering team. Engineers in this role will be working across the stack to scale our products from 0 to 1 and beyond.
While we’re looking for candidates with prior public sector experience, we are also excited about hiring product-minded engineers with a track record of building and scaling products. There’s no shortage of things to build, so startup experience will help in maintaining a sharp focus and plotting an optimal path forward. This role is full-stack and entrepreneurial: you’ll own projects such as building out deployments for new government customers, implementing products to better surface critical information, and working directly with customers to understand how we can best leverage and adapt Claude to power their workflows.
You’d be joining at an exciting inflection point where we are evolving past deployments of basic Claude access for the public sector and toward cutting-edge AI capabilities and a comprehensive public sector experience. The projects the team tackles enable governments to deploy AI applications for their most critical operations—from national security to public services. As the team continues to grow, our focus includes delivering solutions that directly influence how democratic institutions deploy and use advanced AI systems.
Build specialized AI applications and products for governments by developing deep understanding of their processes and workflows
Take ownership of architecting new deployments and designing public sector specific features
Collaborate closely with research on AI models for government applications and workflows
Partner directly with government customers and internal go-to-market teams to translate requirements into technical and product roadmaps
Design and build scalable systems for deployment, user management, and administrative controls
Create clarity and technical direction in a fast-moving environment with unique constraints
8+ years of experience as a full-stack software engineer
Experience integrating and working with AI/ML models and understanding their capabilities
Strong technical background with proven success building and shipping enterprise- or government-grade products
Excellent collaboration skills, with the ability to work effectively across functions
Startup experience, particularly in scaling products from zero to one
Experience partnering with sales, customer success, and professional services teams to drive product adoption
Experience delivering software to government networks
Active federal security clearance (Secret or above)
This position requires verification of U.S. citizenship due to citizenship-based legal restrictions. Specifically, this position supports a United States federal, state, and/or local government agency customer and is subject to certain citizenship-based restrictions where required or permitted by applicable law. To meet this legal requirement, citizenship will be verified via a valid passport, or other approved documents, or verified US government clearance.
Anthropic's Infrastructure organization is foundational to our mission of developing AI systems that are reliable, interpretable, and steerable. The systems we build determine how quickly we can train new models, how reliably we can run safety experiments, and how effectively we can scale Claude to millions of users — demonstrating that safe, reliable infrastructure and frontier capabilities can go hand in hand.
Developer Productivity owns the end-to-end experience of how engineers and researchers at Anthropic develop, build, test, and ship code at scale — from the source control and language ecosystems that underpin our monorepo, to the build and CI infrastructure that keeps thousands of daily builds running reliably across multiple cloud providers, to the developer acceleration tooling that deeply integrates Claude into engineering workflows.
Team Matching: Team matching is determined after the interview process based on interview performance, interests, and business priorities. Please note we may also consider you for different Infrastructure teams.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
Anthropic’s mission requires the ability to defend our compute resources against all types of adversaries. Meeting that bar requires more than trustworthy hardware; it requires a software stack to consume the trust primitives the platform produces: device identities, attestation evidence, integrity manifests, DICE chains, topology claims, debug-state assertions, encryption posture, etc.
This role ensures that all security-relevant system state requiring attestation is accounted for robustly and comprehensively, and is used appropriately for authorization decisions in production.
You will be the technical owner of Anthropic's platform trust integration surface, the host-side stack that ingests trust primitives produced by hardware and converts them into production gates for workloads. Concretely:
This is an IC engineering role, not a PM role. You will write code, own services, and be accountable for the correctness of Anthropic's platform trust pipeline in production. You'll operate at the interface between Platform Security Engineering and the rest of Anthropic.
As the Enterprise Events Lead at Anthropic, you will own the development and execution of enterprise-focused events that accelerate pipeline, deepen customer relationships, and elevate Anthropic's brand presence with key enterprise audiences. You will be responsible for designing and delivering high-impact event experiences—from major industry conferences to hosted summits and partner activations.
In this role, you'll collaborate closely with Sales, Partnerships, Product Marketing, and Creative teams to ensure every event touchpoint reinforces Anthropic's position as a leader in safe, frontier AI. You'll bring a strategic mindset grounded in data, a talent for flawless execution under pressure, and the ability to translate complex enterprise goals into compelling in-person and hybrid experiences. This is a high-visibility role with significant cross-functional impact, ideal for someone who thrives at the intersection of strategy and logistics in a fast-moving environment.
As a GTM Strategy & Operations Business Partner supporting the AMER Enterprise organization, you will work closely with sales leadership and become a deep expert in a critical segment of our fastest-growing business. This role supports the AMER Enterprise team, one of Anthropic's largest and most strategically significant commercial organizations, and will partner directly with executives to drive strategic analysis, operational efficiency, and growth initiatives across emerging and developing verticals while supporting critical global strategic initiatives.
This includes segment analysis, facilitating operating cadences and powering business reviews, improving operational efficiency, driving strategic initiatives, and supporting the rapid evolution of our sales motions and organization.
We're building for a future where AI dramatically amplifies what strategy teams can accomplish. We're hiring people with the technical depth, business judgment, and executive potential to operate at that frontier — leveraging AI to eliminate routine work and redirect energy toward the strategic and creative challenges that move the business. This role will evolve rapidly, and we need someone with the ceiling and ambition to evolve with it.
Partner with Global and AMER Enterprise segment leaders to analyze and refine segment-specific strategy including territory planning, resource allocation, performance to plan, and growth initiatives
Support development of targeted strategies across different customer segments and verticals based on their unique characteristics, product-market fit trajectories, and needs
Maintain excellence through business reviews and operating cadences to help achieve company revenue targets and strategic objectives
Be audible-ready on GTM efforts and a deep expert in your segment's book of business and trends, with the ability to cut straight to what matters for audiences ranging from ICs to executives
Analyze sales performance metrics and develop insights to optimize segment performance and improve efficiency using SQL, R, or Python to query, model, and visualize data at scale
Leverage Claude to its full capability
Partner to create, evolve and maintain reporting mechanisms, using data to go from "what" to "why"
Synthesize tangible, strategic recommendations based on go-to-market motions, deal outcomes, partner and customer feedback, and product opportunities
Build and maintain dashboards, models, and ad hoc analyses in spreadsheets and BI tools with mastery-level proficiency
Develop and maintain use case frameworks to capture and standardize how customers derive value
Support multiple teams and executives in leveraging insights for new deals, expansions, and retention strategies
Partner with GTM teams to evolve and refine sales motions and best practices across segments and regions
Serve as a trusted strategic advisor to senior and executive stakeholders across the GTM organization, delivering insights and recommendations with clarity, conviction, and composure
Navigate high-stakes conversations with intense, demanding executives — maintaining poise under pressure while delivering candid, well-supported perspectives
Build deep credibility through consistent follow-through, proactive communication, and the ability to challenge assumptions respectfully when the data warrants it
Translate complex analyses into executive-ready narratives that drive decisions, managing up effectively across multiple senior leaders with competing priorities
Work closely with Revenue Operations, Sales Enablement, Strategic Finance, and Product teams to drive initiatives and elevate excellence in the course of business
Support coordination between AMER Enterprise sales segments and partnership teams
Help prepare materials for executive reviews and planning sessions
Deep alignment to Anthropic’s mission
5+ years of experience in sales strategy, revenue operations, management consulting, or GTM operations
Proficiency in SQL, R, or Python — you can independently write queries, build models, and extract insights from large datasets without relying on others
Mastery of spreadsheet tools and data (Google Sheets / Excel) — pivot tables, complex formulas, data modeling, and scenario analysis are second nature to you
Strong analytical skills with the ability to translate data into actionable insights and compelling executive narratives
Deep experience with GTM systems like Salesforce and business intelligence tools (Looker, Tableau, etc.)
Proven executive stakeholder management skills — you have a track record of presenting to and influencing C-suite and VP-level leaders in high-intensity environments.
An ability to pull signal from noise and an internal drive to create clarity. You strive for simplicity as the most elegant solution to complex situations.
A demonstrated ability to build trust with and influence a variety of stakeholders, including colleagues and other members of the GTM Strategy and Revenue Operations team
A healthy balance of idealism with pragmatism and the highest internal bar for excellence. You have been accountable for the results of your recommendations.
Track record of driving forward high-growth sales organizations
Disciplined project management and exceptional communication and interpersonal abilities
A desire and readiness to do the best work of your career
Bachelor's degree required; MBA or advanced degree preferred
Direct experience in a commercial role (pre-sales, customer-facing, partnerships, founder, etc.)
Experience at both large enterprises (technology, finance, or professional services) and fast-paced, early-stage startups
Experience with AI/ML companies and consumption business models
Prior experience building frameworks for sales use cases or value propositions
Background in scaling sales organizations through 2-3x+ growth periods
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
Ready to apply?
Apply to Anthropic
At Morgan & Morgan, the work we do matters. For millions of Americans, we’re their last line of defense against insurance companies, large corporations, and defective goods. From attorneys in all 50 states to client support staff, creative marketing, and operations teams, every member of our firm has a key role to play in the winning fight for consumer rights. Our over 6,000 employees are all united by one mission: For the People.
Some highlights of our growth are as follows:
1. Of the $25 billion+ we have recovered on behalf of the people since 1988, we have recouped over $8 billion in the past 2 years alone, across all 50 states, Washington DC, and the US Virgin Islands.
2. We have doubled our national mega-marketing campaigns from $250M to $500M+ since 2023, making Morgan & Morgan the number 1 company in the nation for out-of-home marketing spend. This past October, we proudly hired Geico Insurance’s former Head of Auto Marketing to serve as our new Chief Marketing Officer and continue our advancement.
3. Unlike the vast majority of our competitors, Morgan & Morgan is proud to employ attorneys who can truly call themselves trial attorneys. Last year, we tried 320 cases. In the coming years, we will double that annual number. Attorneys at Morgan & Morgan are not only expected to get trial experience and frequent exposure to the courtroom on their and their colleagues’ cases, they are required to. If you are seeking the opportunity to try cases frequently, we are confident that we can and will deliver.
4. Our attorneys have access to world-class case docket management software, litigation resources, and a wealth of benefits in service of optimization and case development.
Summary
Morgan & Morgan is looking for an experienced Personal Injury Litigation Attorney to join its growing practice of plaintiff's lawyers and supporting case staff in our office in New York City. The Litigation Attorney will handle motor vehicle liability / automobile accident cases in litigation, representing plaintiffs in New York. The lawyer's responsibilities will focus on litigating cases, including trial prep and trial, but will generally not include pre-suit matters. Our law firm's generous and competitive compensation includes a base salary, non-recoverable draw, and standard commission fees / bonuses, as well as a full benefits package.
Responsibilities
Qualifications
Benefits
Morgan & Morgan is a leading personal injury law firm dedicated to protecting the people, not the powerful. This success starts with our staff. For full-time employees, we offer an excellent benefits package including medical and dental insurance, 401(k) plan, paid time off and paid holidays.
Equal Opportunity Statement
Morgan & Morgan provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
E-Verify
This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S. If E-Verify cannot confirm that you are authorized to work, this employer is required to give you written instructions and an opportunity to contact Department of Homeland Security (DHS) or Social Security Administration (SSA) so you can begin to resolve the issue before the employer can take any action against you, including terminating your employment. Employers can only use E-Verify once you have accepted a job offer and completed the I-9 Form.
Privacy Policy
Here is a link to Morgan & Morgan's privacy policy.
Ready to apply?
Apply to Morgan & Morgan, P.A.
This role will primarily support HelloFresh’s Factor brand with site locations in Illinois and Arizona.
This job description is intended to provide a general overview of the responsibilities. However, the Company reserves the right to adjust, modify, or reassign work tasks and responsibilities as needed to meet changing business needs, operational requirements, or other factors.
Ready to apply?
Apply to HelloFresh
We are looking for experienced software engineers to join our Platform organization. We build the foundational primitives that accelerate product development across Anthropic, and we own infrastructure and systems that teams depend on to ship reliably and at scale, used both internally and by hundreds of thousands of external users and companies at every stage of growth.
You'll independently scope complex, multi-month projects, drive cross-org alignment through ambiguous problem spaces, and make architectural decisions that shape how Anthropic builds and scales its products. You'll partner directly with research to productize cutting-edge capabilities, and will have lasting impact on the platform that hundreds of thousands of companies and internal/external engineers depend on every day.
We have multiple teams that are currently hiring. Team placement occurs after the interview process, taking into account your interests and experience alongside organizational needs. This flexible approach allows us to match talented engineers with the backend product efforts where they'll have the greatest impact and growth potential.
Platform Acceleration: We work on maximizing the developer productivity of product engineers at Anthropic. You'll architect and optimize the critical development infrastructure that powers our AI product development, including dev environments, observability, and CI/CD pipelines. You’ll partner closely with product teams to understand their development workflow and eliminate friction points. Your work will have an extraordinary multiplier effect, enhancing productivity across our entire product organization and accelerating our mission.
Service Infra: We build and maintain the core infrastructure that powers Anthropic's engineering organization, from service mesh and observability systems to deployment pipelines and shared libraries. Our work enables product teams to build and operate reliable services at scale, making us a critical force multiplier across the entire company.
Multicloud: We build and maintain the infrastructure that enables Anthropic to operate across multiple cloud providers. We focus on cloud-agnostic tooling, cross-cloud networking, and multi-region deployments.
Auth & Identity: We build and maintain the critical infrastructure that powers identity and authentication across Anthropic's product suite. We work closely with product teams, security, support, and trust & safety as customers. We create scalable solutions for user authentication, authorization, role-based access control, and single sign-on that form the backbone of our company's identity management operations. We maintain a user-centric approach, building reliable systems that our users and company can depend on as we tackle complex challenges at the intersection of security, scalability, and user experience.
Connectivity: Our mission is to make Claude the most connected AI. We own the MCP proxy that routes every tool call and the OAuth and token management that keeps connections authenticated. We're also the core contributors to the MCP spec — now an open standard under the Linux Foundation — and maintain the official Python and TypeScript SDKs. You'll work on problems where reliability and enterprise trust are the bar: token refresh at scale, admin controls that let IT govern what agents can do, proxy infrastructure that stays up when partner servers don't. We ship for claude.ai, Claude Code, Cowork, and the API. Relevant experience includes OAuth, API gateways, multi-tenant platforms, building for enterprise, and MCP.
API Distributability: The Claude API today is a rapidly growing platform serving developers and enterprises at scale—but reaching the next tier of enterprise customers requires transforming how and where we deploy it. The Distributability team owns that transformation: making the Claude API a cloud-native, managed product that runs wherever our customers need it, cross-cloud and on Anthropic's own infrastructure, with the enterprise-grade security, compliance, and operational capabilities to support it.
Platform Intelligence: We build the training systems that adapt Claude to specific customer workloads. The core problem is task-specific adaptation: getting the right intelligence, cost, and latency profile for a particular use case, and building toward systems where that adaptation can deepen as the customer's usage grows. We work closely with research on training methods and with agent platform teams on data paths. Relevant experience: ML training infra, production ML pipelines, backend engineering. Finetuning experience is a plus.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
Location Preference: Preference will be given to candidates based in NY, SEA, SF, or the Bay Area, given the team's current locations.
Ready to apply?
Apply to Anthropic
Our mandate is to make inference deployment boring and unattended.
Anthropic serves Claude to millions of users across GPUs, TPUs, and Trainium — and every model update must reach production safely, quickly, and without disrupting service. We're building the systems that make inference deployment continuous and unattended.
As a Software Engineer on the Launch Engineering team, you'll design and build the deployment infrastructure that moves inference code from merge to production. This is a resource-constrained optimization problem at its core: validation and deployment consume the same accelerator chips that serve customer traffic — your deploys compete with live user requests for the same hardware. Every model brings different fleet sizes, startup times, and correctness requirements, so the system must adapt continuously. You'll build systems that navigate these constraints — orchestrating validation, scheduling deployments intelligently, and driving down cycle time from merge to production.
If you've built deployment systems at scale and gravitate toward the hardest problems at the intersection of automation and resource management, this team will give you an outsized scope to work on them.
Ready to apply?
Apply to Anthropic