All active Lawyer roles based in Washington.
Pick a job to read the details
Tap any role on the left — its description and apply link will open here.
Share this job
We are seeking a technology transactions lawyer to to support our client, one of the largest tech companies in the world. In this role, you will provide legal support related to technology transactions and product counseling, including the negotiation of technology and services contracts with public sector customers. This position offers the opportunity to support cloud services and related public sector accreditation matters, including FedRAMP and CJIS.
The estimated total compensation for full-time Axiom attorney roles is $166,000-240,000 per year. Compensation decisions are based on various factors, including, but not limited to, experience, skills, certifications, location, and business needs. Employees may be eligible for additional benefits, including health, dental, and vision insurance; paid holidays and PTO; flexible work arrangements; and professional development opportunities and tools.
For over 25 years, Axiom has pioneered the alternative legal services industry, now serving more than 1,500 legal departments globally with our unique blend of world-class legal talent and advanced AI tools, delivering innovative solutions ranging from that combine top-tier talent with cutting-edge technology. We tackle complex legal matters across 12 practice areas for clients ranging from Fortune 100 to SMBs, empowering our legal professionals to engage in meaningful work that advances their careers. Our Talent NPS score of +71% and our Client NPS score of +64% highlight our commitment to excellence (Reported scores were Axiom’s average for 2025 – the legal industry average NPS score is +35%). Join our forward-thinking community where you’ll keep your career dynamic and multi-faceted, be part of a company that values agility, collaboration, and excellence, and love both the law and your life.
Learn more about life at Axiom.
Axiom is the global leader in high-caliber, on-demand legal talent. Covering North America, the UK, Europe, and APAC, we enable legal departments to drive efficiency and growth and meet the demands of today’s business landscape with best in class alterative legal services.
Diversity is core to our values and we are proud to be an equal opportunity employer. Axiom ensures equal employment opportunity in recruitment and employment, without discrimination or harassment on the basis of race, color, nationality, national or ethnic origin, religious creed or belief, political opinion, sex, gender, gender identity, gender identity status, pregnancy or maternity, age, disability, alienage or citizenship status, marital (or civil or other partnership recognized by law) status, genetic predisposition or carrier status, sexual orientation, military service, or any other characteristic protected by applicable law. Axiom prohibits and will not tolerate any such discrimination or harassment.
Upon request and consistent with applicable laws, Axiom will provide reasonable accommodations for individuals with disabilities who require accommodations to participate in each stage of the recruitment process. To request an accommodation, please contact benefits@axiomlaw.com.
Axiom respects your privacy. For an explanation of the kind of information we collect about you and how it is used, our full data privacy notice is available here.
Employment with Axiom may be contingent upon successful completion of a background check, providing proof of identity, and possessing the necessary legal authorization to work. Pursuant to the San Francisco Fair Chance Ordinance, NY Fair Chance Act, and Los Angeles Fair Chance Initiative, we will consider for employment qualified applicants with arrest and conviction records.
#LI-ML
Ready to apply?
Apply to Axiom Talent Platform
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic’s model weights and training infrastructure are among the highest-value targets in the technology sector, facing nation-state, supply-chain interdiction, and insider threats that conventional enterprise security programs were not built to address. As Commercial Counsel, Infrastructure Security for Compute and Infrastructure at Anthropic, you’ll be the day-to-day legal partner to the Chief Security Officer’s infrastructure-security and the Governance, Risk and Compliance teams. You will own the contractual and regulatory layer of physical and facility security, hardware and supply-chain security, network security, vendor personnel and insider-risk flow-downs, and security regulatory and assurance.
You’ll work in close partnership with Anthropic teammates in Frontier, Product, Litigation, Employment, and Commercial Legal, in addition to specialized outside counsel. You will serve as the primary legal owner ensuring security requirements are in the contract before signature and that external work product aligns with Anthropic’s security posture and commercial objectives.
Draft and negotiate security design-basis and site-hardening specifications in build-to-suit, lease, and colo agreements (perimeter, access control, CCTV, intrusion detection); guard-force statements of work, post orders, and KPI regimes; visitor, contractor, and badging policy; and security clauses in shared-campus and multi-tenant arrangements
Own provenance, anti-tamper, and chain-of-custody warranties in silicon, ODM, and OEM paper; trusted-supplier and country-of-origin restrictions; NDAA §889/§5949 and CHIPS-Act guardrail flow-downs; BIS/EAR advanced-computing and semiconductor export-control flow-downs and end-use/end-user certifications; firmware integrity, secure-boot, and golden-image escrow terms; secure logistics; counterfeit-part and grey-market controls; and secure decommissioning and certified media-destruction terms
Draft security schedules in carrier and fiber agreements (encryption-in-transit, route integrity, lawful-intercept handling), and security obligations in peering agreements
Set background-screening, training, and badge-revocation requirements for vendor and contractor personnel with site or hardware access, and flow Anthropic personnel-security standards into guard-force, security-integrator, and EPC vendor MSAs
Support CFIUS and outbound-investment screening on infrastructure vendors and sites, provide NIST/ISO/SOC 2 physical-control evidence for customer and auditor assurance in partnership with security teams; and support security representations in customer contracts that reference physical infrastructure with Commercial Legal
Work closely with specialized outside counsel, ensuring their work product aligns with Anthropic’s security and commercial objectives
Build the function: develop and maintain the security-schedule library, design-basis templates, advise on vendor security questionnaire templates, and negotiation playbooks; train Procurement, Datacenter, and Network teams to apply them at scale
Serve as direct counsel to the CSO’s infrastructure-security organization, coordinating with Product Legal and Litigation on incident response, threat intelligence, law-enforcement and intelligence-community engagement, insider-threat governance, and model-weight security policy under Anthropic’s Responsible Scaling Policy
Escalate novel structures or terms that create downstream risk for Anthropic’s security posture or operational flexibility; ensure security requirements accommodate AI-specific threats including hardware tamper, supply-chain interdiction, and high-value-target facility risk
Monitor and assess the evolving regulatory landscape affecting security and data protection, identifying higher-risk obligations for the business and partnering with security to operationalize them through policies, controls, and compliance programs
Advise on risk assessments, risk acceptance decisions, and reporting to leadership and the board; and review remediation commitments arising from assessments, customer audits and regulator inquiries.
JD and active membership in at least one U.S. state bar
Fluency in security design-basis specifications, guard-force and access-control contracting, and how security schedules interact with build-to-suit, colo, procurement, and carrier agreements
Experience with NDAA §889/§5949, CHIPS-Act guardrails, CFIUS/outbound-investment screening, and trusted-supplier or country-of-origin programs
Comfort with NIST, ISO 27001, and SOC 2 physical-control frameworks and the evidence and attestation process that supports customer and auditor assurance
Ability to coordinate effectively with multiple internal legal teams, and specialized outside counsel while maintaining strategic direction
Strong judgment about when contractual security terms create downstream risk for Anthropic’s security posture, audit position, or operational flexibility
Effective collaboration skills for working with the CSO’s organization, procurement, datacenter, and network teams
Communication skills that translate security and supply-chain-integrity concepts into clear risk assessments for business stakeholders
Genuine interest in infrastructure security and appreciation for why physical, hardware, and network security is mission-critical for frontier AI
At least 10-12 years of relevant legal experience with meaningful exposure to physical and facility security contracting, hardware and supply-chain security, network security schedules, or security regulatory and assurance work for critical infrastructure
In-house experience at cloud service providers, hyperscalers, defense and aerospace primes, telecom carriers, utilities, semiconductor companies, or datacenter operators supporting physical-security, supply-chain-security, or security-assurance programs; or U.S. government experience at DoD, DHS/CISA, BIS, or CFIUS staff
Experience at large technology companies with first-party datacenter or hardware programs supporting security contracting from the buy side
Law firm experience at practices with national-security, supply-chain, or critical-infrastructure specialization, particularly those who have worked on NDAA §889, CFIUS, or trusted-supplier matters
Prior involvement in transactions requiring sophisticated understanding of secure logistics, chain-of-custody, firmware integrity, and certified media destruction
Familiarity with CCTV/biometrics privacy regimes, executive-protection contracting, and the contractual layer of insider-risk programs
Ability to obtain and maintain a U.S. security clearance
Role-specific policy: For this role, we expect staff to be able to work from our San Francisco, Seattle, Washington D.C., or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
We are currently seeking Government Contract Attorneys.
Ideal candidates will have a strong background in government contracts law, with expertise in advising clients on a wide range of issues related to state, local, and/or federal government contracts. These attorneys will be embedded within in-house legal teams and provide legal advice and support to their existing contracting teams, ensuring compliance with all applicable federal and state laws and regulations.
The estimated total compensation for full-time Axiom attorney roles is $104,500-325,000 per year and for non-attorney roles is $71,250-309,750 per year. Compensation decisions are based on various factors, including, but not limited to, experience, skills, certifications, location, and business needs. Employees may be eligible for additional benefits, including health, dental, and vision insurance; paid holidays and PTO; flexible work arrangements; and professional development opportunities and tools.
For over 25 years, Axiom has pioneered the alternative legal services industry, now serving more than 1,500 legal departments globally with our unique blend of world-class legal talent and advanced AI tools, delivering innovative solutions ranging from that combine top-tier talent with cutting-edge technology. We tackle complex legal matters across 12 practice areas for clients ranging from Fortune 100 to SMBs, empowering our legal professionals to engage in meaningful work that advances their careers. Our Talent NPS score of +71% and our Client NPS score of +64% highlight our commitment to excellence (Reported scores were Axiom’s average for 2025 – the legal industry average NPS score is +35%). Join our forward-thinking community where you’ll keep your career dynamic and multi-faceted, be part of a company that values agility, collaboration, and excellence, and love both the law and your life.
Learn more about life at Axiom.
Axiom is the global leader in high-caliber, on-demand legal talent. Covering North America, the UK, Europe, and APAC, we enable legal departments to drive efficiency and growth and meet the demands of today’s business landscape with best in class alterative legal services.
Diversity is core to our values and we are proud to be an equal opportunity employer. Axiom ensures equal employment opportunity in recruitment and employment, without discrimination or harassment on the basis of race, color, nationality, national or ethnic origin, religious creed or belief, political opinion, sex, gender, gender identity, gender identity status, pregnancy or maternity, age, disability, alienage or citizenship status, marital (or civil or other partnership recognized by law) status, genetic predisposition or carrier status, sexual orientation, military service, or any other characteristic protected by applicable law. Axiom prohibits and will not tolerate any such discrimination or harassment.
Upon request and consistent with applicable laws, Axiom will provide reasonable accommodations for individuals with disabilities who require accommodations to participate in each stage of the recruitment process. To request an accommodation, please contact benefits@axiomlaw.com.
Axiom respects your privacy. For an explanation of the kind of information we collect about you and how it is used, our full data privacy notice is available here.
Employment with Axiom may be contingent upon successful completion of a background check, providing proof of identity, and possessing the necessary legal authorization to work. Pursuant to the San Francisco Fair Chance Ordinance, NY Fair Chance Act, and Los Angeles Fair Chance Initiative, we will consider for employment qualified applicants with arrest and conviction records.
#LI-JG1
Ready to apply?
Apply to Axiom Talent Platform
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As a Product Support Specialist, you’ll be at the front lines of safely delivering AI to the world by responding to, investigating, and tracking user needs in your day to day. Additionally, you’ll help us identify – and close – gaps in our team’s technical knowledge, provide high-touch support to strategic customers, and demonstrate deep care for how we systematically support customers at scale.
While not required, we're particularly excited about candidates with one or more of these specializations:
We're hiring across a range of experience levels to build depth in both technical capabilities and enterprise support.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Secure Every Identity, from AI to Human
Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organizations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.
This is an opportunity to do career-defining work. We're all in on this mission. If you are too, let's talk.
Okta is the World’s Identity Company. As a cloud-first SaaS provider, our mission is to enable any organization to use any technology by providing world-class security and identity solutions. The Okta Legal team is a high-impact group of professionals dedicated to navigating the complex landscape of technology law while fueling innovation. We are looking for a sharp, collaborative IP lawyer with a software engineering, computer science, or electrical engineering background to help protect our innovations and brand, and continuously shape and refine our global IP strategy.
More specifically, we seek to hire a Patent Attorney with a software and patent prosecution background and relevant technical degree (CS, EE, or ECE) to work with Okta’s legal and business teams to further develop Okta’s global patent portfolio and corresponding strategy. The scope of the role includes identifying patentable inventions and securing corresponding patent assets, providing expert technical and legal analysis, handling patent challenges, providing guidance on patent transactions and licensing matters, and supporting M&A and litigation.
The ideal candidate will also have experience with or a strong interest in handling and/or supporting all other aspects of Okta’s intellectual property work, including trademark, copyright, and trade secrets and will support other legal projects as required.
#LI-KC4
#LI-Hybrid
(P19817_3383227)
Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.
The Okta Experience
We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.
Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws.
If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation.
Notice for New York City Applicants & Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process. In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.
Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/.
Ready to apply?
Apply to Okta
Share this job
Secure Every Identity, from AI to Human
Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organizations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.
This is an opportunity to do career-defining work. We're all in on this mission. If you are too, let's talk.
Okta is the World’s Identity Company. As a cloud-first SaaS provider, our mission is to enable any organization to use any technology by providing world-class security and identity solutions. The Okta Legal team is a high-impact group of professionals dedicated to navigating the complex landscape of technology law while fueling innovation. We are looking for a sharp, collaborative IP lawyer with a software engineering, computer science, or electrical engineering background to help protect our innovations and brand, and continuously shape and refine our global IP strategy.
More specifically, we seek to hire a Patent Attorney with a software and patent prosecution background and relevant technical degree (CS, EE, or ECE) to work with Okta’s legal and business teams to further develop Okta’s global patent portfolio and corresponding strategy. The scope of the role includes identifying patentable inventions and securing corresponding patent assets, providing expert technical and legal analysis, handling patent challenges, providing guidance on patent transactions and licensing matters, and supporting M&A and litigation.
The ideal candidate will also have experience with or a strong interest in handling and/or supporting all other aspects of Okta’s intellectual property work, including trademark, copyright, and trade secrets and will support other legal projects as required.
#LI-KC4
#LI-Hybrid
(P19817_3383227)
Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.
The Okta Experience
We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.
Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws.
If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation.
Notice for New York City Applicants & Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process. In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.
Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/.
Ready to apply?
Apply to Okta
Share this job
Secure Every Identity, from AI to Human
Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organizations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.
This is an opportunity to do career-defining work. We're all in on this mission. If you are too, let's talk.
Okta is the World’s Identity Company. As a cloud-first SaaS provider, our mission is to enable any organization to use any technology by providing world-class security and identity solutions. The Okta Legal team is a high-impact group of professionals dedicated to navigating the complex landscape of technology law while fueling innovation. We are looking for a sharp, collaborative IP lawyer with a software engineering, computer science, or electrical engineering background to help protect our innovations and brand, and continuously shape and refine our global IP strategy.
More specifically, we seek to hire a Patent Attorney with a software and patent prosecution background and relevant technical degree (CS, EE, or ECE) to work with Okta’s legal and business teams to further develop Okta’s global patent portfolio and corresponding strategy. The scope of the role includes identifying patentable inventions and securing corresponding patent assets, providing expert technical and legal analysis, handling patent challenges, providing guidance on patent transactions and licensing matters, and supporting M&A and litigation.
The ideal candidate will also have experience with or a strong interest in handling and/or supporting all other aspects of Okta’s intellectual property work, including trademark, copyright, and trade secrets and will support other legal projects as required.
#LI-KC4
#LI-Hybrid
(P19817_3383227)
Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.
The Okta Experience
We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.
Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws.
If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation.
Notice for New York City Applicants & Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process. In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.
Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/.
Ready to apply?
Apply to Okta
Share this job
Secure Every Identity, from AI to Human
Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organizations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.
This is an opportunity to do career-defining work. We're all in on this mission. If you are too, let's talk.
Okta is the World’s Identity Company. As a cloud-first SaaS provider, our mission is to enable any organization to use any technology by providing world-class security and identity solutions. The Okta Legal team is a high-impact group of professionals dedicated to navigating the complex landscape of technology law while fueling innovation. We are looking for a sharp, collaborative IP lawyer with a software engineering, computer science, or electrical engineering background to help protect our innovations and brand, and continuously shape and refine our global IP strategy.
More specifically, we seek to hire a Patent Attorney with a software and patent prosecution background and relevant technical degree (CS, EE, or ECE) to work with Okta’s legal and business teams to further develop Okta’s global patent portfolio and corresponding strategy. The scope of the role includes identifying patentable inventions and securing corresponding patent assets, providing expert technical and legal analysis, handling patent challenges, providing guidance on patent transactions and licensing matters, and supporting M&A and litigation.
The ideal candidate will also have experience with or a strong interest in handling and/or supporting all other aspects of Okta’s intellectual property work, including trademark, copyright, and trade secrets and will support other legal projects as required.
#LI-KC4
#LI-Hybrid
(P19817_3383227)
Below is the annual base salary range for candidates located in California (excluding San Francisco Bay Area), Colorado, Illinois, New York and Washington. Your actual base salary will depend on factors such as your skills, qualifications, experience, and work location. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental and vision insurance, 401(k), flexible spending account, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program please visit: https://rewards.okta.com/us.
The Okta Experience
We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.
Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws.
If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation.
Notice for New York City Applicants & Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process. In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.
Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/.
Ready to apply?
Apply to Okta
Share this job
Get to know The Pokémon Company International
The Pokémon Company International manages the Pokémon property outside of Asia and is responsible for brand management, licensing and marketing, the Pokémon Trading Card Game, the animated TV series, home entertainment, and the official Pokémon website. Pokémon was launched in Japan in 1996 and today is one of the most popular children’s entertainment properties in the world.
Learn more online at corporate.pokemon.com and pokemon.com.
Get to know the role
What you’ll do
What you’ll bring
Base Salary Range: For this role, new hires generally start between $141,000.00 - $187,000.00 per year. The full range is $141,000.00 - $254,000.00 per year. This range is applicable for the labor market where the role is intended to be hired. The final base salary is directly related to the candidate’s qualifications and professional experience uniquely.
#LI-MK1 #LI-Hybrid
How you’ll be successful
What to expect
The above statements are intended to describe the general nature and level of work being performed by people assigned to this role. They are not to be construed as an exhaustive list of all responsibilities, duties, and skills required. Employees may be required to perform duties outside of their normal responsibilities from time to time, as needed. For roles in the United Kingdom, candidates will need the right to work. In some cases, and for some roles, the Company may be able to arrange a visa. For roles in Ireland, this role requires candidates to have the right to live and work in the Republic of Ireland. However, we welcome applications from all nationalities and may consider supporting an employment permit application, in appropriate and suitable cases.
The Pokémon Company International is committed to the inclusion of all qualified applicants for consideration in our job application process. If you require reasonable accommodation to complete a job application, pre-employment testing, or a job interview, or to otherwise participate in the hiring process, please contact the Talent Acquisition team at accommodationrequest_ta@pokemon.com.
Ready to apply?
Apply to The Pokémon Company International
Share this job
Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.
The Legal Operations team is responsible for optimizing the efficiency and effectiveness of the Legal Department at Anduril and works cross-functionally to streamline workflows, automate routine tasks, implement best practices and support regulatory compliance efforts. With the overarching aim of supporting the Company’s overall strategic goals, the Legal Operations team acts as a bridge between legal, the Company’s business units, and outside counsel.
We are looking for a Legal Operations Associate / Paralegal (non-lawyer position) to join our rapidly growing team in Orange County, California or Washington DC. In this role, you will be responsible for providing extensive paralegal, operational, and administrative support in a fast-paced environment to our multi-disciplinary team. You will assist with contracts negotiation, knowledge management, document retention and management, process implementation and optimization, and special projects. A key focus of this role will be maintaining our outside counsel relationships, legal billing operations, and vendor management. This will require skills in legal research, drafting and editing, document management, project management and regulatory compliance. If you are someone who is scrappy, adaptable, loves to learn, and detail-oriented then this role is for you.
The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. Actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. Highly competitive equity grants are included in the majority of full time offers; and are considered part of Anduril's total compensation package. Additionally, Anduril offers top-tier benefits for full-time employees, including:
At Anduril, we invest in our people. Our comprehensive, competitive benefits package (available at little to no cost to employees) ensures you’re supported in health, recovery, and whatever comes next. For more information, Explore Our Benefits.
Anduril is committed to maintaining the integrity of our Talent acquisition process and the security of our candidates. We've observed a rise in sophisticated phishing and fraudulent schemes where individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.
To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:
No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. Our legitimate recruitment is entirely free for candidates.
@anduril.com address.Exercise Caution with Unsolicited Outreach: If you receive any communication that appears suspicious, contains grammatical errors, or makes unusual requests, do not engage. Always confirm the sender's email domain is @anduril.com before providing any personal information or clicking on links.
What to Do If You Suspect Fraud: Should you encounter any questionable or fraudulent outreach claiming to be from Anduril, please report it immediately to contact@anduril.com. Your proactive caution is invaluable in protecting your personal information and upholding the security and trustworthiness of our recruitment efforts.
To view Anduril's candidate data privacy policy, please visit https://anduril.com/applicant-privacy-notice/.
By submitting your application, you consent to Anduril Industries using a third-party service provider to conduct pre-employment risk, integrity, and due diligence screening and assessing potential risks as part of your application process. This third-party service provider provides risk-intelligence services that may include analysis of sanctions and watchlists, adverse media, public-record information, and other lawful open-source or commercial data sources. This third-party service provider does not act as a consumer reporting agency. Use of this provider helps to ensure compliance with applicable laws and protect technology, intellectual property, and organizational security.
Ready to apply?
Apply to Anduril Industries
Share this job
Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a realtime, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.
The Legal Operations team is responsible for optimizing the efficiency and effectiveness of the Legal Department at Anduril and works cross-functionally to streamline workflows, automate routine tasks, implement best practices and support regulatory compliance efforts. With the overarching aim of supporting the Company’s overall strategic goals, the Legal Operations team acts as a bridge between legal, the Company’s business units, and outside counsel.
We are looking for a Legal Operations Associate / Paralegal (non-lawyer position) to join our rapidly growing team in Orange County, California or Washington DC. In this role, you will be responsible for providing extensive paralegal, operational, and administrative support in a fast-paced environment to our multi-disciplinary team. You will help support, lead and develop contracts negotiation, knowledge management, document retention and management, process implementation and optimization, and special projects. This will require skills in legal research, drafting and editing, document management, project management and regulatory compliance. If you are someone who is scrappy, adaptable, loves to learn, and detail-oriented then this role is for you.
The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. Actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. Highly competitive equity grants are included in the majority of full time offers; and are considered part of Anduril's total compensation package. Additionally, Anduril offers top-tier benefits for full-time employees, including:
At Anduril, we invest in our people. Our comprehensive, competitive benefits package (available at little to no cost to employees) ensures you’re supported in health, recovery, and whatever comes next. For more information, Explore Our Benefits.
Anduril is committed to maintaining the integrity of our Talent acquisition process and the security of our candidates. We've observed a rise in sophisticated phishing and fraudulent schemes where individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.
To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:
No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. Our legitimate recruitment is entirely free for candidates.
@anduril.com address.Exercise Caution with Unsolicited Outreach: If you receive any communication that appears suspicious, contains grammatical errors, or makes unusual requests, do not engage. Always confirm the sender's email domain is @anduril.com before providing any personal information or clicking on links.
What to Do If You Suspect Fraud: Should you encounter any questionable or fraudulent outreach claiming to be from Anduril, please report it immediately to contact@anduril.com. Your proactive caution is invaluable in protecting your personal information and upholding the security and trustworthiness of our recruitment efforts.
To view Anduril's candidate data privacy policy, please visit https://anduril.com/applicant-privacy-notice/.
By submitting your application, you consent to Anduril Industries using a third-party service provider to conduct pre-employment risk, integrity, and due diligence screening and assessing potential risks as part of your application process. This third-party service provider provides risk-intelligence services that may include analysis of sanctions and watchlists, adverse media, public-record information, and other lawful open-source or commercial data sources. This third-party service provider does not act as a consumer reporting agency. Use of this provider helps to ensure compliance with applicable laws and protect technology, intellectual property, and organizational security.
Ready to apply?
Apply to Anduril Industries
Share this job
At Morgan & Morgan, the work we do matters. For millions of Americans, we’re their last line of defense against insurance companies, large corporations or defective goods. From attorneys in all 50 states, to client support staff, creative marketing to operations teams, every member of our firm has a key role to play in the winning fight for consumer rights. Our over 6,000 employees are all united by one mission: For the People.
Morgan & Morgan, P.A. is seeking a lawyer to serve as a Pre-Suit focused Attorney in our office in Washington, DC.
Responsibilities
Requirements
Benefits
Morgan & Morgan is a leading personal injury law firm dedicated to protecting the people, not the powerful. This success starts with our staff. For full-time employees, we offer an excellent benefits package including medical and dental insurance, 401(k) plan, paid time off and paid holidays.
Equal Opportunity Statement
Morgan & Morgan provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
E-Verify
This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S. If E-Verify cannot confirm that you are authorized to work, this employer is required to give you written instructions and an opportunity to contact Department of Homeland Security (DHS) or Social Security Administration (SSA) so you can begin to resolve the issue before the employer can take any action against you, including terminating your employment. Employers can only use E-Verify once you have accepted a job offer and completed the I-9 Form.
Privacy Policy
Here is a link to Morgan & Morgan's privacy policy.
Ready to apply?
Apply to Morgan & Morgan, P.A.
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
Our Regulatory Programs pillar is a key pillar of our overall Integrity & Compliance function and covers a range of compliance domain areas including economic sanctions, US export controls, and regulatory compliance programs stemming from global AI safety regulation.
As a Content Moderation Specialist, you'll own day-to-day program management of Anthropic's global content moderation and online safety regulatory compliance program. Online safety regulation is one of the fastest-moving areas of technology law, and AI sits squarely in its sights. Regimes including the EU Digital Services Act, the UK Online Safety Act, the Australia Online Safety Act, and a growing set of emerging frameworks globally create novel obligations for how AI products are built, deployed, and governed. You will be at the forefront of translating those obligations into a defensible, well-documented compliance program — with regulatory risk assessments as the core of the work.
This is a deeply cross-functional role. You'll partner closely with internal counsel, Safeguards, and operations teams across Anthropic to build the compliance program and frameworks that demonstrate Anthropic meets its obligations under content regulation. This is a builder's role at a company that takes integrity seriously and moves fast — you'll exercise independent judgment on issues without clear precedent and help build durable programs that let Anthropic move quickly while honoring its obligations to regulators, customers, and the public.
Own the global content regulation risk assessment program, including the roadmap of required assessments across jurisdictions, a consistent and repeatable risk assessment methodology and framework, and the coordination of inputs, consultation, and approvals for each assessment
Build and maintain systems and trackers to assess, operationalize, and report on relevant regulatory requirements across Anthropic's products and jurisdictions
Partner with internal counsel, Safeguards, Policy, engineering, and operations teams to align internal practices with external commitments and legal obligations
Maintain a controls inventory and the compliance documentation library for content regulation, ensuring documentation is drafted, reviewed by the right stakeholders, and kept current
Conduct gap analysis when new or amended content regulations come into scope, and stand up the compliance readiness plan and workback for each
Provide regular written program status reporting to stakeholders and leadership, proactively surfacing stalled or at-risk items with a proposed path to unblock
Take on additional related work as the program evolves; job duties and responsibilities may change from time to time at Anthropic's discretion or as required by applicable law
Experience managing regulatory or compliance programs at a technology company or in a regulated industry
Hands-on experience conducting or program-managing regulatory risk assessments, including coordinating inputs across multiple functions
Demonstrated ability to build and maintain compliance program artifacts, including policies, risk assessment documentation, controls inventories, program trackers, and readiness plans
A track record of executing cross-functionally, driving outcomes across legal, product, policy, and operations partners without direct authority
Excellent written and verbal communication skills, including producing clear program documentation and status reporting for senior stakeholders
Sound judgment and the ability to make decisions and move work forward with incomplete information in an evolving regulatory environment
5+ years of relevant experience in regulatory program management or content moderation compliance
Direct experience with online safety or content moderation regulation, such as the EU Digital Services Act, UK Online Safety Act, Australia Online Safety Act, or comparable regimes (strongly preferred)
Experience in trust and safety, online safety, or regulatory compliance at a large consumer technology platform
Prior experience in a Big 4 or other professional services firm advising on content regulation, online safety, or platform compliance engagements
Experience designing risk assessment methodologies or compliance frameworks from first principles
Experience with multi-jurisdictional compliance programs in a rapidly scaling environment
Familiarity with how generative AI products intersect with content and online safety regulation
Role-specific policy: For this role, we expect staff to be able to work from either our Washington, DC, San Francisco, or New York City office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
ABOUT GREENPEACE
Greenpeace is a global, independent campaigning organization that uses peaceful protest, creative communication and people power to expose global environmental problems and promote solutions that are essential to a green and peaceful future. In just the past year, we’ve secured a Global Oceans Treaty to protect wildlife and people who depend on healthy oceans. We persuaded the Biden Administration to phase out single-use plastics from federal operations. And we safeguarded healthy spaces, from national marine sanctuaries to bans on seabed mining to stopping liquefied national gas terminals poisoning Gulf Coast communities. Our work spans more than 50 years of fights and successes.
As an independent campaigning organization, Greenpeace does not accept funds from any corporations or government entities. The global Greenpeace network is headquartered in the Netherlands, with offices around the world in over fifty countries. Greenpeace USA is headquartered in Washington, DC.
POSITION SUMMARY
Greenpeace lawyers help change the world for the better by ensuring that Greenpeace organizations are internally strong and well-defended, as well as through proactive litigation and other strategic legal initiatives.
The Assistant General Counsel is charged with supporting the General Counsel and Legal Team in identifying and advising on organizational legal risks and opportunities. This includes working to protect the organization by, among other things, ensuring regulatory compliance, identifying and helping to mitigate risks and legally vetting publications, contracts and other documents.
The Assistant General Counsel may also work with outside counsel to assist in managing litigation to defend activists. Additionally, this role provides legal guidance and training to staff on a wide range of issues with emphasis on non-profit corporate, tax, employment, labor, contract, campaign finance, intellectual property and environmental matters. The Assistant General Counsel may also collaborate with attorneys in the global Greenpeace network to optimize movement lawyering principles, mitigate and reduce cross-organizational liability issues, especially related to global campaign activities.
This Assistant General Counsel is a full-time, hybrid, exempt, benefits-eligible position, preferably based in Washington, D.C. Candidates in other GPUS-authorized locations (DC, MD, VA, NY, NJ, CT) will be considered. This role reports to the General Counsel.
ROLE RESPONSIBILITIES
Legal Guidance:
Legal Compliance:
Risk Assessment and Mitigation:
Communication and Collaboration:
Other Duties:
ROLE REQUIREMENTS
Knowledge and Experience:
Skills, Attributes and Competencies:
Other:
COMPENSATION
Greenpeace offers a competitive benefits package including medical, dental and vision coverage with low contributions from staff. We have options for pre-tax flexible spending accounts for medical and dependent care costs. Greenpeace employees enjoy generous 401K matching and time-off for vacation, sick, personal, and parental leave. After five years, staff are eligible for a paid sabbatical. To encourage public transportation use and biking, we offer subsidies for staff who choose to bike or ride mass transit to work. Greenpeace is committed to the growth and development of staff and fostering a creative workplace culture. For more information, check out our benefits plan.
This role is a graded at a level 3, and a starting salary within the range of $78,668 - $100,014 is anticipated to be provided to the successful candidate having met the minimum requirements for this position consistent with reflecting our commitment to maintaining competitive and fair compensation practices.
If you want to save the world from climate change and believe in environmental and social justice, please submit an application on our careers page. Applications will be reviewed on a rolling basis and will conclude once we have found the best candidate.
Greenpeace is an Equal Opportunity Employer and does not discriminate on the basis of race, color, class, sex, gender, sexual orientation, disability, family/marital status, religion or other protected classes. We strongly encourage applications from marginalized communities. If you require assistance applying to this opportunity, please contact usa-resumes@greenpeace.org.
Only applicants eligible to work and live in the United States without any legal restrictions need apply for this opportunity.
Ready to apply?
Apply to Greenpeace USAShare this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
Within Integrity & Compliance, the Privacy Programs pillar owns how we operationalize privacy across the company — from how we handle personal data in our products and research, to how we meet our obligations under the GDPR, CCPA, and the growing patchwork of global privacy law. We work closely with our Privacy Legal team on all privacy related matters.
We're hiring a Privacy Governance Lead to own the governance backbone of that work. You'll set the strategy for how privacy governance operates at Anthropic, define the policies and controls that translate privacy principles into operating practice, and help manage the relationship with internal and external stakeholders who depend on that framework holding up under scrutiny.
This is a foundational role with significant scope. You'll be shaping a privacy governance function from a relatively early stage, with the autonomy to set the standard and the mandate to drive cross-functional change. You'll partner closely with Privacy Legal, Security, Product, Research, and the wider I&C team, and you'll contribute directly to reporting that reaches the Audit Committee and boards. You'll report to the Head of Integrity & Compliance.
Set the strategy and roadmap for Anthropic's privacy governance framework, including the policies, standards, and internal controls that map to GDPR, CCPA/CPRA, and other applicable global privacy regimes
Own the privacy documentation lifecycle end-to-end — Data Protection Impact Assessments, Records of Processing, Transfer Impact Assessments, and other accountability artifacts — including the methodology, the tooling, and the quality bar
Establish governance forums and approval workflows for privacy-significant product, research, and vendor decisions, and chair the forums where novel or high-risk questions are resolved
Own the privacy controls testing program: define what "good" looks like, set the testing cadence, and present results to the Head of Integrity & Compliance and other leadership forums
Partner with Privacy Legal to anticipate emerging privacy law and translate new obligations into concrete control changes ahead of enforcement
In partnership with Legal, co-lead privacy regulator engagement on governance matters, including responses to inquiries, audits, and complaints
Oversee the management of inputs for regulatory responses with the Privacy Program pillar
Drive privacy training and awareness strategy for engineering, product, research, and go-to-market teams, calibrated to the actual decisions those teams make
Represent the privacy governance function in Internal Audit reporting, and in cross-functional risk and compliance forums
Build and develop the privacy governance team over time
Deep working knowledge of GDPR and at least one major US state privacy regime (CCPA/CPRA, or equivalent), including how their requirements translate into operational controls at scale
Demonstrated track record building, scaling, or transforming a privacy governance program end-to-end — policies, DPIAs, ROPAs, controls libraries, governance forums, and the operating model that supports them
Strong written communication, with the ability to produce clear policies, board-ready reporting, and practical guidance that engineering and product teams will actually use
Comfort owning hard cross-functional decisions and operating across legal, technical, and operational boundaries
A privacy certification such as CIPP/E, CIPP/US, or CIPM, or equivalent demonstrated expertise
Senior privacy governance leadership experience at a technology company operating under multiple privacy regimes simultaneously, ideally including one with novel data processing (AI/ML, large-scale platforms, or similar)
Direct experience engaging privacy regulators, particularly EU data protection authorities or the Irish DPC, on governance matters such as inquiries, audits, or complaints
Familiarity with AI-specific privacy considerations: training data governance, model memorization, output filtering, and the intersection with emerging AI regulation
Experience standing up governance functions in a high-growth environment, including building from a blank page
Demonstrated experience presenting to Audit Committees, boards, or equivalent senior governance bodies on privacy matters
Background that bridges privacy and broader compliance disciplines (security, regulatory, ABAC, enterprise risk management)
Role-specific policy: For this role, we expect staff to be able to work from either our Washington, DC, San Francisco, or New York City office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
The Governance & Oversight pillar is the operational and structural backbone of Integrity & Compliance. It owns the frameworks, policies, controls testing, training, GRC tooling, and reporting that hold the rest of the function together — and that allow Anthropic to demonstrate accountability to employees, customers, regulators, and the public as the company scales.
We're hiring a Compliance Governance & Oversight Lead to set the strategy for how compliance governance works at Anthropic, own the systems and reporting that give leadership a clear line of sight into program health, and partner with the leads of our other I&C pillars — Privacy Programs, Regulatory Programs, and Corporate Compliance — to ensure their work is supported by a coherent operating model.
This is a senior leadership role in a function being built from the ground up. You'll have significant autonomy to shape the design of the pillar, and you'll build and develop the team that runs it. You'll report to the Head of Integrity & Compliance.
Set the strategy for compliance governance and oversight at Anthropic, defining how policies, controls, training, and reporting come together into a coherent operating model across the I&C function
Own the policy lifecycle end-to-end — drafting standards, approval workflows, version control, attestation, and accessibility — and ensure Anthropic's compliance policies are current, coherent, and demonstrably communicated
Lead selection, implementation, and ongoing administration of the GRC platform that serves as the central system of record for I&C, including risk tracking, control documentation, issue management, and workflow
Design and run the controls testing and monitoring program across all I&C pillars, partnering with pillar leads to verify controls are operating as intended and to surface gaps before they become issues
Own the compliance training program, working with subject matter experts across I&C to develop, deliver, and evidence training that meets regulatory requirements and embeds compliance into how the company operates
Drive issue management across the function — ensuring findings, exceptions, and remediation are documented, owned, escalated where needed, and driven to closure
Build the metrics, dashboards, and reporting cadences that give the Head of I&C, the broader leadership team and Internal Audit a clear view of program health, key risks, and strategic priorities
Prepare and present compliance reporting to leadership synthesizing inputs from across pillars into clear narratives
Coordinate the function's responses to compliance-related due diligence requests from customers, partners, and investors
Manage the I&C budget and vendor portfolio, providing the administrative foundation for the function's operations
Build and develop the Governance & Oversight team as the pillar's scope and headcount grow, and embed a culture of practical, well-evidenced compliance across the company
Significant experience in compliance, risk, internal audit, or a closely related field, including leadership of a compliance operations, governance, or oversight function
Demonstrated track record building or substantially scaling a compliance program — policies, controls testing, training, GRC tooling, and reporting — end-to-end, ideally from an early stage
Deep working knowledge of compliance program fundamentals: policy management, training, controls testing, issue management, and program reporting
Experience selecting, implementing, and operating GRC technology platforms, and using technology to make compliance more effective and less burdensome for the wider organization
Track record of preparing and presenting compliance reporting to senior leadership, Internal Audit, and/or board-level governance bodies
Strong written and verbal communication, with the ability to translate substantive compliance requirements into operating processes that engineering, product, and go-to-market teams will actually adopt
Demonstrated ability to lead through influence across a senior, cross-functional stakeholder group, and to make and defend cross-functional decisions
Experience building and developing teams
10+ years of relevant experience
Prior experience at a Big 4 or comparable compliance consulting or advisory professional services firm, in addition to in-house leadership experience
Experience leading compliance governance at a high-growth technology company, ideally one operating across multiple regulatory regimes
Exposure to AI-specific compliance considerations and the emerging regulatory landscape for AI
Experience standing up or transforming a compliance function in a hyper-scaling environment, including building from a blank page
Direct experience presenting to or supporting Audit Committee or board-level reporting on compliance matters
Role-specific policy: For this role, we expect staff to be able to work from either our Washington, DC, San Francisco, or New York City office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
Our Sanctions & Regulatory Programs pillar is a key pillar of our overall Integrity & Compliance function and covers a range of compliance domain areas including economic sanctions, US export controls, and regulatory compliance programs stemming from global AI safety regulation.
As a member of the Sanctions & Regulatory Programs team, you'll be a core driver of Anthropic's trade compliance program, covering both economic sanctions and US export controls, where the rules governing AI models and compute are still being written, and where sanctions and export control exposure runs through globally accessible APIs, downstream and embedded-partner channels, and rapid geographic expansion.
In this role you'll roll up your sleeves across the full sanctions and export controls compliance remit: partnering with internal counsel on the interpretation of applicable law, setting the policy frameworks and escalation standards that govern how Anthropic operates, and advising first-line teams (Operations, Product, Engineering, Go-to-Market, Finance, and Safeguards) on how to implement them. This is a builder's role at a company that takes integrity seriously and moves fast. You'll be at the forefront of novel trade compliance questions in artificial intelligence, exercise independent judgment on issues without clear precedent, and help build durable programs that let Anthropic move quickly while honoring its obligations to regulators, customers, and the public.
Own the policy frameworks for Anthropic's sanctions and export controls programs, and keep them current as regulations and the business evolve
Serve as the day-to-day subject-matter expert on international sanctions and US export controls (e.g., OFAC and other applicable sanctions regimes, EAR classification and advisory, BIS rules, deemed exports, and restricted-party / end-use and end-user controls), partnering with Trade Compliance Counsel on interpretation and with first-line teams on implementation
Conduct sanctions and export controls risk assessments for new products, features, partner and reseller channels, and geographic expansion, and translate findings into clear control requirements
Define screening standards, thresholds, and escalation criteria for restricted-party screening, and partner with Regulatory Operations on program design, edge-case handling, and exception review
Advise on third-party, downstream, and embedded-partner access risk, including contractual trade controls provisions and diligence standards for resellers, marketplaces, and platform partners
Review escalated matters from first-line teams, disposition complex screening and end-use cases, and determine when regulator engagement, licensing, or voluntary self-disclosure is warranted in partnership with Legal
Monitor applicable regulatory developments and enforcement actions, assess impact on Anthropic, and drive any required program changes
Conduct and coordinate testing and assurance over first-line trade compliance controls, and track findings and remediation to closure
Develop trade compliance training and awareness content for relevant employee populations
Prepare program reporting for I&C and Legal leadership on program health, key risks, and emerging issues
Substantive, hands-on experience with economic sanctions compliance (OFAC, EU, UK, and other applicable regimes), including program design, risk assessment, and screening governance, in a commercial environment
Substantive, hands-on experience with US export controls (EAR/BIS), including classification, end-use/end-user analysis, and deemed-export assessment
Experience building or materially maturing a sanctions or export controls compliance program: policies, risk frameworks, controls, testing, and training
A track record of partnering with legal counsel to interpret trade controls requirements and translating them into practical guidance for product, engineering, operations, and commercial teams
Experience reviewing and advising on trade controls provisions in commercial agreements and on third-party / downstream-customer diligence
Excellent written and verbal communication skills, including drafting policy and preparing materials for senior leadership
7+ years of relevant experience in sanctions and/or export controls compliance
Prior experience in a Big 4 or other compliance consulting or advisory professional services firm
Experience in a hyper-scaling tech company or other fast-paced environment
Exposure to trade compliance in software, cloud, or AI contexts
Sound judgment and the ability to make risk-based decisions and move work forward with imperfect information
A leadership style characterized by integrity, humility, strategic thinking, and solution-orientation
Role-specific policy: For this role, we expect staff to be able to work from either our Washington, DC, San Francisco, or New York City office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
Our Corporate Compliance program comprises a key pillar of our overall Integrity and Compliance function and covers a range of compliance domain areas including anti-bribery, anti-corruption, employee conduct and third party management.
As a member of the Corporate Compliance team, you'll be a core driver of a small, high-leverage team enhancing and executing on Anthropic's Corporate Compliance program. This is a generalist seat anchored by deep anti-bribery/anti-corruption expertise. You'll roll up your sleeves across the full Corporate Compliance remit — third-party risk, gifts & hospitality, conflicts of interest, outside business activities, the Code of Conduct, speak-up matters, and internal investigations — and be a subject-matter expert the business looks to on ABAC: setting the standards for third-party risk management, advising on government interactions and high-risk hospitality, and supporting Legal on compliance terms in commercial agreements. You'll also be a thought partner on how we use AI to run a modern compliance function.
This is a builder's role at a company that takes integrity seriously and moves fast. We're looking for a pragmatic, solution-oriented problem-solver who operates comfortably in grey areas, brings a beginner's mind to the job, and holds the line — clearly and credibly — where it matters. You'll work directly with legal, policy, partnerships, and go-to-market teams across multiple regions, exercise independent judgment on novel questions, and help design the frameworks that enable Anthropic to scale responsibly.
Advise business teams on government-official interactions, gifts and hospitality, third-party risk, and related FCPA / UK Bribery Act / local-law questions
Support Legal on anti-corruption, conflict-of-interest, and related compliance terms in commercial agreements
Review and disposition conflicts of interest, outside business activity requests, and gifts & hospitality matters, including escalations and edge cases
Serve as an ABAC subject-matter expert to the owner of the third-party risk management framework — advising on risk tiering, screening standards, ongoing monitoring, and escalation paths
Conduct and support internal investigations into compliance matters, including running discrete matters end-to-end as first chair
Develop policies, guidance, and training; partner across the business on cross-functional compliance topics
Help design and supervise AI-assisted compliance workflows (screening, intake triage, diligence drafting), bringing judgment to what gets automated and what stays human
Substantial depth in anti-corruption / FCPA / third-party risk — gained in-house, at a law firm white-collar/compliance practice, and/or in government (DOJ, SEC, SFO, or similar)
Hands-on experience running or overseeing third-party due diligence end to end: scoping, screening/adverse media, questionnaires, risk analysis, and mitigation
Experience conducting internal investigations — scoping, evidence collection and review, witness interviews, findings memos, and remediation recommendations
Experience advising Legal and deal teams on anti-corruption and related compliance terms and risks in commercial agreements
Clear, practical writing and the ability to give business-usable advice
Sound, independent judgment in ambiguous, fast-moving situations — knowing when to align, when to push, and when to escalate
7+ years of relevant experience
Range across the broader corporate compliance portfolio (Gifts & Hospitality, Conflicts, Outside Business Activities, Code of Conduct, speak-up) and the curiosity to learn what you haven't done yet
High EQ — reads rooms and relationships well and earns trust quickly across stakeholders
Genuine interest in using AI tools to do compliance work better
Experience standing up or maturing a compliance program at a high-growth or technology company
International experience — advising on market entry, partner/reseller programs, or government engagement outside the U.S.
Significant first-chair internal investigations experience, including privileged investigations and presenting findings to senior leadership or a board/audit committee
Familiarity with sanctions/export, political-law, or human-rights/supply-chain risk management (adjacent areas this role will touch)
Role-specific policy: For this role, we expect staff to be able to work from either our Washington, DC, San Francisco, or New York City office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As part of the Anthropic security department, the compliance team owns understanding security and AI safety expectations, as established by regulators, customers, and (nascent) industry norms — which we also seek to influence. The compliance team uses this understanding to provide direction to internal partners on the priorities of security and safety requirements they must meet. The compliance team demonstrates adherence to security expectations through credential attainment, the establishment of assurance and oversight mechanisms, and direct engagement with auditors, customers, and partners.
This opportunity is unique. Anthropic is expanding HIPAA coverage across its product portfolio — including Claude Code, the Claude Developer Platform, and Claude Cowork — and we need to build the compliance infrastructure to match that expansion. We are looking for someone to own HIPAA compliance operations end-to-end, not just advise on it.
Done everything on this list before — we value learning agility and willingness to tackle novel compliance challenges in the AI space
Deadline to Apply: None, applications will be received on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Wilson Elser is a leading defense litigation law firm with more than 1400 attorneys in 46 offices throughout the United States. Founded in 1978, we rank among the top 100 law firms identified by The American Lawyer and 36 in the National Law Journal’s survey of the nation’s largest law firms.
Our firm is committed to attracting and retaining professionals who value each other and the service we provide by embracing Teamwork, Collaboration, Client Service, and Innovation. If you are a motivated professional looking for a long-term fit where you can grow in a role, and will be valued and empowered, then we invite you to apply to our Legal Support Assistant position in our Seattle Office.
This position is on-site 5-days per week in our Seattle office.
The Position
We are seeking a Legal Support Assistant to join our team and serve as a key support resource within a fast-paced law firm environment. This is an excellent entry-level opportunity for someone eager to gain hands-on experience and grow within the legal profession.
Key Responsibilities:
In this role, you’ll support attorneys and Legal Administrative Assistants while helping maintain a professional and welcoming office environment. Responsibilities include:
Qualifications:
A variety of factors are considered in making compensation decisions, including but not limited to experience, education, licensure and/or certifications, geographic location, market demands, other business and organizational needs, and other factors permitted by law. Final salary wages offered may be outside of this range based on other reasons and individual circumstances. This position is considered full-time and therefore qualifies for benefits including 401(k) retirement savings plan, medical, dental, vision, disability, and life insurance. Details of participation in these benefit plans will be provided if an employee receives an offer of employment.
Why Should You Apply?
Ready to apply?
Apply to Wilson Elser - Business & Legal ProfessionalsShare this job
Wilson Elser is a leading defense litigation law firm with more than 1400 attorneys in 46 offices throughout the United States. Founded in 1978, we rank among the top 100 law firms identified by The American Lawyer and 36 in the National Law Journal’s survey of the nation’s largest law firms.
Our firm is committed to attracting and retaining professionals who value each other and the service we provide by embracing Teamwork, Collaboration, Client Service, and Innovation. If you are a motivated professional looking for a long-term fit where you can grow in a role, and will be valued and empowered, then we invite you to apply to our Nurse Paralegal position in our Washington D.C or Baltimore, Maryland office.
The Position
Wilson Elser's Washington D.C. Office seeks a Full Time Nurse Paralegal to join our expanding litigation team. This is an excellent opportunity for healthcare professionals seeking to expand their careers and who are interested in gaining fast-paced professional experience with a national law firm. Paralegal training will be provided if needed.
Key Responsibilities:
Qualifications
Why Should You Apply?
Ready to apply?
Apply to Wilson Elser - Business & Legal ProfessionalsAt Curaleaf, we’re redefining the cannabis industry with a strong commitment to quality, expertise, and innovation. As a leading global cannabis provider, our brands—including Curaleaf, Select, and Grassroots—offer premium products and services in both medical and adult-use markets.
Join us at Curaleaf to be part of a high-growth, purpose-driven company that champions corporate social responsibility through our Rooted in Good initiative, supporting community outreach and positive change. Here, you’ll have the opportunity to make a meaningful impact, drive innovation, and help shape the future of cannabis.
Lead Store Associate
Type of Work: Full-Time
Shift Availability: The candidate must have open availability on weekdays, evenings, weekends, and holidays.
Hourly Pay Rate: $17/hr
Location: 2001 Washington St, Hanover, MA 02339
About the Role:
As a Lead Store Associate, you'll be at the forefront of delivering exceptional experiences to our customers while guiding and inspiring your team. As a key player in our retail operations, you'll serve as a brand ambassador, drive performance, and help shape a welcoming, compliant, and high-performing retail store environment.
What You’ll Do:
Customer Experience & People
Lead by example to create a warm, welcoming environment where every guest feels valued, and coach your team to deliver exceptional service with every interaction.
Empower associates to build meaningful customer relationships through personalized product recommendations and expert guidance.
Drive consultative selling on the floor by encouraging open-ended questions, active listening, and thoughtful product education.
Foster a culture of loyalty by guiding the team to enroll guests in our rewards program and capture customer insights that enhance future experiences.
Support your team in resolving customer concerns with empathy and professionalism, ensuring every challenge becomes an opportunity to build trust.
Promote a respectful, inclusive, and collaborative workplace—where positivity and teamwork fuel great customer experiences and smooth operations at checkout.
Operational Excellence & Processes
Execute opening and closing duties, including daily huddles and cash reconciliation.
Verify customer IDs and maintain accurate POS profiles.
Ensure inventory integrity through precise transaction processing and order fulfillment.
Lead inventory counts and participate in overnight audits.
Maintain a clean, organized, and guest-ready store environment.
Uphold compliance with SOPs, safety protocols, and state regulations.
Sales & Performance
Achieve daily KPI goals through consultative selling and upselling strategies.
Reinforce customer purchase decisions and highlight complementary products.
Stay informed on promotions and product updates to deliver a seamless shopping experience.
Leadership & Team Development
Guide and support store associates in daily operations.
Lead by example in customer service and operational excellence.
Train new hires and conduct ongoing training sessions.
Provide real-time coaching and feedback to drive team performance.
Delegate tasks effectively and act as the go-to leader in the absence of store management.
Support conflict resolution and promote a positive workplace culture.
Adapt quickly in a fast-paced environment and take on additional duties as needed.
What You’ll Bring:
Must be 21 years of age or older.
High School Diploma or General Educational Development (GED) certificate.
A minimum of 1 year of experience in a customer-facing or sales environment.
A minimum of 1 year of experience using point-of-sale (POS) systems and cash handling.
A minimum of 6 months of leadership experience in a retail environment.
Minimum of 6 months of hands-on experience in retail store operations, including but not limited to executing opening and closing procedures, and reconciling cash drawers.
Curaleaf Team Members need a minimum of 6 months of experience as a Store Associate demonstrating consistent performance.
Strong leadership and team-building skills with the ability to coach and motivate others.
Familiarity with retail standard operating procedures (SOPs), safety protocols, and loss prevention practices
High level of attention to detail, especially in cash handling and inventory management.
Excellent customer service and interpersonal communication skills.
Strong problem-solving and decision-making abilities.
Ability to work independently and as part of a team in a high-volume, compliance-driven setting.
Availability to work a flexible schedule, including evenings, weekends, holidays, and occasional overnights.
Multi-tasking with the ability to quickly pivot to other tasks
Comfortable using technology and learning new tools to support operations and customer interactions.
Strong time management and organizational skills.
Strong de-escalation skills and the ability to redirect upset customers to result in a positive guest experience.
Basic proficiency in mathematics and computer applications.
The ability to maintain a solution-driven mindset when dealing with upset customers.
Strong conflict resolution skills and the ability to handle high-stress situations.
Ability to support store leadership by taking initiative and a proactive approach
Ability to work across all stations within the store, based on business needs.
Even Better If You Have:
Previous knowledge of inventory management, systematic restocking of merchandise, fulfillment of online and in-store orders, and conducting daily inventory counts and comprehensive inventory audits to maintain stock accuracy and compliance.
Background in a sales role with a focus on tracking KPIs and meeting sales targets.
Previous experience within a regulated retail setting.
Physical Requirements:
The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job.
While performing the duties of this job, the employee is required to stand for extended periods, regularly stand, walk, talk, or hear, stoop, kneel, crouch, or crawl, use hands and fingers to feel, reach with hands and arms, and lift up to 25 pounds. The employee is occasionally required to climb, balance, and lift up to 50 pounds. This position requires close vision (clear vision at 20 inches or less).
Work Environment:
This position has allergen warnings, including potential exposure to dust, pollen, and plant pathogens, and requires the ability to work in confined spaces, possess high stress tolerance, adaptability, and flexibility to work in an ever-changing environment.
What We Offer:
Benefits vary by state, role type, and eligibility.
Follow us on Social Media:
Instagram: @curaleaf.usa
Twitter: @Curaleaf_Inc
LinkedIn: Curaleaf LinkedIn
Curaleaf Holdings, Inc. (TSX: CURA) (OTCQX: CURLF) ("Curaleaf") is a leading international provider of consumer products in cannabis with a mission to enhance lives by cultivating, sharing, and celebrating the power of the plant. As a high-growth cannabis company known for quality, expertise and reliability, the Company, and its brands, including Curaleaf, Select, Grassroots, Find, and Anthem provide industry-leading service, product selection and accessibility across the medical and adult-use markets. Curaleaf International is powered by a strong presence in all stages of the supply chain. Its unique distribution network throughout Europe, Canada and Australasia brings together pioneering science and research with cutting-edge cultivation, extraction and production. Home | Curaleaf | Cannabis with Confidence
Our Vision: To be the world's leading cannabis company by consistently delivering superior products and services and driving the global acceptance of cannabis.
Our Values:
Curaleaf is an equal opportunity employer. Curaleaf recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Individuals adversely impacted by the war on drugs are encouraged to apply.
Current Curaleaf employees should apply for open positions through our Internal Job Board, which can be accessed via the link on The Leaf.
***Please note that U.S. immigration rules and regulations are governed by federal law; not state law. Although Curaleaf operates in states that have legalized and decriminalized marijuana, it is still considered a controlled substance under federal regulations and thus illegal under federal law. For this reason, unless and until federal law changes, Curaleaf is prohibited from offering sponsorship for any non-immigrant or immigrant visas.
***Please also note that while candidates who are Lawful Permanent Residents (such as those with Green Cards) are eligible for employment with, and are hired by Curaleaf, working for a company engaged in a business that grows and sells a product that is illegal under federal law might compromise the candidate’s immigration status. Before a Lawful Permanent Resident accepts an offer of employment, Curaleaf recommends that the candidate seek the advice of an immigration lawyer.
Ready to apply?
Apply to Curaleaf
Share this job
At Curaleaf, we’re redefining the cannabis industry with a strong commitment to quality, expertise, and innovation. As a leading global cannabis provider, our brands—including Curaleaf, Select, and Grassroots—offer premium products and services in both medical and adult-use markets.
Join us at Curaleaf to be part of a high-growth, purpose-driven company that champions corporate social responsibility through our Rooted in Good initiative, supporting community outreach and positive change. Here, you’ll have the opportunity to make a meaningful impact, drive innovation, and help shape the future of cannabis.
Retail Sales Associate - Part Time
Shift Availability: 10-32 hours/week - availability on weekends strongly preferred
Hourly Pay Rate: $17.00/hr.
Location: 2001 Washington St, Hanover, MA 02339
About the Role:
As a Retail Sales Associate at Curaleaf, you’ll be the face of our brand and the go-to expert for our customers, helping them explore our cannabis products and delivering top-tier service with every interaction.
You’ll play a key role in creating a welcoming, informative, inclusive, and engaging experience, helping guests explore our products and find the right fit for their needs. Whether you’re recommending new items, handling purchases at the register, or chatting with a first-time shopper, you’ll be building trust and showcasing what makes Curaleaf special. If you’re passionate about cannabis, love working with people, and enjoy a fast-paced retail environment, this is the place for you.
What You’ll Do:
Customer Experience & Teamwork
Greet and assist all guests in a warm, professional manner to ensure an exceptional customer experience.
Build strong and genuine connections with customers, ask thoughtful questions, and offer personalized product advice.
Address concerns with professionalism and resolve issues to ensure customer satisfaction.
Encourage loyalty by signing guests up for rewards programs.
Work closely with your team to ensure a respectful, inclusive, and upbeat atmosphere.
Operations & Store Standards
Help open and close the store, handle cash, and follow all procedures and regulations.
Check IDs, update customer profiles, and process transactions accurately.
Keep the store clean, organized, and fully stocked, from the sales floor to the backroom.
Support inventory tasks like restocking, fulfilling online orders, and counting products.
Sales & Performance
Consistently meet (and beat!) daily sales goals by upselling, building trust, and using your product knowledge.
Highlight deals and complementary items at checkout to boost customer satisfaction.
Stay on top of promotions and product updates to give customers a seamless experience.
Stay agile and adaptable in a fast-paced, ever-changing retail environment.
Take on extra duties as needed to support the team and store success.
What You’ll Bring:
A high school diploma or GED.
At least 6 months of experience in a customer-facing or sales role, with a strong focus on service and satisfaction.
A minimum of 6 months of experience using point-of-sale (POS) systems and handling cash transactions accurately.
The ability to thrive both independently and as part of a team in a fast-paced, compliance-driven environment.
Flexibility to work various shifts, including evenings, weekends, holidays, and occasional overnights.
Excellent communication skills, with the ability to connect with customers and team members alike.
Strong multitasking abilities and the agility to shift priorities as needed.
Comfort with technology and a willingness to learn new tools to support daily operations and customer engagement.
Solid time management and organizational skills to stay on top of tasks and responsibilities.
Effective de-escalation techniques and a calm, solution-oriented approach to handling customer concerns.
Basic proficiency in math and computer applications to support transactions and reporting.
A positive, proactive mindset when resolving customer issues, always aiming for a great guest experience.
Strong conflict resolution skills and the ability to remain composed under pressure.
Initiative and a team-first attitude, with a readiness to support store leadership and contribute to overall success.
A natural ability to build rapport and maintain lasting customer relationships.
A collaborative spirit and a commitment to working well within a team environment.
Even Better If You Have:
A background in a sales role with a focus on tracking KPIs and meeting sales targets.
Familiarity with e-commerce systems and order fulfillment processes.
Previous experience within a regulated retail setting.
Physical Requirements:
The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job.
While performing the duties of this Job, the employee is regularly required to stand for extended periods of time, walk, talk or hear, stoop, kneel, crouch or crawl, use hands and fingers to feel, reach with hands and arms, and lift up to 25 pounds. The employee is occasionally required to climb, balance, and lift up to 50 pounds. This position requires close vision (clear vision at 20 inches or less).
Work Environment:
This position has allergen warnings including potential exposure to dust, pollen, and plant pathogens, and requires the ability to work in confined spaces, have a high stress tolerance, adaptability, and be flexible to work in an ever-changing environment.
What We Offer:
Benefits vary by state, role type, and eligibility.
Follow us on Social Media:
Instagram: @curaleaf.usa
Twitter: @Curaleaf_Inc
LinkedIn: Curaleaf LinkedIn
Curaleaf Holdings, Inc. (TSX: CURA) (OTCQX: CURLF) ("Curaleaf") is a leading international provider of consumer products in cannabis with a mission to enhance lives by cultivating, sharing, and celebrating the power of the plant. As a high-growth cannabis company known for quality, expertise and reliability, the Company, and its brands, including Curaleaf, Select, Grassroots, Find, and Anthem provide industry-leading service, product selection and accessibility across the medical and adult-use markets. Curaleaf International is powered by a strong presence in all stages of the supply chain. Its unique distribution network throughout Europe, Canada and Australasia brings together pioneering science and research with cutting-edge cultivation, extraction and production. Home | Curaleaf | Cannabis with Confidence
Our Vision: To be the world's leading cannabis company by consistently delivering superior products and services and driving the global acceptance of cannabis.
Our Values:
Curaleaf is an equal opportunity employer. Curaleaf recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Individuals adversely impacted by the war on drugs are encouraged to apply.
Current Curaleaf employees should apply for open positions through our Internal Job Board, which can be accessed via the link on The Leaf.
***Please note that U.S. immigration rules and regulations are governed by federal law; not state law. Although Curaleaf operates in states that have legalized and decriminalized marijuana, it is still considered a controlled substance under federal regulations and thus illegal under federal law. For this reason, unless and until federal law changes, Curaleaf is prohibited from offering sponsorship for any non-immigrant or immigrant visas.
***Please also note that while candidates who are Lawful Permanent Residents (such as those with Green Cards) are eligible for employment with, and are hired by Curaleaf, working for a company engaged in a business that grows and sells a product that is illegal under federal law might compromise the candidate’s immigration status. Before a Lawful Permanent Resident accepts an offer of employment, Curaleaf recommends that the candidate seek the advice of an immigration lawyer.
Ready to apply?
Apply to Curaleaf
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As Commercial Counsel, Networking & Datacenter Operations for Compute and Infrastructure at Anthropic, you’ll lead legal execution for the connective tissue and the steady-state running of Anthropic’s Infrastructure. This role spans dark fiber and IRU agreements, IP transit and peering, subsea-cable capacity, network-equipment procurement, and the Operations & Maintenance (O&M), facilities-management, commissioning, decommissioning, and resilience contracting that keeps production sites online once the keys are handed over.
You’ll work in close partnership with Anthropic teammates in Datacenter Legal, Strategic Technology & Procurement, Infrastructure Security, Treasury/Risk and Public Policy, in addition to specialized outside counsel. You will serve as the primary legal owner ensuring external work product aligns with Anthropic’s commercial objectives and integrates with our overall compute and availability strategy.
Negotiate dark fiber and IRU agreements (long-haul and metro), wavelength and lit-capacity services, IP transit and peering (settlement-free and paid), internet-exchange participation, subsea-cable capacity and landing-party agreements, carrier-hotel and meet-me-room agreements, and cross-connect MSAs
Support the telecom regulatory interface, including FCC Section 214 authorizations, submarine-cable landing licenses and Team Telecom (EO 13913 Committee) review, the FCC’s 2025 subsea-cable security and IRU rules, private-carrier status determinations, and rights-of-way, franchise, and pole-attachment where Anthropic builds first-party fiber — working in close coordination with Infrastructure Security Legal on the national-security overlay for subsea capacity and landing-party participation
Structure and negotiate O&M and integrated-facilities-management contracts, critical-spares and break-fix agreements, smart-hands SLAs with colo landlords, and disaster recovery and resilience contracting
Own asset disposition, decommissioning, and media-sanitization terms, and data-hall fit-out and tenant-improvement work, in coordination with Infrastructure Security and Datacenter Legal
Support property, builder’s-risk, business-interruption, and equipment-breakdown insurance placement with Treasury/Risk, and coordinate with Infrastructure Security Legal on guard-force, vendor-personnel, and physical-security flow-downs in O&M and facilities-management contracts
Work closely with specialized outside counsel, ensuring their work product aligns with Anthropic’s commercial objectives
Build and own the network and O&M contract template library, SLA-credit playbooks, and intake and approval workflows so Network and Site-Ops teams can self-serve standard transactions; train Procurement and operations partners to apply them at scale
Manage SLA-credit claims, outage and root-cause-related disputes, warranty and break-fix claims, and pre-litigation vendor escalations, in coordination with Litigation
Collaborate with the Compute team’s production-site and network-operations leads who handle operational execution, change management, and governance, providing legal and risk management expertise
Understand how networking and operations legal structures interact with the compute capacity they support — availability SLAs, route diversity, and maintenance windows
Escalate novel structures or terms that create downstream risk for Anthropic’s availability or operational flexibility; ensure vendor arrangements accommodate AI-specific requirements including high-density power, liquid cooling infrastructure, and rapid scaling capacity
JD and active membership in at least one U.S. state bar
Fluency in availability and SLA constructs, route diversity and how network and O&M terms interact with colo leases, build-to-suit agreements, and financing arrangements
Experience with high-volume, template-driven, recurring vendor contracting and the discipline to keep it fast without losing control of risk
Comfort with telecom regulatory concepts (FCC Section 214, cable-landing licenses, Team Telecom review, rights-of-way) and the operational hand-off from construction to commissioning to steady-state
Ability to coordinate effectively with specialized outside counsel while maintaining strategic direction
Strong judgment about when vendor terms create downstream risk for availability, security, or operational flexibility
Effective collaboration skills for working with business teams who run production sites and the network and excellent communication skills that translate network and operations concepts into clear risk assessments for business stakeholders
Genuine interest in digital infrastructure operations and appreciation for why network and site availability is mission-critical for frontier AI
At least 10–12 years of relevant legal experience with meaningful exposure to telecom and network transactions (dark fiber, IRUs, transit, peering, subsea), datacenter operations, facilities management, or critical-infrastructure O&M contracting
In-house experience at cloud service providers, neo-clouds, carriers, subsea-cable consortia, or datacenter operators supporting network, colo, and O&M transactions
Experience at large technology companies with first-party backbone or production-site programs supporting network and operations transactions from the buy side
Law firm experience at practices with telecom, technology transactions, or infrastructure specialization, particularly those who have worked on fiber, peering, subsea, or datacenter operations matters
Prior involvement in transactions requiring sophisticated understanding of commissioning requirements, operational handoff, change management, and decommissioning
Familiarity with insurance and risk-transfer structures used in large-scale operating infrastructure, and with international vendor contracting for non-US production sites
Role-specific policy: For this role, we expect staff to be able to work from our San Francisco, Seattle, Washington D.C., or New York office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period of time.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic is looking for a Technical Enablement Lead to curate & deliver the technical curriculum that lets our go-to-market teams, partner, and customers demonstrate and support Claude Code with confidence. You'll own Claude Code enablement—building demos, labs, and content that help sellers showcase agentic coding. This is an ideal role for an engineer who loves teaching, loves developer tools, and wants to shape how field teams talk about the future of AI-assisted development.
Deliver Claude Code training for onboarding and ongoing enablement programs
Create and maintain training content: demo scripts, hands-on labs, use case libraries, quick-reference guides
Build compelling demo scenarios across industries and developer personas
Develop competitive positioning content—battlecards, differentiation guides, objection handling
Run demo coaching sessions and weekly office hours
Translate new Claude Code features into field-ready content rapidly
Gather field feedback and track content effectiveness to continuously improve programs
7+ years in a technical role (software engineering, solutions engineering, developer advocacy, or technical enablement)
Strong programming skills—you've shipped real code recently, not just slideware
You use AI coding tools (ideally Claude Code) extensively in your own work—not as a novelty, but as core infrastructure for how you build and ship
Track record delivering live technical training or talks
Experience partnering directly with sales, solutions engineering, or GTM teams
Experience coaching others on demos or presentations
Familiarity with enterprise developer workflows
Public speaking, conference presentations, or community-facing developer advocacy
Location & Travel
Location is flexible, with preference for proximity to a major hub (SF, NYC, DC, or Seattle). Regular travel is expected for in-person training delivery and field events.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
The ACLU seeks applicants for the full-time position of Staff Attorney, Senior Staff Attorney, or Senior Counsel in the Criminal Law Reform Project of the ACLU’s National office in New York, NY, San Francisco, CA, or Washington, DC. This is a hybrid role that has in-office requirements of two (2) days per week or eight (8) days per month.
This is a 2-year, limited term position.
The Criminal Law Reform Project’s mission is to advocate for the constitutional and civil rights of those impacted by criminal legal systems. We use litigation and advocacy to confront systemic government conduct that fuels the carceral state and police abuse, and oppresses people based on race, class, and other characteristics. CLRP does this work because everyone suspected, accused, or convicted of a crime deserves dignity, fairness, and an opportunity to thrive.
This position is part of a collective bargaining unit. It is represented by ACLU Staff United (ASU).
Reporting to the Director or the Deputy Director for Policing, this role offers a unique opportunity to pursue impact litigation and integrated advocacy aimed at challenging systemic misconduct by federal and state law enforcement agents in carrying out the Trump Administration’s mass incarceration and deportation agendas. The role may also contribute to other areas of CLRP’s docket as necessary.
This is an ideal opportunity for an attorney committed to racial justice, transformative change, and the power of legal advocacy to shift narratives and systems.
Litigation and Advocacy
Staffing and Management
The ACLU has a litigator scale that determines pay for attorneys in our Legal Department. The range of salaries are the following, based on year of law school graduation (please consult the hiring manager for specific salary details, based on individual circumstances)
The ACLU is committed to equity, transparency, and clarity in pay. These salaries are reflective of positions based in New York, NY where are National Offices are headquartered. Salaries are subject to a regional pay adjustment if authorization is granted to work outside of the location listed in this posting.
For details on our pay structure, please visit: https://www.aclu.org/careers/ACLU_Geographic_Pay_Structure-July_2024.pdf
For over 100 years, the ACLU has worked to defend and preserve the individual rights and liberties guaranteed by the Constitution and laws of the United States. Whether it’s ending mass incarceration, achieving full equality for the LGBTQ+ community, establishing new privacy protections for our digital age, or preserving the right to vote or the right to have an abortion, the ACLU takes up the toughest civil liberties cases and issues to defend all people.
We know that great people make a great organization. We value our people and know that what we offer is essential not just their work, but to their overall well-being.
At the ACLU, we offer a broad range of benefits, which include:
Accessibility, equity, diversity and inclusion are core values of the ACLU and central to our work to advance liberty, equality, and justice for all. For us diversity, equity, accessibility, and inclusion are not just check-the-box activities, but a chance for us to make long-term meaningful change. We are a community committed to learning and growth, humility and grace, transparency and accountability. We believe in a collective responsibility to create a culture of belonging for all people within our organization – one that respects and embraces difference; treats everyone equitably; and empowers our colleagues to do the best work possible. We are as committed to anti-oppression, anti-ableism, and anti-racism internally as we are externally. Because whether we’re in the courts or in the office, we believe ‘We the People’ means all of us.
With this commitment in mind, we strongly encourage applications from all qualified individuals without regard to race, color, religion, gender, sexual orientation, gender identity or expression, age, national origin, marital status, citizenship, disability, veteran status and record of arrest or conviction, or any other characteristic protected by applicable law.
The ACLU is committed to providing reasonable accommodation to individuals with disabilities. If you are a qualified individual with a disability and need assistance applying online, please email benefits.hrdept@aclu.org. If you are selected for an interview, you will receive additional information regarding how to request an accommodation for the interview process.
Ready to apply?
Apply to ACLU - National Office
Share this job
SpaceX was founded under the belief that a future where humanity is out exploring the stars is fundamentally more exciting than one where we are not. Today SpaceX is actively developing the technologies to make this possible, with the ultimate goal of enabling human life on Mars.
SATELLITE POLICY ASSOCIATE (STARLINK REGULATORY AFFAIRS)
One of the most ambitious missions that SpaceX has undertaken to date, the Starlink satellite constellation is our solution to providing fast and reliable internet to the entire world. Our customers currently include teachers in the most remote parts of the Amazon, researchers in Antarctica, digital nomads who travel via RV, emergency responders, commercial airlines, cruise lines, and residential users who live in more rural or developing areas.
Telecommunications is a heavily regulated business and this position will direct SpaceX advocacy to ensure these regulations enable and protect the provision of this ground-breaking service. As a member of the Starlink Regulatory Team, you will advocate Starlink's mission and capability to various regulatory bodies around the world. You will leverage your strong legal expertise to partner with satellite policy advisors and engineers, ultimately influencing and managing the policy environment for both domestic and international portfolios that enable Starlink to serve people and enterprises on a global scale.
RESPONSIBILITIES:
BASIC QUALIFICATIONS:
PREFERRED SKILLS AND EXPERIENCE:
ADDITIONAL REQUIREMENTS:
COMPENSATION AND BENEFITS:
Pay range:
Satellite Policy Associate: $150,000.00 - $215,000.00/per year
Your actual level and base salary will be determined on a case-by-case basis and may vary based on the following considerations: job-related knowledge and skills, education, and experience.
Base salary is just one part of your total rewards package at SpaceX. You may also be eligible for long-term incentives, in the form of company stock, stock options, or long-term cash awards, as well as potential discretionary bonuses and the ability to purchase additional stock at a discount through an Employee Stock Purchase Plan. You will also receive access to comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short and long-term disability insurance, life insurance, paid parental leave, and various other discounts and perks. You may also accrue 3 weeks of paid vacation and will be eligible for 10 or more paid holidays per year. Employees accrue paid sick leave pursuant to Company policy which satisfies or exceeds the accrual, carryover, and use requirements of the law.
ITAR REQUIREMENTS:
SpaceX is an Equal Opportunity Employer; employment with SpaceX is governed on the basis of merit, competence and qualifications and will not be influenced in any manner by race, color, religion, gender, national origin/ethnicity, veteran status, disability status, age, sexual orientation, gender identity, marital status, mental or physical disability or any other legally protected status.
Applicants wishing to view a copy of SpaceX’s Affirmative Action Plan for veterans and individuals with disabilities, or applicants requiring reasonable accommodation to the application/interview process should reach out to EEOCompliance@spacex.com.
Ready to apply?
Apply to SpaceX
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic sits at the frontier of AI development, which makes us one of the most interesting targets in the world for nation-state and advanced criminal actors. The Threat Intelligence function within our Detection & Response team exists to make sure we see them coming. As a Threat Intelligence Engineer, you'll be a hands-on practitioner responsible for producing the actionable intelligence that drives our detections, hunts, and defensive priorities. You'll track the adversaries most likely to target a frontier AI lab, build the tooling and pipelines that turn raw indicators into operational defenses, and work shoulder-to-shoulder with detection engineers and incident responders to make sure intelligence actually changes outcomes. This is a builder's role on a small, high-leverage team — you'll have broad latitude to shape how threat intelligence is collected, analyzed, and operationalized at Anthropic.
Deadline to apply: None. Applications will be received on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic’s Global Safety, Intelligence, and Security (GSIS) team is responsible for protecting our people, facilities, and operations worldwide. The function spans intelligence, executive protection, investigations, travel and event security, security technology, security operations, environmental health and safety, and crisis management — and it’s growing quickly as Anthropic’s footprint, public profile, and threat surface all expand.
We’re looking for a senior operational leader to own the day-to-day resilience of every Anthropic location worldwide. Reporting to the Head of GSIS, you’ll lead three tightly connected functions — physical security operations, environmental health and safety, and event security — across a global office portfolio that includes specialized environments such as research labs. You’ll manage a growing team of practitioners and an existing EHS program, and you’ll be the person leadership turns to when something needs to be safe, secure, and operational, every day, in every location.
This is an unapologetically operational role. You’ll set the standards, run the programs, manage the vendors, own the metrics, and answer for outcomes when something goes wrong at 2am in a different time zone. You’ll partner closely with Real Estate and Workplace, People, Legal, IT, and the rest of the GSIS leadership team — and you’ll work alongside the Head of Protective Services on high-profile events where the threat picture and the operational footprint converge. The right person is equally comfortable writing an OSHA-compliant program from scratch, walking a new office build with a contractor, briefing an executive on event-day security posture, and coaching a guard force supervisor through an incident debrief.
Lead global physical security operations
Run a true Environmental Health and Safety program
Own event security globally
Lead and grow the team
This role is for operators who take pride in the unglamorous work of keeping people safe and operations running — if you want ownership, scope, and a team worth building, we’d like to meet you.
Deadline to apply: None. Applications will be received on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As an Applied AI team member at Anthropic, you will be a Pre-Sales architect focused on becoming a trusted technical advisor helping state and local government agencies understand the value of Claude and paint the vision on how they can successfully integrate and deploy Claude into their technology stack. You'll combine your deep technical expertise with customer-facing skills to architect innovative LLM solutions that address complex mission challenges while maintaining our high standards for safety and reliability.
Working closely with our Sales, Product, and Engineering teams, you'll guide customers from initial technical discovery through successful deployment. You'll leverage your expertise to help customers understand Claude's capabilities, develop evals, and design scalable architectures that maximize the value of our AI systems.
Partner with account executives to deeply understand customer requirements and translate them into technical solutions, ensuring alignment between business objectives and technical implementation
Serve as the primary technical advisor to enterprise customers throughout their Claude adoption journey, from discovery to initial evaluation through deployment. You will need to coordinate internally across multiple teams & stakeholders to drive customer success
Support customers building with Claude Code, the Claude API, and Claude for Enterprise
Create and deliver compelling technical content tailored to different audiences. You will need to be able to spread the gamut from technical deep dives for engineering & development teams up to business value focused conversations with executives
Guide technical architecture decisions and help customers integrate Claude effectively into their existing technology stack
Help customers develop evaluation frameworks to measure Claude's performance for their specific use cases
Identify common integration patterns and contribute insights back to our Product and Engineering teams
Travel frequently to customer sites for workshops, technical deep dives, and relationship building
Maintain strong knowledge of the latest developments in LLM capabilities and implementation patterns
Must have prior experience working with US federal, state, and/or local agencies
5+ years of experience in technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager
Experience navigating complex buying cycles involving multiple stakeholders
Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering & IT teams, and more
Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders
Experience designing scalable cloud architectures and integrating with enterprise systems
Familiar with Python
Familiarity with common LLM frameworks and tools or a background in machine learning or data science
Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities
A love of teaching, mentoring, and helping others succeed
Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders. You enjoy engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities
Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic is building the Applied AI team to help the world's most ambitious organizations adopt Claude in ways that are transformative, safe, and lasting. Our Forward Deployed Engineers embed directly with strategic customers to ship production AI applications that solve real business problems. As this practice grows, we need a founding leader to build and scale it.
As the first FDE Manager, you'll own the team that sits at the frontier of enterprise AI deployment. You'll hire and develop a high-performing team of FDEs, set the technical and operational bar for customer engagements, and build the playbooks that turn one-off successes into repeatable patterns. You'll work hand-in-hand with Engagement Managers who own delivery logistics and stakeholder management, while you ensure your team is shipping quality code, growing technically, and representing Anthropic at the highest level in customer environments.
This is a player-coach role with a strong bias toward leadership. You'll stay close enough to the technical work to review architectures, debug production issues, and pair with your team when it matters — but your primary impact will come from the people you hire, the standards you set, and the culture you create. You'll partner cross-functionally with Sales, Product, and Engineering to shape how Anthropic serves its most strategic customers, and your team's field insights will directly influence product direction.
This role requires someone who thrives in ambiguity and is energized by building from zero to one. You'll be defining what good looks like for FDE management at Anthropic — there is no existing playbook to follow.
You'll be a founding leader of a team that defines how enterprises adopt and scale AI. Your work will directly influence Anthropic's product direction, create reusable patterns for the broader customer base, and establish Anthropic as the trusted partner for AI transformation — all while advancing the responsible development of frontier AI systems.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As a Federal Partners Account Executive at Anthropic, you'll drive revenue by selling our safe, frontier AI solutions directly to Systems Integrators (SI) and Independent Software Vendors (ISV) in the public sector space. You'll focus on selling directly to partners to ensure Anthropic's AI capabilities are delivered within their own solutions and service offerings. Working closely with GTM, product, and marketing teams, you'll help these partners understand and implement our technology while driving significant revenue growth.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As a member of the Applied AI team at Anthropic, you will be a Forward Deployed Engineer (FDE) who embeds directly with our most strategic customers to drive transformational AI adoption. You will collaborate closely with customer teams to ship advanced AI applications that solve real world business problems. Our FDEs engage with customers to accelerate the adoption of existing products and create new applications built on our models.
Working closely with our Post-Sales, Product, and Engineering teams, you'll combine engineering expertise, an understanding of frontier AI applications, and customer-facing skills to understand customer workflows and develop innovative solutions that address complex business challenges while maintaining our high standards for safety and reliability.
You will sit at the frontier of enterprise AI deployments and serve as one of our founding FDEs who helps to shape our forward-deployed motion. We expect our FDEs to operate autonomously, thrive under ambiguity, and represent Anthropic at the highest level in customer environments. This is a significant responsibility: you’ll play a key role in championing our mission in the enterprise.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Are you a mission-driven leader with expertise in federal government personnel management? If so, Democracy Forward Foundation (“Democracy Forward”) is hiring a Director of Government Workforce Reform to join our team.
Democracy Works 250 is a new initiative from Democracy Forward’s Civil Service Strong that is building the personnel systems and structures our democracy needs for the next 250 years, and we’re looking for a bold, strategic, and future-focused Director of Government Workforce Reform to help lead the critical workforce policy aspect of that work.
This is not a policy role for tinkering at the edges. It’s an opportunity to lead the most ambitious redesign of federal workforce policy in a generation—one that meets the scale of the moment and shapes how federal government institutions function for decades to come.
As Director of Government Workforce Reform, you will drive a transformational process to reimagine how government recruits, supports, and empowers the people who make it work. You will lead a national reform coalition and run high-impact working groups, and translate insights from public servants and cross-sector experts into a future-ready, field-informed reform options library.
This position is full-time. Democracy Forward’s office is located in Washington, DC. While this position offers flexibility to work remotely, it requires regular in-office presence eight (8) times per month (averaging 2 times per week) to effectively support the nature of the role. Remote candidates within the United States will also be considered. If a candidate is outside of the DC area, travel to DC is expected 1-2 times per quarter.
The Director of Government Workforce Reform will report to the Director of Democracy Works 250 and work closely with staff and fellows on the Civil Service Strong team.
Democracy Works 250 is working to deliver a coalition-backed options library for how government hires, manages, and empowers people in a modern democracy. This options library will serve as a roadmap for workforce transformation—defining the systems, protections, and incentives needed to build a federal workforce that is effective, fair, and prepared for what’s ahead.
Below are the key areas of responsibility for this role. The specifics of each responsibility may vary based on the needs of the organization and department.
Architect Transformational Policy Reform:
Convene and Drive a National Reform Coalition:
Lead a High-Impact Working Groups and Policy Tracks:
Bridge Strategy Across Projects:
Strategic & Visionary:
Experienced & Impact-Oriented:
Coalition-Minded:
Committed to People-Centered Government:
We recognize that strong candidates may not meet 100% of the qualifications listed. If you believe you can contribute meaningfully to our team, we encourage you to apply.
The salary range for this position is $150,000- $170,000 commensurate with experience plus excellent benefits. Compensation is commensurate with experience.
Democracy Forward Foundation is an equal opportunity employer. We value a diverse workforce and an inclusive culture. We encourage applications from all qualified individuals. We strongly encourage people of color; LGBTQIA+ people; women; people with disabilities; and veterans to apply. Please note that all lawyer positions require a law degree and active membership in the bar of a state or the District of Columbia.
Not sure you meet all of our qualifications? Research shows that men apply for jobs when they meet an average of 60% of the criteria. Yet women and other people who are systematically marginalized tend to only apply if they meet every requirement. If you believe that you could excel in this role, we encourage you to apply. We are dedicated to considering a broad array of candidates, including those with diverse workplace experiences and backgrounds. Whether you are new to mission-based legal and policy work, returning to work after a gap in employment, or simply looking to transition or take the next step in your career path, we welcome the opportunity to have you on our radar. Please include in your cover letter the nature of your interest in legal and policy work promoting democracy and progress and what you hope to bring to our team and the role for which you are applying.
Democracy Forward is committed to providing equal employment opportunities for all, including those with disabilities. If you require a reasonable accommodation to complete this application, interview, or otherwise participate in the hiring process, please direct your inquiries to recruiting@democracyforward.org or 833-391-4732.
Ready to apply?
Apply to Democracy Forward
Share this job
Are you a future-focused leader eager to reimagine the talent structures of the federal government? If so, Democracy Forward Foundation (“Democracy Forward”) is hiring a Director of the Future of Government Talent Program to join our team.
The next era of effective government depends on the teams behind it—how they’re structured, how they work, and who’s on them.
Democracy Forward is launching a bold, multi-year initiative to reinvent how the federal government organizes and deploys its talent—and we’re seeking a visionary and operationally grounded Director of the Future of Government Talent Program to lead this work.
This is a chance to lead a systems transformation effort that will design the teams, skills, and staffing models our federal government needs to deliver—now and in the decades ahead. By building modern talent and team structures across government functions, this initiative will help ensure the government is ready with the people and capacity to deliver results when it is ready to build and hire.
This position is full-time. Democracy Forward’s office is located in Washington, DC. While this position offers flexibility to work remotely, it requires regular in-office presence eight (8) times per month (averaging 2 times per week) to effectively support the nature of the role. Remote candidates within the United States will also be considered. If a candidate is outside of the DC area, travel to DC is expected 1-2 times per quarter.
The Director of the Future of Government Talent Program will report to the Director of Democracy Works 250 and work closely with staff and fellows on the Civil Service Strong team.
Democracy Works 250 is designing the teams—and finding the people—that will power a modern federal government.
This initiative will create the talent models and actually find the people needed to equip government with agile and interdisciplinary teams that can deliver on the policies and programs Americans expect. By transforming how government organizes its people, we aim to transform how government shows up for the people it serves.
Below are the key areas of responsibility for this role. The specifics of each responsibility may vary based on the needs of the organization and department.
Design Interdisciplinary, Future-Ready Government Teams:
Build a National Strategy to Source and Prepare Talent:
Oversee Development of a Civic Talent Platform & Training Curriculum:
Co-Design with Communities and Institutions:
Bridge Strategy Across Projects:
A Builder with Bold Imagination:
Fluent in Talent—from Government and Beyond:
A Strategic Recruiter and Workforce Planner:
Collaborative and Equity-Driven:
We recognize that strong candidates may not meet 100% of the qualifications listed. If you believe you can contribute meaningfully to our team, we encourage you to apply.
The salary range for this position is $150,000 - $170,000 plus excellent benefits. Compensation is commensurate with experience.
Democracy Forward Foundation is an equal opportunity employer. We value a diverse workforce and an inclusive culture. We encourage applications from all qualified individuals. We strongly encourage people of color; LGBTQIA+ people; women; people with disabilities; and veterans to apply. Please note that all lawyer positions require a law degree and active membership in the bar of a state or the District of Columbia.
Not sure you meet all of our qualifications? Research shows that men apply for jobs when they meet an average of 60% of the criteria. Yet women and other people who are systematically marginalized tend to only apply if they meet every requirement. If you believe that you could excel in this role, we encourage you to apply. We are dedicated to considering a broad array of candidates, including those with diverse workplace experiences and backgrounds. Whether you are new to mission-based legal and policy work, returning to work after a gap in employment, or simply looking to transition or take the next step in your career path, we welcome the opportunity to have you on our radar. Please include in your cover letter the nature of your interest in legal and policy work promoting democracy and progress and what you hope to bring to our team and the role for which you are applying.
Democracy Forward is committed to providing equal employment opportunities for all, including those with disabilities. If you require a reasonable accommodation to complete this application, interview, or otherwise participate in the hiring process, please direct your inquiries to recruiting@democracyforward.org or 833-391-4732.
Ready to apply?
Apply to Democracy Forward
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As an Applied AI team member at Anthropic, you will be a Pre-Sales architect focused on becoming a trusted technical advisor helping large enterprises understand the value of Claude and paint the vision on how they can successfully integrate and deploy Claude into their technology stack. You'll combine your deep technical expertise with customer-facing skills to architect innovative LLM solutions that address complex business challenges while maintaining our high standards for safety and reliability.
Working closely with our Sales, Product, and Engineering teams, you'll guide customers from initial technical discovery through successful deployment. You'll leverage your expertise to help customers understand Claude's capabilities, develop evals, and design scalable architectures that maximize the value of our AI systems.
Please note this role requires 3 days in office per week.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role:
As a member of the Applied AI team at Anthropic, you will be a technical Product Engineer focused on becoming a trusted technical advisor to Digital Native Businesses - technology companies adopting the Claude API into their core products. You will work closely with customer product and engineering teams as they ship new products powered by Claude: advising on architecture design decisions, developing evaluation frameworks, and guiding customers through the most cutting-edge implementation patterns for LLMs.
Working closely with our Sales, Product, and Engineering teams, you'll guide a focused portfolio of customers from technical discovery through successful deployment. You'll combine deep engineering expertise with customer-facing skills to help customers unlock the full potential of Claude APIs and move their products closer to the frontier, while maintaining our high standards for safety and reliability.
Responsibilities:
You may be a good fit if you have:
Please note this role requires 3 days in office per week.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As a member of the Applied AI team at Anthropic, you will be a Forward Deployed Engineer (FDE) who embeds directly with our most strategic federal civilian (and, occasionally, state and local) customers to drive transformational AI adoption. You will collaborate closely with customer teams to ship advanced AI applications that solve real world business problems. Our FDEs engage with customers to accelerate the adoption of existing products and create new applications built on our models.
Working closely with our Post-Sales, Product, and Engineering teams, you'll combine engineering expertise, an understanding of frontier AI applications, and customer-facing skills to understand customer workflows and develop innovative solutions that address complex business challenges while maintaining our high standards for safety and reliability.
You will sit at the frontier of government AI deployments and serve as one of our founding FDEs who helps to shape our forward-deployed motion. We expect our FDEs to operate autonomously, thrive under ambiguity, and represent Anthropic at the highest level in customer environments. This is a significant responsibility: you’ll play a key role in championing our mission in the enterprise.
Work within customer systems to build production applications with Claude models, ensuring that these products meet customer requirements.
Deliver technical artifacts for customers like MCP servers, sub-agents, and agent skills that will be used in production workflows.
Provide white glove deployment support for Anthropic products in federal government environments.
Identify and codify repeatable deployment patterns and contribute insights back to our Product and Engineering teams.
Maintain strong knowledge of the latest developments in LLM capabilities, implementation patterns, and AI product development stacks.
Build long term relationships with customers and proactively identify new opportunities for AI deployment throughout the lifecycle of an engagement.
Travel frequently (25-50%) to customer sites to build in person with customers.
Be a champion for Anthropic’s mission in the field.
Security clearance not required, but a plus
Must have prior experience working with US federal civilian, state, or local agencies
4+ years of experience in a technical, customer facing role such as Forward Deployed Engineer, or as a Software Engineer with consulting experience. Former technical founders are also encouraged to apply.
Production experience with LLMs including advanced prompt engineering, agent development, evaluation frameworks, and deployment at scale.
Strong programming skills with proficiency in Python (and ideally in one or more additional languages like TypeScript, Java, etc) and experience shipping production applications
High agency with an ability to navigate ambiguity present in complex organizations.
High cooperation mindset for cross-organizational collaboration, balancing competing priorities with integrity.
Passion for advancing safe, beneficial AI systems through creative technical applications.
Strong communication skills to conduct discovery with customers and to convey technical concepts to diverse stakeholders while maintaining a low ego and collaborative approach.
Experience with government IT systems and/or AI deployment patterns is a plus.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As an Applied AI team member at Anthropic, you will be a Pre-Sales architect focused on becoming a trusted technical advisor helping national security and defense agencies understand the value of Claude and paint the vision on how they can successfully integrate and deploy Claude into their technology stack. You'll combine your deep technical expertise with customer-facing skills to architect innovative LLM solutions that address complex mission challenges while maintaining our high standards for safety and reliability.
Working closely with our Sales, Product, and Engineering teams, you'll guide customers from initial technical discovery through successful deployment. You'll leverage your expertise to help customers understand Claude's capabilities, develop evals, and design scalable architectures that maximize the value of our AI systems.
Partner with account executives to deeply understand customer requirements and translate them into technical solutions, ensuring alignment between business objectives and technical implementation
Serve as the primary technical advisor to enterprise customers throughout their Claude adoption journey, from discovery to initial evaluation through deployment. You will need to coordinate internally across multiple teams & stakeholders to drive customer success
Support customers building with Claude Code, the Claude API, and Claude for Enterprise
Create and deliver compelling technical content tailored to different audiences. You will need to be able to spread the gamut from technical deep dives for engineering & development teams up to business value focused conversations with executives
Guide technical architecture decisions and help customers integrate Claude effectively into their existing technology stack
Help customers develop evaluation frameworks to measure Claude's performance for their specific use cases
Identify common integration patterns and contribute insights back to our Product and Engineering teams
Travel frequently to customer sites for workshops, technical deep dives, and relationship building
Maintain strong knowledge of the latest developments in LLM capabilities and implementation patterns
TS/SCI clearance required
Must have prior experience working with US national security (defense and/or intelligence) agencies
5+ years of experience in technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager
Experience navigating complex buying cycles involving multiple stakeholders
Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering & IT teams, and more
Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders
Experience designing scalable cloud architectures and integrating with enterprise systems
Familiar with Python
Familiarity with common LLM frameworks and tools or a background in machine learning or data science
Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities
A love of teaching, mentoring, and helping others succeed
Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders. You enjoy engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities
Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As an Applied AI team member at Anthropic, you will be a Pre-Sales architect focused on becoming a trusted technical advisor helping federal civilian agencies understand the value of Claude and paint the vision on how they can successfully integrate and deploy Claude into their technology stack. You'll combine your deep technical expertise with customer-facing skills to architect innovative LLM solutions that address complex mission challenges while maintaining our high standards for safety and reliability.
Working closely with our Sales, Product, and Engineering teams, you'll guide customers from initial technical discovery through successful deployment. You'll leverage your expertise to help customers understand Claude's capabilities, develop evals, and design scalable architectures that maximize the value of our AI systems.
Partner with account executives to deeply understand customer requirements and translate them into technical solutions, ensuring alignment between business objectives and technical implementation
Serve as the primary technical advisor to enterprise customers throughout their Claude adoption journey, from discovery to initial evaluation through deployment. You will need to coordinate internally across multiple teams & stakeholders to drive customer success
Support customers building with Claude Code, the Claude API, and Claude for Enterprise
Create and deliver compelling technical content tailored to different audiences. You will need to be able to spread the gamut from technical deep dives for engineering & development teams up to business value focused conversations with executives
Guide technical architecture decisions and help customers integrate Claude effectively into their existing technology stack
Help customers develop evaluation frameworks to measure Claude's performance for their specific use cases
Identify common integration patterns and contribute insights back to our Product and Engineering teams
Travel frequently to customer sites for workshops, technical deep dives, and relationship building
Maintain strong knowledge of the latest developments in LLM capabilities and implementation patterns
Security clearance not required, but a plus
Must have prior experience working with US federal agencies
5+ years of experience in technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager
Experience navigating complex buying cycles involving multiple stakeholders
Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering & IT teams, and more
Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders
Experience designing scalable cloud architectures and integrating with enterprise systems
Familiar with Python
Familiarity with common LLM frameworks and tools or a background in machine learning or data science
Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities
A love of teaching, mentoring, and helping others succeed
Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders. You enjoy engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities
Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As an Applied AI team member at Anthropic, you will be a Pre-Sales architect focused on becoming a trusted technical advisor helping systems integrators, startups, and other GovTech companies understand the value of Claude and paint the vision on how they can successfully integrate and deploy Claude into their technology stack. You'll combine your deep technical expertise with customer-facing skills to architect innovative LLM solutions that address complex mission challenges while maintaining our high standards for safety and reliability.
Working closely with our Sales, Product, and Engineering teams, you'll guide customers from initial technical discovery through successful deployment. You'll leverage your expertise to help customers understand Claude's capabilities, develop evals, and design scalable architectures that maximize the value of our AI systems.
Partner with account executives to deeply understand customer requirements and translate them into technical solutions, ensuring alignment between business objectives and technical implementation
Serve as the primary technical advisor to enterprise customers throughout their Claude adoption journey, from discovery to initial evaluation through deployment. You will need to coordinate internally across multiple teams & stakeholders to drive customer success
Support customers building with Claude Code, the Claude API, and Claude for Enterprise
Create and deliver compelling technical content tailored to different audiences. You will need to be able to spread the gamut from technical deep dives for engineering & development teams up to business value focused conversations with executives
Guide technical architecture decisions and help customers integrate Claude effectively into their existing technology stack
Help customers develop evaluation frameworks to measure Claude's performance for their specific use cases
Identify common integration patterns and contribute insights back to our Product and Engineering teams
Travel frequently to customer sites for workshops, technical deep dives, and relationship building
Maintain strong knowledge of the latest developments in LLM capabilities and implementation patterns
US Secret Clearance preferred for working with defense tech customers
Prior work in government, government contracting, or government tech preferred
5+ years of experience in technical customer-facing roles such as Solutions Architect, Sales Engineer, or Technical Account Manager
Experience navigating complex buying cycles involving multiple stakeholders
Exceptional ability to build relationships with and communicate technical concepts to diverse stakeholders to include C-suite executives, engineering & IT teams, and more
Strong technical communication skills with the ability to translate customer requirements between technical and business stakeholders
Experience designing scalable cloud architectures and integrating with enterprise systems
Familiar with Python
Familiarity with common LLM frameworks and tools or a background in machine learning or data science
Excitement for engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities
A love of teaching, mentoring, and helping others succeed
Excellent communication and interpersonal skills, able to convey complicated topics in easily understandable terms to a diverse set of external and internal stakeholders. You enjoy engaging in cross-organizational collaboration, working through trade-offs, and balancing competing priorities
Passion for thinking creatively about how to use technology in a way that is safe and beneficial, and ultimately furthers the goal of advancing safe AI systems
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
The Detection & Response (D&R) team plays a critical role in protecting our systems, users, and data from security threats. We’re looking for an experienced Technical Program Manager to own and evolve incident management within D&R. This is a senior-level specialization on the Technical Program Manager ladder, focused on how we detect, respond to, and learn from security and operational incidents.
You’ll be the driving force behind maturing and scaling our incident response lifecycle—from detection and triage through containment, remediation, and post-incident review. Critically, some of the highest-impact work in this role happens after the immediate response: gathering data on incident trends, reporting on patterns and root causes, and working cross-functionally across engineering, security, infrastructure, and product teams to ensure that broad fixes and systemic improvements are actually implemented.
You won’t just manage incidents—you’ll make sure we get meaningfully better after each one.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Share this job
Are you a highly organized and detail-oriented professional with an interest in nonprofit development? If so, Democracy Forward Foundation (“Democracy Forward”) is looking for a Development Associate to join our team.
This position will be responsible for providing critical administrative and data management support for our Development team such as grants tracking, document management and preparation, record retention, and calendar management to ensure smooth operations of the development portfolio. The Development Associate will need to be a systems-thinker who can execute our development operations while also having an eye for process improvement.
This position is full-time. Democracy Forward’s office is located in Washington, DC. While this position offers flexibility to work remotely, it requires regular in-office presence eight (8) times per month (averaging 2 times per week) for DC-based employees. Remote candidates within the United States will also be considered. If a candidate is outside of the DC area, travel to DC is expected 1–2 times per quarter.
The Development Associate will report to the Senior Director of Development and will work closely with all members of the Development team staff across the organization.
Below are the key areas of responsibility for this role. The specifics of each responsibility may vary based on the needs of the organization and department.
Data Management:
Development Support:
We recognize that strong candidates may not meet 100% of the qualifications listed. If you believe you can contribute meaningfully to our team, we encourage you to apply.
Required:
Preferred:
The salary for this position is $72,100 plus excellent benefits. This role is overtime eligible. Compensation is commensurate with experience.
Democracy Forward Foundation is an equal opportunity employer. We value a diverse workforce and an inclusive culture. We encourage applications from all qualified individuals. We strongly encourage people of color; LGBTQIA+ people; women; people with disabilities; and veterans to apply. Please note that all lawyer positions require a law degree and active membership in the bar of a state or the District of Columbia.
Not sure you meet all of our qualifications? Research shows that men apply for jobs when they meet an average of 60% of the criteria. Yet women and other people who are systematically marginalized tend to only apply if they meet every requirement. If you believe that you could excel in this role, we encourage you to apply. We are dedicated to considering a broad array of candidates, including those with diverse workplace experiences and backgrounds. Whether you are new to mission-based legal and policy work, returning to work after a gap in employment, or simply looking to transition or take the next step in your career path, we welcome the opportunity to have you on our radar. Please include in your cover letter the nature of your interest in legal and policy work promoting democracy and progress and what you hope to bring to our team and the role for which you are applying.
Democracy Forward is committed to providing equal employment opportunities for all, including those with disabilities. If you require a reasonable accommodation to complete this application, interview, or otherwise participate in the hiring process, please direct your inquiries to recruiting@democracyforward.org or 833-391-4732.
Ready to apply?
Apply to Democracy Forward
Bond Vet is on a mission to strengthen the human-animal bond through better pet care. We offer primary and urgent care, so we’re there for pets when they need us most. Our clinics are designed with pets and people in mind: warm, friendly, and highly sniffable. We balance this design with a strong focus on technology, all built in-house, which means we can easily innovate our systems to improve the veterinary team, pet, and client experience.

Bond Vet is building the next generation of veterinary clinics from the ground up — and we’re looking for a collaborative Veterinary Clinic Manager to join our team.
Our Veterinary Clinic Manager will lead our clinic team (veterinarians, technicians, assistants + care coordinators) and oversee day-to-day operations. You are a reliable, proactive, and conscientious leader who is dedicated to the success of your team.
This is a full time position.
Ready to apply?
Apply to Bond Vet
Share this job
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
We're seeking an exceptional engineer to join Anthropic's Detection Platform team to build and scale our next-generation security analytics infrastructure. In this role, you'll architect and implement data pipelines that process massive amounts of security telemetry, develop ML-powered detection systems, and create innovative solutions that leverage Claude to transform security operations.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
Ready to apply?
Apply to Anthropic
Cookies & analytics
This site uses cookies from third-party services to deliver its features and to analyze traffic.