About 10a Labs: 10a Labs is the safety and threat-intelligence layer trusted by frontier AI labs, AI unicorns, Fortune 10 companies, and leading global technology platforms. Our adversarial red teaming, model evaluations, and intelligence collection enable engineering, safety, and security teams to stay ahead of evolving threats and deploy AI systems safely.
In this role, you will:
Requirements:
Compensation & Benefits:
Retirement: 401(k) plan
Ready to apply?
Apply to 10a Labs
You might excel in this role if you:
Nice to Have:
Compensation & Benefits:
Retirement: 401(k) plan
10a Labs’ Investigations Team is seeking additional support to respond to critical safety incidents and conduct investigations across a range of abuse areas, including but not limited to violence, hate, mental health, CBRNE, child safety, and cyber abuse.
This role requires the ability to work Pacific Time (PT) hours and have valid work authorization in the United States.
About the Role:
As a Threat Investigator, you will support our Investigations Team and be responsible for detecting and/or responding to malicious use and abuse, investigating the activity, drafting reports based on your findings, and making recommendations. This requires technical experience investigating threat actors, harmful behaviors, and/or other malicious activity, as well as a strong ability to use SQL and Python to query, transform, and understand data. You will also respond to escalations and reactive leads, including those that are not caught by our existing safety systems.
Investigations involve sensitive and distressing content, including sexual, violent, or otherwise-disturbing material.
This role includes serving in an on-call capacity that will involve resolving urgent escalations outside of normal work hours, occasionally including evenings and weekends. The typical on-call rotation is 7 consecutive days per 2 months, though this may shift based on operational needs, and candidates should be prepared to remain flexible.
This is a remote, full-time position.
In this role, you will:
Requirements:
Preferred Qualities (but not required):
Data Center Watch is a research project that tracks political risks arising from local opposition to data centers, with the aim of understanding the emerging regulatory landscape of data center development in the U.S.
In this role, you will:
We're looking for someone who:
Requirements:
Compensation & Benefits:
In this role, you will:
We're looking for someone who:
Requirements:
Compensation & Benefits:
About the Role: We are hiring a Technical Recruiter to architect and scale our talent engine and strengthen core people operations.
This is a high-ownership role focused primarily on recruiting exceptional technical and mission-driven talent in competitive markets. You will build talent pipelines across engineering, red teaming, research, product, operations, and specialized technical roles, and design the systems that enable new hires to succeed.
You will work directly with leadership to shape hiring strategy and ensure we consistently hire exceptional talent.
About the Role: As an Engineering Fellow, you will apply your technical skills to support high-impact research problems. Fellows contribute across the project lifecycle, from processing diverse data sources and designing dynamic visualizations to deploying sophisticated models and building cloud infrastructure. This is a hands-on role at the intersection of applied research and practical engineering, with opportunities to explore novel methods, test ideas quickly, and generate insights.
Fellows specialize in one of three concentrations based on interest and past experience: Software Engineering, Data Engineering, or Machine Learning.
In this role, you will:
Software Engineering concentration responsibilities may include:
Data Engineering concentration responsibilities may include:
Machine Learning concentration responsibilities may include:
We’re looking for someone who:
Requirements:
Nice to have:
Software Engineering:
Data Engineering:
Machine Learning Engineering:
Benefits:
Our client’s team is responsible for identifying and investigating misuses of their products, especially new types of abuse, and for creating systems of defense to address them. Part of this team specializes in the latter, and they need additional support to monitor abuse during product launches and to design and build scaled detection, review, enforcement, and measurement for uncovered risks. This is a full-time (40 hours per week), contract position.
About the Role:
Protection Science Engineering is an interdisciplinary role mixing data science, machine learning, investigation, and policy/protocol development. As a Protection Scientist Engineer within the client team, you will assist in designing and building systems to proactively identify and enforce on abuse across their AI products. This includes ensuring they have robust abuse monitoring in place for new products, sustaining monitoring for existing products, and prototyping and incubating systems of defense against the highest-risk harms. You will also respond to and investigate critical escalations, especially those that are not caught by existing safety systems. This will require developing an understanding of the products and data, and working cross-functionally with product, policy, and engineering teams.
You will need a strong ability to use SQL and Python to query, transform, and understand data, and to build and improve prototype detections. An investigative mindset is key, along with experience identifying and enforcing on bad actors (in any industry). A background in data science, machine learning and classification basics, AI, and/or threat investigation is a plus.
This is a remote position, based in London, UK. The position includes participation in an on-call rotation that will involve resolving urgent escalations outside of normal work hours, for at least 7 consecutive days per quarter. Some detection and investigations may involve exposure to sensitive and distressing content, including sexual, violent, or otherwise-disturbing material.
In This Role, You Will:
Requirements:
Desired Qualities (but not required):
3–8 Years of Industry Experience | Remote | High-Impact
About the Role: We’re looking for an infrastructure-focused engineer who thrives at the intersection of machine learning, systems, and product delivery. This is a hands-on role responsible for deploying, monitoring, and scaling a real-time ML-powered content moderation system used to detect and triage abuse, threats, and edge-case language. You’ll work closely with ML engineers, researchers, and clients to build infrastructure that makes high-performance models accessible and reliable in the wild.
In This Role, You Will:
We’re Looking for Someone Who:
Nice to Have Experience With:
What Success Looks Like in the First 3 Months:
Compensation & Benefits:
About the Role: 10a Labs is seeking an AI Engineer for a four-month contract (potential to extend) to partner closely with engineers and internal teams to design and implement cutting-edge AI solutions. You’ll build practical systems that apply modern language models to real-world problems.
This role blends applied AI expertise, strong engineering fundamentals, and creative problem-solving. You’ll prototype quickly, iterate rapidly, and translate emerging model capabilities into reliable, production-ready systems.
In this role, you will:
We’re looking for someone who:
Requirements:
Role overview: As a Security Engineer, you will be on the front line of securing cutting-edge AI systems. You’ll identify vulnerabilities, build protections into code and workflows, and partner with researchers to reduce risks from adversarial actors. This role requires strong technical skills, a security-first mindset, and comfort working in a fast-paced startup environment where threats and priorities evolve quickly.
In this role, you will:
We're looking for someone who:
Requirements:
Very Nice to Have:
Compensation & Benefits:
10a Labs’ Investigations Team needs additional support to respond to critical safety incidents and conduct investigations across a range of abuse areas, including but not limited to violence, hate, mental health, CBRNE, child safety, and cyber abuse.
About the Role:
As a Threat Investigator, you will support our London-based Europe, Middle East, and Africa (EMEA) Investigations Team. You will be responsible for detecting and/or responding to malicious use and abuse on our platform, investigating activity, drafting reports based on your findings, and making recommendations. This requires technical experience investigating threat actors, harmful behaviors, cyber abuse, and/or other malicious activity, as well as a strong ability to use SQL and Python to query, transform, and understand data. You will also respond to escalations and reactive leads, including those that are not caught by our existing safety systems.
Investigations involve sensitive and distressing content, including sexual, violent, or otherwise-disturbing material.
This role includes serving in an on-call capacity that will involve resolving urgent escalations outside of normal work hours, occasionally including evenings and weekends. The typical on-call rotation is 7 consecutive days per 2 months, though this may shift based on operational needs, and candidates should be prepared to remain flexible.
In this role, you will:
Requirements:
Desired Qualities (but not required):
10a Labs’ Investigations Team needs additional support to respond to critical safety incidents and conduct investigations across a range of abuse areas, including but not limited to violence, hate, mental health, CBRNE, child safety, and cyber abuse.
About the Role:
As a Threat Investigator, you will support our remote-first, Sydney-based Asia-Pacific (APAC) Investigations Team. You will be responsible for detecting and/or responding to malicious use and abuse on our platform, investigating activity, drafting reports based on your findings, and making recommendations. This requires technical experience investigating threat actors, harmful behaviors, cyber abuse, and/or other malicious activity, as well as a strong ability to use SQL and Python to query, transform, and understand data. You will also respond to escalations and reactive leads, including those that are not caught by our existing safety systems.
Investigations involve sensitive and distressing content, including sexual, violent, or otherwise-disturbing material.
This role includes serving in an on-call capacity that will involve resolving urgent escalations outside of normal work hours, occasionally including evenings and weekends. The typical on-call rotation is 7 consecutive days per 2 months, though this may shift based on operational needs, and candidates should be prepared to remain flexible.
In this role, you will:
Requirements:
Desired Qualities (but not required):
About the Role: We’re looking for an experienced ML engineer with a strong foundation in traditional ML and hands-on experience applying those skills to modern LLM systems. This is an applied role for someone who owns the full ML lifecycle: from data pipelines and model training to evaluation, deployment, and ongoing iteration in real-world production environments.
3–8+ Years of Industry Experience Required
In This Role, You Will:
We’re Looking For Someone Who:
Requirements:
Nice To Have Experience With:
What Success Looks Like in the First 3 Months:
Compensation & Benefits: