Pick a job to read the details
Tap any role on the left — its description and apply link will open here.
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.
The ability to effectively evaluate and monitor AI systems will grow in importance as models become more capable, autonomous, and integrated into society. If models can detect and game evaluations, obscure their reasoning, or behave differently under observation, the safety claims that governments and developers rely on become unreliable. Understanding and addressing these risks is essential to ensuring that oversight of advanced AI systems keeps pace with their capabilities.
The Model Transparency team is a research team within AISI focused on ensuring that evaluations, assessments, and monitoring of frontier AI systems remain reliable as models become less transparent. We research how and why oversight is declining – through phenomena such as evaluation awareness, unfaithful chain-of-thought reasoning, and changes in model architectures – and develop methods (including white and black box methods) to detect, measure, and mitigate potential issues. We share our findings with frontier AI companies (including Anthropic, OpenAI, DeepMind), UK government officials, and allied governments, and publicly to inform their deployment, research, and policy decisions. We also work directly with safety teams at frontier labs, contributing to safety case reviews and helping improve their alignment evaluation methodology.
Our recent work includes auditing games for sandbagging, reproducing natural emergent misalignment from reward hacking, and identifying open-weight language models that game propensity evaluations.
We're looking for Research Scientists and Research Engineers for the Model Transparency team with expertise in technical AI safety – such as interpretability, capability or alignment evaluations, model transparency – or with broader experience with frontier LLM research and development. An ideal candidate would have a strong track record of high-quality research in technical AI safety or adjacent fields.
We're interested in candidates along the spectrum between Research Engineers and Research Scientists. The application form will ask you to indicate which role you lean towards.
The team is led by Joseph Bloom, advised by Geoffrey Irving. You'll work with talented, mission-driven technical staff across AISI, including alumni from Anthropic, OpenAI, DeepMind, and top universities. You may also collaborate with external research teams including those at frontier AI labs, METR, and FAR.
We are open to hires across a range of experience levels.
This role requires three days a week in person, with flexibility for occasional periods of remote working.
The work could also involve:
If you’re unsure whether you meet the criteria below, we’d encourage you to apply anyway – we’d rather you erred on the side of applying than not.
We don’t expect RS candidates to meet all of the following, but they are useful signal:
We don’t expect RE candidates to meet all of the following, but they are useful signal:
Candidates should expect to go through some or all of the following stages:
Impact you couldn't have anywhere else
Resources & access
Growth & autonomy
Life & family*
*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 made up of a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.
This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
The full range of salaries are available below:
Artificial Intelligence can be a useful tool to support your application, however, all examples and statements provided must be truthful, factually accurate and taken directly from your own experience. Where plagiarism has been identified (presenting the ideas and experiences of others, or generated by artificial intelligence, as your own) applications may be withdrawn and internal candidates may be subject to disciplinary action. Please see our candidate guidance for more information on appropriate and inappropriate use.
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisations. DLUHC then carry out the pre employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is ensured and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.
We may be able to offer roles to applicant from any nationality or background. As such we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).
Ready to apply?
Apply to AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.
The Human Influence team studies when, why, and how frontier AI systems influence human attitudes and behaviour. The team's mandate is to build a rigorous, world-class evidence base for the safe and responsible development of frontier AI. We measure the impacts of frontier AI systems on human users, to identify risks to human agency and wellbeing; and develop mitigation strategies. This includes research on persuasion, manipulation, deception, advice-giving, theory of mind, anthropomorphism, sycophancy, and socioaffective human–AI relationships.
Our team includes top technical talent from academia and frontier AI companies. Our projects combine methods from computational social science, AI safety and security, cognitive science, behavioural science, computer science, machine learning, and data science. Many of our projects involve conducting careful and rigorous human–AI interaction experiments and randomised controlled trials (RCTs).
On our team, you will have the:
As an example of our work, we recently completed the largest-ever study on the persuasive capabilities of conversational AI, a large-scale study on how people use and follow personal advice from AI chatbots and a longitudinal study on how anthropomorphic AI facilitates human-AI relationship building.
Successful candidates will work with our Research Scientists to design and run studies that answer these important questions. The role is particularly suitable for candidates with an interest in pursuing a research career (e.g. recently graduated MSc students or early-stage PhD students). We encourage applications from candidates who are excited about this opportunity, but who may not meet all the stated criteria.
We are especially excited about candidates with experience in one or more of these areas:
This is a full- or part-time, fixed-term contract (6-months) in London.
Required Skills and Experience
Desired Skills and Experience
We may be able to offer roles to applicant from any nationality or background. As such we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).
In accordance with the salary figures below, this role has been specifically scoped at Level 3.
Impact you couldn't have anywhere else
Resources & access
Growth & autonomy
Life & family*
*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 made up of a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.
This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
The full range of salaries are available below:
Artificial Intelligence can be a useful tool to support your application, however, all examples and statements provided must be truthful, factually accurate and taken directly from your own experience. Where plagiarism has been identified (presenting the ideas and experiences of others, or generated by artificial intelligence, as your own) applications may be withdrawn and internal candidates may be subject to disciplinary action. Please see our candidate guidance for more information on appropriate and inappropriate use.
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisations. DLUHC then carry out the pre employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is ensured and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.
We may be able to offer roles to applicant from any nationality or background. As such we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).
Ready to apply?
Apply to AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.
AISI's Chem Bio (CB) team conducts technical research to assess evolving AI capabilities related to science R&D and CB misuse, and the effectiveness of technical safeguards that might mitigate risks arising from those capabilities.
The goal of our research is to inform critical decisions on security, opportunities, policy, and risk mitigation made by governments and AI developers.
We're a close-knit, unusually interdisciplinary team—made up of machine learning researchers and engineers, software engineers, virologists and bacteriologists, behavioural research scientists, biosecurity experts, long-standing CB policy specialists and talented generalists—who work closely with other technical and policy teams across government.
We are building a dedicated engineering function within the CB team — a small team that owns the shared platform, tooling, and infrastructure that our research projects depend on. This role is a senior individual contributor within that function. The successful candidate will:
We are looking for the following skills, experience and attitudes, but a successful candidate will not necessarily need to meet all these criteria. We can be flexible in shaping the role and salary to your background, expertise, and level of experience.
Strong candidates may also have:
Please note that this is a reserved post. We can only consider applications from UK nationals (including dual nationals who hold British citizenship). Appointment is conditional on successfully completing UK Government SC clearance. Prior clearance is not required—we will sponsor and support you. You should normally have been resident in the UK for the past 5 years. You may also be required to undergo Developed Vetting (DV). DV typically requires a longer period of UK residency (around 10 years). Employment is conditional on obtaining and maintaining the required clearance(s). More detail on clearance eligibility can be found on the UK Government website: National security vetting: clearance levels - GOV.UK.
Other core requirements:
Impact you couldn't have anywhere else
Resources & access
Growth & autonomy
Life & family*
*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 made up of a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.
This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
The full range of salaries are available below:
Artificial Intelligence can be a useful tool to support your application, however, all examples and statements provided must be truthful, factually accurate and taken directly from your own experience. Where plagiarism has been identified (presenting the ideas and experiences of others, or generated by artificial intelligence, as your own) applications may be withdrawn and internal candidates may be subject to disciplinary action. Please see our candidate guidance for more information on appropriate and inappropriate use.
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisations. DLUHC then carry out the pre employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is ensured and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.
We may be able to offer roles to applicant from any nationality or background. As such we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).
Ready to apply?
Apply to AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.
AISI's Chem Bio (CB) team conducts technical research to assess evolving AI capabilities related to science R&D and CB misuse, and the effectiveness of technical safeguards that might mitigate risks arising from those capabilities.
The goal of our research is to inform critical decisions on security, opportunities, policy, and risk mitigation made by governments and AI developers.
We're a close-knit, unusually interdisciplinary team—made up of machine learning researchers and engineers, software engineers, virologists and bacteriologists, behavioural research scientists, biosecurity experts, long-standing CB policy specialists and talented generalists—who work closely with other technical and policy teams across government.
We are building a dedicated engineering function within the CB team — a small team that owns the shared platform, tooling, and infrastructure that our research projects depend on. This role leads that function. The successful candidate will:
We are looking for the following skills, experience and attitudes, but a successful candidate will not necessarily need to meet all these criteria. We can be flexible in shaping the role and salary to your background, expertise, and level of experience.
Strong candidates may also have:
Please note that this is a reserved post. We can only consider applications from UK nationals (including dual nationals who hold British citizenship). Appointment is conditional on successfully completing UK Government SC clearance. Prior clearance is not required—we will sponsor and support you. You should normally have been resident in the UK for the past 5 years. You may also be required to undergo Developed Vetting (DV). DV typically requires a longer period of UK residency (around 10 years). Employment is conditional on obtaining and maintaining the required clearance(s). More detail on clearance eligibility can be found on the UK Government website: National security vetting: clearance levels - GOV.UK
Other core requirements
Impact you couldn't have anywhere else
Resources & access
Growth & autonomy
Life & family*
*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 made up of a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.
This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
The full range of salaries are available below:
Artificial Intelligence can be a useful tool to support your application, however, all examples and statements provided must be truthful, factually accurate and taken directly from your own experience. Where plagiarism has been identified (presenting the ideas and experiences of others, or generated by artificial intelligence, as your own) applications may be withdrawn and internal candidates may be subject to disciplinary action. Please see our candidate guidance for more information on appropriate and inappropriate use.
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisations. DLUHC then carry out the pre employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is ensured and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.
We may be able to offer roles to applicant from any nationality or background. As such we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).
Ready to apply?
Apply to AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.
About the Team
The Cyber and Autonomous Systems Team (CAST) is looking to research and map the evolving frontier of AI capabilities and propensities to inform critical security decisions that reduce loss-of-control risks from frontier AI. We focus on preventing harms from high-impact cybersecurity capabilities and highly capable autonomous AI systems.
Our team is a blend of high-velocity generalists and technical staff, from organisations such as Meta, Amazon, Palantir, DSTL and Jane Street. Our recent work has included building model evaluations suites – such as Replibench - the world’s most comprehensive evaluation suite for understanding the risk of a model autonomously replicating itself over the internet. We also regularly test the cyber and other relevant capabilities of frontier models, before they are released, to understand their risks.
As AI systems become more advanced, the potential for misuse of their cyber capabilities may pose a threat to the security of organisations and individuals. Cyber capabilities also form common bottlenecks in scenarios across other AI risk areas such as harmful outcomes from biological and chemical capabilities and from autonomous systems. One approach to better understanding these risks is by conducting robust empirical tests of AI systems so we can better understand how capable they currently are when it comes to performing cyber security tasks. In this role, you'll join a strongly collaborative team to help create new kinds of capability and safety evaluations to evaluate frontier AI systems as they are released.
About the Role
This is a cybersecurity engineer position focused on building environments and challenges to benchmark the cyber capabilities of AI systems. You'll design cyber ranges, CTF-style tasks, and evaluation infrastructure that allows us to rigorously measure how well frontier AI models perform on real-world cybersecurity tasks.
This work belongs inside UK government because understanding AI cyber capabilities is critical to national security, and robust empirical testing requires coordination across government, industry, and international partners to inform policy decisions on AI safety.
You'll work closely with research engineers, infrastructure engineers, and machine learning researchers across AISI. As a small, fast-moving team building first-of-its-kind evaluation infrastructure, you'll be able to influence research directions, own whole pieces of work, and bring your ideas to the table.
Core Responsibilities
Example Projects
Impact
Your work will directly shape the UK government's understanding of AI cyber capabilities, inform safety standards for frontier AI systems, and contribute to the global effort to develop rigorous evaluation methodologies. The evaluations you build will help determine how advanced AI systems are assessed before deployment
What we are looking for
We're flexible on the exact profile and expect successful candidates will meet many (but not necessarily all) of the criteria below.
Essential
Preferred
Example backgrounds
Core requirements
What We Offer
Impact you couldn't have anywhere else
Resources & access
Growth & autonomy
Life & family*
*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.
Salary
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 made up of a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.
This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
The full range of salaries are available below:
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
The interview process may vary candidate to candidate, however, you should expect a typical process to include some technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), conversations with your team lead. The process will culminate in a conversation with members of the senior team here at AISI.
Candidates should expect to go through some or all of the following stages once an application has been submitted:
Impact you couldn't have anywhere else
Resources & access
Growth & autonomy
Life & family*
*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 made up of a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.
This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
The full range of salaries are available below:
Artificial Intelligence can be a useful tool to support your application, however, all examples and statements provided must be truthful, factually accurate and taken directly from your own experience. Where plagiarism has been identified (presenting the ideas and experiences of others, or generated by artificial intelligence, as your own) applications may be withdrawn and internal candidates may be subject to disciplinary action. Please see our candidate guidance for more information on appropriate and inappropriate use.
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisations. DLUHC then carry out the pre employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is ensured and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.
We may be able to offer roles to applicant from any nationality or background. As such we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).
Ready to apply?
Apply to AI Security Institute
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.
Risks from misaligned AI systems will grow in importance as AI systems become more capable, autonomous, and integrated into society. AI control measures seek to detect, constrain, and/or counteract potentially misaligned AI models; we expect these measures to become increasingly important in the face of capable AI systems that may be unreliable, deceptive, or misaligned.
The Control Red Team partners with leading frontier AI companies to stress-test control measures. The team uses techniques from adversarial ML to develop algorithms to find a range of failures in control measures, which are then used to assess strengthen control measures. These partnerships allow us to directly influence vital control measures, while our position in government lets us bring our understanding of the state of control measures to broader government as they make critical deployment, research, and policy decisions.
The Control Red Team grew out of our previous work on control, including a library for running AI control experiments, stress-testing asynchronous monitors, chain-of-thought monitorability, evaluating control for LLM agents, practical challenges of control monitoring, and AI control safety cases. The Control Red Team additionally draws from expertise within our broader Red Team, which has world-leading expertise in human-led attacks against AI systems.
We're looking for an experienced researcher to lead the Control sub-team, driving its research agenda and managing a team of talented research scientists. The ideal candidate combines deep technical expertise in AI control and alignment with the leadership ability to set direction, develop people, and represent the team's work to senior stakeholders inside and outside government. We expect to offer this role at Level 5–7, with total annual compensation (base salary plus technical allowance) ranging from £105,000 to £145,000.
As Sub Team Lead, you will shape the Control sub-team's strategy and priorities with the Red Team lead, mentor junior and senior researchers, and serve as a key point of contact with frontier AI labs, UK government officials, and international partners. You'll work closely with the broader Red Team leadership – currently led by Xander Davies and advised by Geoffrey Irving and Yarin Gal – and collaborate with external teams including Redwood Research, Google DeepMind, Anthropic, and OpenAI.
Representative projects you might work on
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
The experiences listed below should be interpreted as examples of the expertise we're looking for, as opposed to a list of everything we expect to find in one applicant:
You may be a good fit if you have:
Strong candidates may also have:
The interview process may vary candidate to candidate, however, you should expect a typical process to include some technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), conversations with your team lead. The process will culminate in a conversation with members of the senior leadership team here at AISI.
Candidates should expect to go through some or all of the following stages once an application has been submitted:
Impact you couldn't have anywhere else
Resources & access
Growth & autonomy
Life & family*
*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 made up of a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.
This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
The full range of salaries are available below:
Artificial Intelligence can be a useful tool to support your application, however, all examples and statements provided must be truthful, factually accurate and taken directly from your own experience. Where plagiarism has been identified (presenting the ideas and experiences of others, or generated by artificial intelligence, as your own) applications may be withdrawn and internal candidates may be subject to disciplinary action. Please see our candidate guidance for more information on appropriate and inappropriate use.
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisations. DLUHC then carry out the pre employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is ensured and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.
We may be able to offer roles to applicant from any nationality or background. As such we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).
Ready to apply?
Apply to AI Security Institute
Cookies & analytics
This site uses cookies from third-party services to deliver its features and to analyze traffic.