UK AI Safety Institute (AISI)
Type: government
Credibility Rating: 4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: UK AI Safety Institute
AISI is a key institutional actor in AI safety, representing one of the first government-led efforts to systematically evaluate frontier AI models; its work and publications are directly relevant to governance, evaluation methodology, and international AI safety coordination.
Metadata
Importance: 72/100 · homepage
Summary
The UK AI Safety Institute (AISI) is the UK government's dedicated body for evaluating and mitigating risks from advanced AI systems. It conducts technical safety research, develops evaluation frameworks for frontier AI models, and works with international partners to inform global AI governance and policy.
Key Points
- First state-backed organization dedicated to AI safety, employing over 100 technical staff focused on frontier AI evaluation
- Conducts rigorous technical research on AI capabilities, risks, and risk mitigation strategies
- Collaborates with AI developers, policymakers, and international partners to shape global AI governance
- Has published research on frontier AI trends and AI persuasion capabilities
- Serves as a model for government-led AI safety infrastructure, influencing similar institutes in other countries
Cited by 27 pages
Cached Content Preview
HTTP 200 · Fetched Mar 15, 2026 · 4 KB
[Read the Frontier AI Trends Report](https://www.aisi.gov.uk/frontier-ai-trends-report)
## Rigorous AI research to enable advanced AI governance
[Join us](https://www.aisi.gov.uk/careers) [View our work](https://www.aisi.gov.uk/blog)
### Governments have a critical role to play in ensuring advanced AI is safe, secure and beneficial.
The AI Security Institute is the first state-backed organisation dedicated to advancing this goal.
We are conducting research and building infrastructure to understand the capabilities and impacts of advanced AI and to develop and test risk mitigations.
We are also working with the wider research community, AI developers and other governments to affect how AI is developed and to shape global policymaking on this issue.

## Featured work
[View all](https://www.aisi.gov.uk/blog)
[**AISI's first Frontier AI Trends Report**](https://www.aisi.gov.uk/frontier-ai-trends-report)
Our first public, evidence‑based assessment of how the world’s most advanced AI systems are evolving, bringing together two years of AISI's frontier model testing.
[Read the full report](https://www.aisi.gov.uk/frontier-ai-trends-report)
[**How do AI models persuade? Exploring the levers of AI-enabled persuasion through large-scale experiments**](https://www.aisi.gov.uk/blog/how-do-ai-models-persuade-exploring-the-levers-of-ai-enabled-persuasion-through-large-scale-experiments)
A deep dive into AISI’s study of the persuasive capabilities of conversational AI, published today in Science.
[Read more](https://www.aisi.gov.uk/blog/how-do-ai-models-persuade-exploring-the-levers-of-ai-enabled-persuasion-through-large-scale-experiments)
[**Deepening our partnership with Google DeepMind**](https://www.aisi.gov.uk/blog/deepening-our-partnership-with-google-deepmind)
Expanding our collaboration with a new research MOU.
[Read more](https://www.aisi.gov.uk/blog/deepening-our-partnership-with-google-deepmind)
[View our work](https://www.aisi.gov.uk/blog)
## Our mission is to equip governments with a scientific understanding of the risks posed by advanced AI.
### Technical research
Monitoring the fast-moving landscape of AI development
Evaluating the risks AI poses to national security and public safety
Advancing solutions like safeguards, alignment, and control
### Global impact
Working with AI developers to ensure responsible development
Informing policymakers abou
... (truncated, 4 KB total)
Resource ID: fdf68a8f30f57dee | Stable ID: MDhiMzZiZj