Our 2025 Year in Review
Government
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: UK AI Safety Institute
Annual review from the UK's primary government AI safety body, useful for tracking the institutional evolution of national AI safety efforts and the state of frontier model evaluations as of 2025.
Metadata
Summary
The UK AI Security Institute (AISI) reviews its 2025 achievements, including publishing the first Frontier AI Trends Report based on two years of testing over 30 frontier AI systems. Key advances include deepened evaluation suites across cyber, chem-bio, and alignment domains, plus pioneering work on sandbagging detection, self-replication benchmarks, and AI-enabled persuasion research published in Science.
Key Points
- Released first Frontier AI Trends Report providing evidence-based analysis of frontier AI capability growth across cyber, biology, chemistry, and software engineering.
- Tested 30+ frontier AI models with rigorous, reproducible evaluations; developed novel benchmarks for self-replication detection and sandbagging behavior.
- Published large-scale persuasion study (76,000+ participants) in Science, marking AISI's first peer-reviewed publication in a top journal.
- Expanded international partnerships and government preparedness work, reflecting a sharpened remit under the new 'AI Security Institute' name.
- Collaborated directly with AI developers to identify and mitigate critical vulnerabilities before deployment, combining scientific rigor with practical action.
Cited by 7 pages
| Page | Type | Quality |
|---|---|---|
| UK AI Safety Institute | Organization | 52.0 |
| Alignment Evaluations | Approach | 65.0 |
| Capability Elicitation | Approach | 91.0 |
| AI Governance Coordination Technologies | Approach | 91.0 |
| Dangerous Capability Evaluations | Approach | 64.0 |
| Evals-Based Deployment Gates | Approach | 66.0 |
| Third-Party Model Auditing | Approach | 64.0 |
Cached Content Preview
[Read the Frontier AI Trends Report](https://www.aisi.gov.uk/frontier-ai-trends-report)
[Blog](https://www.aisi.gov.uk/blog)
[Organisation](https://www.aisi.gov.uk/category/organization)
# Our 2025 year in review
Adam Beaumont, Director of the UK AI Security Institute, reflects on the year's biggest achievements.
—
Dec 22, 2025
As we close out 2025, I’ve found myself reflecting on what an extraordinary year it has been for the AI Security Institute (AISI). When I joined only a few months ago, I was instantly struck by the unique team assembled here: experts with direct experience at the frontiers of AI, drawn from around the world, who have answered the call to help navigate AI’s challenges safely and securely for decades to come.
This year, that mission sharpened and accelerated in ways that make me incredibly proud of this organisation and the people in it. We’ve shifted to a clearer, more focused remit, reflected in our new name: the UK AI Security Institute. And across technical science, government preparedness, international partnerships, and the tools we’ve put in the hands of the wider ecosystem, AISI has stepped up in a way few would have thought possible two years ago.
Everything we have achieved this year points in one direction: rigorous science, turned into practical action, so the UK can stay safe while realising AI’s enormous potential.
Below are ten achievements that capture the scale and direction of our work this year.
#### **1\. We delivered the UK government’s first Frontier AI Trends Report**
Last week, we released our first [Frontier AI Trends Report](https://www.aisi.gov.uk/frontier-ai-trends-report), providing a clear, evidence‑based picture of how frontier AI capabilities are evolving, based on two years of AISI’s testing.
This is a major milestone for the UK and for AISI. For the first time, we’ve put hard numbers behind what the most advanced AI systems can actually do. The trends we highlight – rapid capability growth across cyber, biology, chemistry and software engineering, alongside improving safeguards – underline why having this capacity inside the UK government matters. It gives the public, policymakers and our partners a clear view of AI progress that is grounded in data rather than speculation.
#### **2\. We tested more frontier AI systems than ever before**
Our technical team has now tested more than 30 of the world’s most advanced models. Developers continue to work with us because our evaluations are rigorous, reproducible and grounded in real‑world risk. This year alo
... (truncated, 10 KB total)