DeepMind: Deepening AI Safety Research with UK AISI
Credibility Rating: 4/5 (High)
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Google DeepMind
Represents an example of a leading AI lab formalizing safety collaboration with a national government body, relevant to discussions of governance structures and evaluation standards for frontier AI models.
Metadata
Importance: 52/100 · blog post · news
Summary
DeepMind announces an expanded collaboration with the UK AI Security Institute (AISI) to advance AI safety research, focusing on evaluations, red-teaming, and safety testing of frontier AI models. The partnership aims to develop shared methodologies and tools for assessing risks from advanced AI systems.
Key Points
- DeepMind is deepening its formal partnership with the UK AI Security Institute to jointly advance frontier AI safety research.
- The collaboration focuses on developing rigorous evaluation frameworks and red-teaming approaches for advanced AI systems.
- The partnership supports government efforts to build technical capacity for independent AI safety assessments.
- Shared research aims to improve understanding of potential risks from increasingly capable AI models.
- It reflects a broader industry trend of AI labs engaging with national safety institutes to establish evaluation standards.
Cited by 3 pages
| Page | Type | Quality |
|---|---|---|
| Apollo Research | Organization | 58.0 |
| International AI Safety Summit Series | Event | 63.0 |
| AI Safety Cases | Approach | 91.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 8 KB
December 11, 2025
Responsibility & Safety
# Deepening our partnership with the UK AI Security Institute
William Isaac and Owen Larter
Today, we're announcing an expanded partnership with the [UK AI Security Institute (AISI)](https://www.aisi.gov.uk/) through a new Memorandum of Understanding focused on foundational security and safety research, to help ensure artificial intelligence is developed safely and benefits everyone.
The research partnership with AISI is an important part of our [broader collaboration](https://deepmind.google/blog/strengthening-our-partnership-with-the-uk-government-to-support-prosperity-and-security-in-the-ai-era) with the UK government on accelerating safe and beneficial AI progress.
## Building on a foundation of collaboration
AI holds immense potential to benefit humanity by helping treat disease, accelerate scientific discovery, create economic prosperity and tackle climate change. For these benefits to be realised, we must put safety and responsibility at the heart of development. Evaluating our models against a broad spectrum of potential risks remains a critical part of our safety strategy, and external partnerships are an important element of this work.
This is why we have partnered with the UK AISI since its inception in November 2023 to test our most capable models. We are deeply committed to the UK AISI’s [goal](https://www.aisi.gov.uk/about) to equip governments, industry and wider society with a scientific understanding of the potential risks posed by advanced AI as well as potential solutions and mitigations.
We are actively working with AISI to build more robust evaluations for AI models, and our teams have collaborated on safety research to move the field forward, including recent work on [Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety](https://tomekkorbak.com/cot-monitorability-is-a-fragile-opportunity/cot_monitoring.pdf). Building on this success, today we are broadening our partnership from testing to include wider, more foundational, research in a variety of areas.
## What the partnership involves
Under this new research partnership, we're broadening our collaboration to include:
- Sharing access to our proprietary models, data and ideas to accelerate research progress
- Joint reports and publications sharing findings with the research community
- More collaborative security and safety research combining our teams' expertise
- Technical discussions to
... (truncated, 8 KB total)
Resource ID: d648a6e2afc00d15 | Stable ID: YWIwZjE4Yz