UK AI Safety Institute
government
Credibility Rating: 4/5 (High)
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: UK Government
Official UK government document describing AISI's founding mandate and evaluation approach; a key reference for understanding how state-backed AI safety evaluation infrastructure is being built at the frontier model level.
Metadata
Importance: 68/100 | guidance document | primary source
Summary
This document outlines the UK AI Safety Institute's (AISI) mission, structure, and evaluation methodology for advanced AI systems. Established in November 2023, AISI focuses on pre- and post-deployment capability assessments, foundational safety research, and international information sharing to support AI governance.
Key Points
- AISI is the world's first state-backed organization dedicated to advanced AI safety, launched at the AI Safety Summit at Bletchley Park in November 2023.
- Core functions include conducting capability evaluations of advanced AI models before and after deployment, with a focus on identifying potentially harmful capabilities.
- AISI has assembled 25+ researchers with 165+ combined years of experience and partnered with 22 organizations for government-led AI evaluations.
- The institute facilitates information exchange between policymakers, international partners, private companies, academia, and civil society.
- AISI's evaluations framework addresses both highly capable general-purpose AI systems and narrow AI models that could pose societal risks.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Output Filtering | Approach | 63.0 |
| Emergent Capabilities | Risk | 61.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 15, 2026 | 27 KB
AI Safety Institute approach to evaluations - GOV.UK
Department for Science, Innovation & Technology
AI Safety Institute
© Crown copyright 2024
This publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gov.uk.
Where we have identified any third party copyright information you will need to obtain permission from the copyright holders concerned.
This publication is available at https://www.gov.uk/government/publications/ai-safety-institute-approach-to-evaluations/ai-safety-institute-approach-to-evaluations
AI Safety Institute: why it was established
The AI Safety Institute (AISI) is the world’s first state-backed organisation focused on advanced AI safety for the public benefit, and it works towards this by bringing together world-class experts to understand the risks of advanced AI and enable its governance. It is part of the UK’s Department for Science, Innovation and Technology (DSIT).
AISI was launched at the AI Safety Summit in November 2023 by the Secretary of State for DSIT, Michelle Donelan, and the Prime Minister, Rishi Sunak. Since this Summit, we have built up a world-class team of two dozen researchers with over 165 years of combined experience, and we have partnered with 22 organisations to enable government-led evaluations of advanced AI systems.
Following the AI Safety Summit, AISI established three core functions:
1. Develop and conduct evaluations on advanced AI systems. AISI will assess potential risks of new models before and after they are deployed, including by evaluating for potentially harmful capabilities.
2. Drive foundational AI safety research. We will launch research projects in foundational AI safety to support new forms of governance and enable fundamentally safer AI development, both internally and by supporting world-class external researchers.
3. Facilitate informatio
... (truncated, 27 KB total)
Resource ID: 533b576199ec323d | Stable ID: NjkxMjI5Yz