Longterm Wiki

UK AI Safety Institute Wikipedia

reference

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

Useful background reference for understanding the UK's institutional AI safety efforts; relevant to discussions of AI governance, evaluation frameworks, and international policy coordination.

Metadata

Importance: 52/100 | wiki page | reference

Summary

Wikipedia article covering the UK AI Safety Institute (AISI), a government body established in 2023 to advance AI safety research and evaluation. It outlines the institute's mission, structure, and key activities such as frontier model evaluations, as well as its role in international AI safety coordination. The article serves as a reference point for understanding the UK's institutional approach to governing advanced AI.

Key Points

  • AISI was established by the UK government in November 2023, announced at the Bletchley Park AI Safety Summit.
  • Its primary mandate includes evaluating frontier AI models for safety risks before and after deployment.
  • The institute conducts red-teaming and safety testing in collaboration with leading AI developers like OpenAI and Anthropic.
  • AISI plays a role in international coordination on AI safety, including partnerships with the US AI Safety Institute.
  • It represents a significant example of government-led AI safety infrastructure and policy implementation.

Cited by 5 pages

Cached Content Preview

HTTP 200 | Fetched Mar 20, 2026 | 30 KB

From Wikipedia, the free encyclopedia



An **artificial intelligence safety institute[\[1\]](https://en.wikipedia.org/wiki/Artificial_intelligence_safety_institute#cite_note-:1-1)** is a type of state-backed organization aiming to evaluate and ensure the safety of advanced [artificial intelligence](https://en.wikipedia.org/wiki/Artificial_intelligence "Artificial intelligence") (AI) models, also called [frontier AI models](https://en.wikipedia.org/wiki/Frontier_model "Frontier model").[\[2\]](https://en.wikipedia.org/wiki/Artificial_intelligence_safety_institute#cite_note-2)

[AI safety](https://en.wikipedia.org/wiki/AI_safety "AI safety") gained prominence in 2023, notably with [public declarations](https://en.wikipedia.org/wiki/Statement_on_AI_risk_of_extinction "Statement on AI risk of extinction") about potential [existential risks from AI](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence "Existential risk from artificial general intelligence"). During the [AI Safety Summit](https://en.wikipedia.org/wiki/AI_Safety_Summit_2023 "AI Safety Summit 2023") in November 2023, the United Kingdom and the United States both created their own AISI. During the [AI Seoul Summit](https://en.wikipedia.org/wiki/AI_Seoul_Summit_2024 "AI Seoul Summit 2024") in May 2024, international leaders agreed to form a network of AI Safety Institutes, comprising institutes from the UK, the US, Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada and the European Union.[\[1\]](https://en.wikipedia.org/wiki/Artificial_intelligence_safety_institute#cite_note-:1-1) In 2025, the UK's AI Safety Institute was renamed the "AI Security Institute", and its US counterpart became the Center for AI Standards and Innovation (CAISI).

## Timeline


In 2023, [Rishi Sunak](https://en.wikipedia.org/wiki/Rishi_Sunak "Rishi Sunak"), the Prime Minister of the United Kingdom, expressed his intention to "make the UK not just the intellectual home but the geographical home of global AI safety regulation" and unveiled plans for an AI Safety Summit.[\[3\]](https://en.wikipedia.org/wiki/Artificial_intelligence_safety_institute#cite_note-3) He emphasized the need for independent safety evaluations, stating that AI companies cannot "mark their own homework".[\[4\]](https://en.wikipedia.org/wiki/Artificial_intelligence_safety_institute#cite_note-4) During the summit in November 2023, the UK AISI was officially established as an evolution of the _Frontier AI Taskforce_,[\[5\]](https://en.wikipedia.org/wiki/Artificial_intelligence_safety_institute#ci

... (truncated, 30 KB total)
Resource ID: 89860462901f56f7 | Stable ID: NzQ3ZDZjMT