Longterm Wiki

Introducing the UK AI Safety Institute

government

Credibility Rating

4/5
High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: UK Government

The founding document of the UK AI Safety Institute, a landmark in state-level AI governance, establishing institutional framing and mission priorities that influenced subsequent international AI safety efforts including the Bletchley Declaration.

Metadata

Importance: 72/100 · organizational report · primary source

Summary

The UK government's foundational document introducing the AI Safety Institute (AISI), the first state-backed organization dedicated to advanced AI safety for the public interest. It outlines AISI's mission to minimize surprise from rapid AI advances and to develop sociotechnical infrastructure for understanding and mitigating AI risks. The document was presented to Parliament in November 2023 by the Secretary of State for Science, Innovation and Technology.

Key Points

  • AISI is the first state-backed organization focused specifically on advanced AI safety for the public interest, established under the Sunak government in 2023.
  • Mission framed around 'minimizing surprise' from rapid AI advances, developing sociotechnical infrastructure for understanding and mitigating AI risks.
  • Foreword acknowledges dual-use nature of AI: potential for scientific breakthroughs vs. risks of power concentration, security threats, and labor disruption.
  • Explicitly rejects technological determinism: 'AI is not a natural phenomenon happening to us' but a human creation that can be proactively shaped.
  • Invokes I.J. Good's call to take speculative AI risks seriously, signaling openness to frontier and long-term safety concerns.

Cited by 2 pages

Page                          | Type         | Quality
UK AI Safety Institute        | Organization | 52.0
AI Safety Institutes (AISIs)  | Policy       | 69.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 34 KB
Introducing the AI Safety Institute - GOV.UK

Department for Science, Innovation & Technology
AI Safety Institute

This was published under the 2022 to 2024 Sunak Conservative government

 © Crown copyright 2024

 
 This publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gov.uk .

 
 Where we have identified any third party copyright information you will need to obtain permission from the copyright holders concerned.

 
This publication is available at https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute

Shutterstock (SkillUp/pingebat)

 Presented to Parliament by the Secretary of State for Science, Innovation and Technology by Command of His Majesty.

 November 2023

 CP 960

 E03012924

 ISBN: 978-1-5286-4538-6

 

 Ministerial foreword

 

The release of ChatGPT was a Sputnik moment for humanity – we were surprised by rapid and unexpected progress in a technology of our own creation. With accelerating investment into and public adoption of advanced AI, these systems are becoming more powerful and consequential to our lives.

 These systems could free people everywhere from tedious routine work and amplify our creative abilities. But they could also change our future labour markets and economy more quickly than any other technological advance in history. They could help our scientists unlock bold new discoveries, opening the door to a world without cancer and with access to near-limitless clean energy. But they could also further concentrate unaccountable power into the hands of a few, or be maliciously used to undermine societal trust, erode public safety, or threaten international security.

 Some of these risks already manifest as harms to people today and are exacerbated by advances at the frontier of 

... (truncated, 34 KB total)
Resource ID: 587a6715a0cb4099 | Stable ID: MDZjYjc5MD