Frontier AI: capabilities and risks – discussion paper - GOV.UK
Government
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: UK Government
Published by the UK government in 2023 ahead of the Bletchley Park AI Safety Summit, this paper is a key policy document establishing official framing of frontier AI risks and helped set the agenda for international AI governance discussions.
Metadata
Summary
A UK government discussion paper examining the capabilities and potential risks of frontier AI systems, intended to inform policy discussions ahead of the 2023 AI Safety Summit at Bletchley Park. It outlines the current state of advanced AI development, identifies key risk categories including misuse and loss of control, and frames the policy challenges governments face in governing these systems.
Key Points
- Defines 'frontier AI' as highly capable general-purpose AI models at the cutting edge of development, potentially posing severe risks if misused or misaligned.
- Identifies two primary risk categories: misuse by malicious actors (e.g., bioweapons, cyberattacks) and risks from loss of human control over AI systems.
- Acknowledges significant uncertainty in predicting AI trajectories but argues precautionary governance measures are warranted given the potential severity of harms.
- Frames international coordination as essential, as unilateral national measures are insufficient to address globally deployed AI systems.
- Served as a foundational document for the UK's AI Safety Summit agenda, helping establish shared terminology and risk framing among governments.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Safety Research Allocation Model | Analysis | 65.0 |
Cached Content Preview
Research and analysis
Frontier AI: capabilities and risks – discussion paper
A discussion paper on the capabilities of, and risks from, frontier AI.
From:
Department for Science, Innovation and Technology
Published
25 October 2023
Last updated
28 April 2025
Documents
Capabilities and risks from frontier AI: discussion paper
PDF , 1.98 MB , 45 pages
Frontier AI: capabilities and risks – discussion paper
HTML
Future risks of frontier AI (Annex A)
PDF , 1.08 MB , 44 pages
Future risks of frontier AI (Annex A)
HTML
Safety and security risks of generative artificial intelligence to 2025 (Annex B)
PDF , 1.53 MB , 6 pages
Safety and security risks of generative artificial intelligence to 2025 (Annex B)
HTML
AI 2030 Scenarios Report HTML (Annex C)
HTML
AI 2030 Scenarios Report PDF (Annex C)
PDF , 23.8 MB , 158 pages
Details
This discussion paper was written to inform discussions at the AI Safety Summit 2023 . It does not represent government policy.
A shared understanding of the risks is essential to unlock the enormous benefits of frontier AI. This paper outlines the capabilities, risks, and cross-cutting challenges presented by the technology, pointing particularly to dangers around misuse, social harms, and loss of control.
The paper was reviewed by a panel of external experts, including Turing Award winner Yos
... (truncated, 4 KB total)