Seoul Frontier AI Safety Commitments
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: UK Government
Official UK government publication of voluntary industry commitments from the Seoul AI Summit (May 2024); a key international governance milestone setting expectations for frontier AI developers ahead of the Paris AI Summit.
Metadata
Summary
At the 2024 Seoul AI Summit, the UK and South Korean governments announced voluntary safety commitments signed by 16 major AI organizations (later expanded to 20), including OpenAI, Google, Meta, Microsoft, and Anthropic. Signatories pledged to assess risks across the AI lifecycle, conduct red-teaming for severe threats, invest in cybersecurity, enable AI-content provenance, and publish safety frameworks before the France AI Summit. These commitments represent a landmark multilateral industry pledge on frontier AI safety practices.
Key Points
- 20 major AI organizations (including OpenAI, Google, Meta, Microsoft, Anthropic, xAI, NVIDIA) signed voluntary frontier AI safety commitments at the Seoul 2024 summit.
- Commitments include risk assessment across the AI lifecycle, red-teaming for severe/novel threats, cybersecurity safeguards for model weights, and transparency on capabilities/limitations.
- Signatories must publish safety frameworks focused on severe risks before the upcoming France AI Summit, with approaches expected to evolve as the science advances.
- Organizations pledged to develop mechanisms helping users identify AI-generated audio/visual content and to share information on safety issues and vulnerabilities.
- The document represents a non-binding but high-profile multilateral industry governance milestone, building on the Bletchley Declaration from the 2023 UK AI Safety Summit.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Evals-Based Deployment Gates | Approach | 66.0 |
Cached Content Preview
Frontier AI Safety Commitments, AI Seoul Summit 2024 - GOV.UK
Department for Science, Innovation & Technology
© Crown copyright 2025
This publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gov.uk.
Where we have identified any third party copyright information you will need to obtain permission from the copyright holders concerned.
This publication is available at https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024
The UK and Republic of Korea governments announced that the following organisations have agreed to the Frontier AI Safety Commitments:
Amazon
Anthropic
Cohere
Google
G42
IBM
Inflection AI
Meta
Microsoft
Mistral AI
Naver
OpenAI
Samsung Electronics
Technology Innovation Institute
xAI
Zhipu.ai
The following organisations have been added to the existing list:
Magic
Minimax
01.ai
NVIDIA
The above organisations, in furtherance of safe and trustworthy AI , undertake to develop and deploy their frontier AI models and systems [footnote 1] responsibly, in accordance with the following voluntary commitments, and to demonstrate how they have achieved this by publishing a safety framework focused on severe risks by the upcoming AI Summit in France.
Given the evolving state of the science in this area, the undersigned organisations’ approaches (as detailed in paragraphs I-VIII) to meeting Outcomes 1, 2 and 3 may evolve in the future. In such instances, organisations will provide transparency on this, including their reasons, through public updates.
The above organisations also affirm their commitment to implement current best practices related to frontier AI safety, including: internal and external red-teaming of frontier AI models and systems for severe and novel threats; to work toward information sh
... (truncated, 9 KB total)