Seoul Statement of Intent toward International Cooperation on AI Safety Science
Government
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: UK Government
This is a key multilateral government document establishing the political and institutional framework for international AI safety science cooperation, relevant to understanding how national AI Safety Institutes are being networked globally.
Metadata
Summary
The Seoul Statement of Intent, signed by 11 countries and the EU at the May 2024 AI Seoul Summit, formalizes multilateral commitment to coordinated AI safety science cooperation. It builds on the Bletchley Park Summit by pledging to leverage national AI Safety Institutes, share scientific assessments, and develop interoperable technical methodologies for AI risk evaluation.
Key Points
- Signed by 11 countries and the EU at the AI Seoul Summit on 21 May 2024, continuing momentum from the 2023 Bletchley Park AI Safety Summit.
- Commits signatories to coordinated, transparent, and reciprocal international collaboration on AI safety research and evidence generation.
- Emphasizes leveraging national AI Safety Institutes as key infrastructure for shared scientific assessments and safety evaluations.
- Aims to foster interoperability in technical methodologies to enable consistent, comparable AI safety testing across jurisdictions.
- Positions international safety science cooperation as foundational to trustworthy AI innovation and evidence-based policy globally.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Safety Institutes (AISIs) | Policy | 69.0 |
| Seoul Declaration on AI Safety | Policy | 60.0 |
Cached Content Preview
Seoul Statement of Intent toward International Cooperation on AI Safety Science, AI Seoul Summit 2024 (Annex) - GOV.UK
Department for Science, Innovation & Technology
">
This was published under the 2022 to 2024 Sunak Conservative government
© Crown copyright 2024
This publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email psi@nationalarchives.gov.uk.
Where we have identified any third party copyright information you will need to obtain permission from the copyright holders concerned.
This publication is available at https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024/seoul-statement-of-intent-toward-international-cooperation-on-ai-safety-science-ai-seoul-summit-2024-annex
1. Gathered at the AI Seoul Summit on 21 May 2024, and following on from the AI Safety Summit in Bletchley Park on 2 November 2023 and acknowledging the Safety Testing Chair’s Statement of Session Outcomes from the Bletchley Leaders’ Session, world leaders representing Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, the Republic of Singapore, the United Kingdom, and the United States of America affirm the importance of international coordination and collaboration, based in openness, transparency, and reciprocity, to advance the science of AI safety. We affirm that safety is a key element in furtherance of responsible AI innovation.
2. We commend the collective work to create or expand public and/or government-backed institutions, including AI Safety Institutes, that facilitate AI safety research, testing, and/or developing guidance to advance AI safety for commercially and publicly available AI systems.
2.1. We acknowledge the need for a reliable, interdisciplinary, and reproducible body of evidence to inform policy efforts related to A
... (truncated, 6 KB total)