Longterm Wiki

The AI Safety Institute International Network: Next Steps


Credibility Rating

4/5 (High)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: CSIS

Published by CSIS, this policy analysis is relevant for understanding international efforts to institutionalize AI safety governance through coordinated national safety institutes, particularly following the UK Bletchley Declaration.

Metadata

Importance: 58/100 · organizational report · analysis

Summary

This CSIS analysis examines the international network of AI Safety Institutes established across multiple countries and provides recommendations for strengthening their coordination, scope, and effectiveness. It addresses how these institutes can better collaborate on technical safety evaluations and policy alignment to address frontier AI risks.

Key Points

  • Reviews the formation and current state of the international AI Safety Institute network launched in part through the Bletchley AI Safety Summit process
  • Recommends improved coordination mechanisms between national AI Safety Institutes to avoid duplicative efforts and share evaluation methodologies
  • Argues for clearer mandates and resource commitments from participating governments to make the network more effective
  • Highlights the need for harmonized standards for AI model evaluations across jurisdictions to facilitate consistent safety assessments
  • Emphasizes the geopolitical dimension of AI safety governance and the challenge of building trust among nations with competing AI interests

Cited by 6 pages

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 45 KB
The AI Safety Institute International Network: Next Steps and Recommendations 
 Photo: Yoichiro Chino via Getty Images
Report by Gregory C. Allen and Georgia Adamson

Published October 30, 2024
1. OVERVIEW

On November 21 and 22, 2024, technical artificial intelligence (AI) experts from nine countries and the European Union will meet for the first time in San Francisco. The agenda: starting the next phase of international cooperation on AI safety science through a network of AI safety institutes (AISIs). The United States, United Kingdom, European Union, Japan, Singapore, South Korea, Canada, France, Kenya, and Australia make up the initial members of the network, which was first launched by U.S. secretary of commerce Gina Raimondo at the May 2024 AI Seoul Summit. At the time of the launch, Italy and Germany were also potential members of the network, as signatories to the Seoul Statement of Intent toward International Cooperation on AI Safety Science, or Seoul Statement, the network's founding document. However, a September announcement by Raimondo and U.S. secretary of state Antony Blinken confirmed that Kenya would instead be the final member of the AISI International Network at this stage.

 According to the Seoul Statement, the international network will serve to “accelerate the advancement of the science of AI safety” at a global level by promoting “complementarity and interoperability” between institutes and fostering a “common international understanding” of AI safety approaches. While the statement does not define specific goals or mechanisms for AISI collaboration, it suggests that they “may include” coordinating research, sharing resources and relevant information, developing best practices, and exchanging or codeveloping AI model evaluations. Now, in the months following the AI Seoul Summit, AISI network members must begin to articulate th

... (truncated, 45 KB total)
Resource ID: 0572f91896f52377 | Stable ID: NzEyZGNhZW