Longterm Wiki

OECD: AI Safety Institutes Challenge


Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: OECD

Published by OECD in July 2024, this piece provides a comparative overview of national AI Safety Institutes and the systemic challenges they face, useful for understanding the international policy landscape around AI safety governance.

Metadata

Importance: 58/100 · organizational report · analysis

Summary

This OECD analysis examines the emerging landscape of national AI Safety Institutes (AISIs) established by the US, UK, Japan, Canada, Singapore, and EU, assessing their roles in evaluating AI capabilities and risks. It identifies key challenges these bodies face, including tracking the unpredictable behaviour of AI systems, establishing evaluation standards, conducting safety research, and coordinating internationally. The piece argues that while AISIs represent a significant step toward coordinated global AI safety governance, substantial structural and resource challenges remain.

Key Points

  • Multiple countries (US, UK, Japan, Canada, Singapore) have established dedicated AI Safety Institutes to evaluate AI capabilities, conduct safety research, and share information.
  • AISIs face the challenge of surveying inherently unpredictable AI systems whose capabilities and risks may evolve faster than current evaluation frameworks.
  • Standardized evaluation approaches are lacking; some sensitive evaluations (e.g., national security) can only be conducted by authorized government bodies.
  • The EU AI Office's Safety Unit fulfills similar functions to AISIs but also carries primary regulatory responsibilities, creating a distinct hybrid model.
  • International coordination among AISIs is critical, as AI risks and capabilities are inherently cross-border and no single country can address them alone.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Safety Institutes (AISIs) | Policy | 69.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 20 KB

# AI Safety Institutes: Can countries meet the challenge?


[Alexandre Variengien](https://oecd.ai/en/community/alexandre-variengien), [Charles Martinet](https://oecd.ai/en/community/charles-martinet)

July 29, 2024 — 7 min read


Governments need to understand what AI models can do to manage the risks and seize the benefits of AI. In recent months, recognising the need to keep up with the unprecedented pace of AI development, the [U.S](https://www.nist.gov/aisi)., [U.K.](https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute), [Japan](https://aisi.go.jp/), [Canada](https://www.canada.ca/en/department-finance/news/2024/04/remarks-by-the-deputy-prime-minister-on-securing-canadas-ai-advantage.html) and [Singapore](https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/factsheets/2024/digital-trust-centre) have established specialised bodies known as AI Safety Institutes (AISIs). They are tasked with evaluating AI systems’ capabilities and risks, conducting fundamental safety research, disseminating their results and facilitating information exchange among stakeholders. Through its [AI Safety Unit](https://ec.europa.eu/commission/presscorner/detail/en/ip_24_2982), the European Union’s AI Office will carry out similar responsibilities in addition to its primary mandate as a regulator. The inception of these bodies marks a significant step towards formulating a coordinated global strategy for ensuring safe and beneficial AI.

![countries of the international network for AI safety](https://wp.oecd.ai/app/uploads/2024/07/image-6-1024x467.png)_These countries are part of the international network for AI safety, with their respective national bodies dedicated to AI safety. France, Italy, Germany, and Australia have yet to announce a dedicated body._

## What roles for AI safety institutes?

**Survey the unpredictability of AI systems.** By nature, AI models are 

... (truncated, 20 KB total)
Resource ID: 5ce5182494b7fbe9 | Stable ID: MGVjNmQ4ZW