Longterm Wiki

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: TechPolicy.Press

Useful policy-focused analysis for those tracking how national AI safety institutions are being shaped by political and economic pressures, relevant to understanding the institutional landscape for AI governance and safety evaluation globally.

Metadata

Importance: 52/100 · opinion piece · analysis

Summary

This TechPolicy.Press analysis examines the evolving role of AI Safety Institutes (AISIs) across multiple countries, exploring how they balance safety evaluation mandates with broader innovation and governance objectives. It assesses how these institutes inform national and international AI policy frameworks and their potential influence on global AI governance norms.

Key Points

  • AI Safety Institutes have emerged in multiple countries as key institutional nodes for evaluating frontier AI risks and informing regulatory frameworks.
  • There is growing tension between AISIs' original safety-focused mandates and pressure to also support economic competitiveness and innovation goals.
  • AISIs play an important role in international coordination on AI governance standards, including through bilateral agreements and multilateral forums.
  • The institutional design and independence of AISIs significantly affects their credibility and effectiveness as safety evaluators.
  • Policy choices about AISI scope and authority will shape whether safety or commercial considerations dominate national AI governance approaches.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| AI Safety Institutes (AISIs) | Policy | 69.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 8 KB
# From Safety to Innovation: How AI Safety Institutes Inform AI Governance

Prithvi Iyer / Oct 25, 2024

![ ](https://cdn.sanity.io/images/3tzzh18d/production/f2e85e33d472f843a9734aa648a7bc9150ee98c1-1200x675.png)

Leaders gathered at the AI Safety Summit hosted by the UK government in Bletchley Park, November 2023. [Source](https://bletchleypark.org.uk/bletchley-park-makes-history-again-as-host-of-the-worlds-first-ai-safety-summit/)

Governments around the world have been trying to keep pace with rapid advancements in artificial intelligence. In this context, a number of governments have established AI safety institutes (AISIs). Seven jurisdictions (Japan, the UK, the US, the EU, Singapore, France, and Canada) have some version of an AI Safety Institute. Most recently, India [announced](https://www.techpolicy.press/indias-ai-safety-institute-key-considerations-for-a-critical-initiative/) public consultations to set up its own institute, indicating that this model is also being adopted by countries in the Global Majority.

A [new report](https://arxiv.org/pdf/2410.09219) from Renan Araujo, Kristina Fort, and Oliver Guest compares these institutes across jurisdictions, providing insights into their characteristics, functions, and limitations in shaping global AI governance. AISIs are the “first institutional models of AI governance that different governments have adopted in a similarly shaped way,” according to the report’s authors. Given these similarities, it is worth exploring how AISIs work in practice and the challenges they may face in the years ahead.

The report focuses specifically on jurisdictions that explicitly use the term “AI Safety Institute” and identifies three countries (the United Kingdom, the United States, and Japan) as constituting the “first wave” of AISIs. The [UK](https://www.aisi.gov.uk/) and the [US](https://www.nist.gov/aisi) were the first to establish AISIs, in November 2023, and [Japan](https://aisi.go.jp/) followed suit by setting up its AISI in February 2024.

The report’s authors left the [EU AI Office](https://artificialintelligenceact.eu/the-ai-office-summary/) out of this “first wave” because it is a distinct institutional development that took a more expansive approach than its counterparts. With the enactment of the [EU AI Act](https://artificialintelligenceact.eu/), the EU AI Office regulates the entire AI market in the EU, giving it a "much broader and stronger mandate than the first wave of AISIs," which are largely focused on developing safety evaluations and technical standards.

According to the report, this “first wave” of AISIs has three common characteristics:

- **Safety-focus**: The first wave of AISIs was informed by the [Bletchley AI Safety Summit](https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023), which declared that “AI should be desig

... (truncated, 8 KB total)