Longterm Wiki

The Global Landscape of AI Safety Institutes


A useful reference for understanding how national AI Safety Institutes are structured, what they do, and how the international landscape of AI governance institutions is evolving post-Bletchley.

Metadata

Importance: 62/100 · blog post · analysis

Summary

This article provides a comprehensive overview of AI Safety Institutes (AISIs) as a novel global governance model, cataloguing existing institutes worldwide and analyzing their core functions: evaluating frontier AI systems, conducting safety research, and facilitating stakeholder information exchange. It examines the historical development from the UK's 2023 Bletchley Park summit through a growing second wave of national institutes, and questions the recent shift in some jurisdictions from 'safety' to 'security' framing.

Key Points

  • AISIs are state-backed entities focused on evaluating frontier AI models, conducting safety research, and facilitating information exchange between government, industry, and academia.
  • The AISI model originated with the UK's Foundational Model Taskforce (April 2023) and was formalized through the Bletchley Declaration at the November 2023 UK AI Safety Summit.
  • AISIs generally lack direct regulatory power and are not comprehensive AI authorities; their core functions are research, international cooperation, and standards development.
  • A second wave of AISIs is emerging globally following initial institutes in the UK, US, and Japan, raising questions about institutional design and varying national approaches.
  • The 2025 Paris AI Action Summit signaled a potential shift away from safety-focused governance, raising concerns about the continued relevance of the AISI model.

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 43 KB
# The Global Landscape of AI Safety Institutes

Mar 14

Written By [Rebekah Tweed](https://alltechishuman.org/all-tech-is-human-blog?author=613f7c10350cf566783a9339)

**Authors: Juhi Kore, Ebani Dhawan**

_This article provides a snapshot of key insights from a forthcoming comprehensive report._

## I. Introduction

The recent Paris AI Action Summit, held in February 2025, was a pivotal moment in the international governance of artificial intelligence (AI). This series of summits began with the groundbreaking UK AI Safety Summit at Bletchley Park in November 2023, where the concept of AI Safety Institutes (AISIs) was first introduced and formalised through the Bletchley Declaration, reflecting the global community’s sustained commitment to address AI safety.

As the summit series evolved, France and other international stakeholders appeared to move the series' focus away from safety. This shift raises important questions: **is the safety-focused approach of AISIs still relevant, and if so, why does it continue to matter amid the rapidly evolving AI landscape?**

This report aims to provide a comprehensive examination of AI Safety Institutes as a novel governance model. We will **explore what these institutes do, present a catalogue of current AISIs worldwide, analyse the challenges and opportunities they face, and examine the recent shift from 'safety' to 'security' in some jurisdictions**. As the first global institutional model for AI governance, AISIs represent a significant innovation in how nations approach the complex challenges posed by increasingly capable AI systems.

## II. What do AISIs do?

### What is an AISI?

An AI Safety Institute is a state-backed specialised entity with three primary objectives: (1) **evaluating AI systems, particularly frontier and advanced models**; (2) **conducting safety research to advance the science of AI risk assessment**; and (3) **facilitating information exchange among diverse stakeholders including governments, industry, and academia** (Araujo et al., 2024). These institutes represent the first global-scale institutional model specifically focused on AI governance, providing technical expertise to inform policy decisions while remaining independent from direct regulatory power. It is important to note that AISIs are not catch-all government agencies for all AI-related activity.

First-wave AISIs share several defining characteristics: they are safety-focused, government-affiliated, emphasise technical expertise, and operate as specialised organisations with mission-driven mandates. These institutes typically maintain a startup-like culture, bringing together experts from diverse fields including computer science, policy, and various domains of AI safety (Hobbhahn, 2023).

It is equally important to understand what AISIs are not. They **generally lack direct regulatory power (although this varies by jurisdiction) and are not designed to be comprehensive authorities on all AI-related activities within government**.

... (truncated, 43 KB total)