Longterm Wiki

AI Safety Institutes (AISIs)


Government-run institutions dedicated to evaluating frontier AI systems for dangerous capabilities and safety properties. Pioneered by the UK AISI in 2023, with analogues in the US (US AISI), the EU, Japan, and elsewhere. They play a key role in pre-deployment evaluations and in setting responsible scaling policy thresholds.

Related Wiki Pages

Top Related Pages

Organizations

US AI Safety Institute
UK AI Safety Institute
METR

Risks

Bioweapons Risk
Cyberweapons Risk

Approaches

AI Governance Coordination Technologies
AI Safety Intervention Portfolio
AI Evaluation

Analysis

AI Safety Intervention Effectiveness Matrix
AI Lab Whistleblower Dynamics Model

Policy

Bletchley Declaration
Singapore Consensus on AI Safety Research Priorities

Concepts

State Capacity and AI Governance
Self-Improvement and Recursive Enhancement

Historical

International AI Safety Summit Series

Other

AI Evaluations
Elizabeth Kelly

Key Debates

AI Governance and Policy
AI Structural Risk Cruxes

Quick Facts

Jurisdiction: International
Introduced: Nov 2023
Status: In effect
Scope: International

Position Summary

Support: 3
Mixed: 1

Sources