AI Safety Institutes (AISIs)
Government-run institutions dedicated to evaluating frontier AI systems for dangerous capabilities and safety properties. Pioneered by the UK AISI in 2023, with analogues in the US (US AISI), the EU, Japan, and elsewhere. AISIs play a key role in pre-deployment evaluations and in setting responsible scaling policy thresholds.
Related Wiki Pages
International Coordination Mechanisms
International coordination on AI safety involves multilateral treaties, bilateral dialogues, and institutional networks to manage AI risks globally.
International Compute Regimes
Multilateral coordination mechanisms for AI compute governance, exploring pathways from non-binding declarations to comprehensive treaties.
AI Whistleblower Protections
Legal and institutional frameworks for protecting AI researchers and employees who report safety concerns.
Model Registries
Centralized databases of frontier AI models that enable governments to track development, enforce safety requirements, and coordinate international...
AI Lab Safety Culture
This response analyzes interventions to improve safety culture within AI labs.
Quick Facts
- Jurisdiction: International
- Introduced: Nov 2023
- Status: In effect
- Scope: International
Sources
- UK AI Safety Institute (UK Government)
- US AI Safety Institute (NIST)
- Inspect Framework (UK AISI)
- Seoul Declaration on AISI Network (Summit Participants)