Government AI Safety Organizations (Overview)
Overview
Governments have begun establishing dedicated institutions to address AI safety risks, with AI Safety Institutes (AISIs) emerging as a key organizational model since 2023. These bodies operate with public mandates and budgets, distinguishing them from the largely philanthropically funded nonprofit landscape, though they face constraints of their own, including political cycles and bureaucratic processes.
National AI Safety Institutes

| Organization | Country | Founded | Focus | Budget |
|---|---|---|---|---|
| UK AI Safety Institute | UK | 2023 | Frontier model evaluation, safety research | ≈$65M |
| US AI Safety Institute | US | 2024 | Standards development, model evaluation | ≈$47.7M requested |
| NIST AI | US | Ongoing | AI Risk Management Framework, standards | Part of NIST budget |
The UK AISI was the first national AI Safety Institute, established following the Bletchley Park AI Safety Summit in November 2023. The US AISI was established within NIST in 2024. Both conduct pre-deployment evaluations of frontier AI models.
Intergovernmental Bodies
Global Partnership on Artificial Intelligence (GPAI): Multilateral initiative with 29 member countries working on responsible AI development and governance
International Network of AI Safety Institutes
As of early 2026, 11+ countries have established or announced AI Safety Institutes, forming a growing network for international safety coordination. Members share evaluation methodologies, coordinate on frontier model assessments, and develop common safety benchmarks. Key members include the UK, US, Japan, Canada, France, and South Korea, with India joining in 2026.
Key Dynamics
Political vulnerability: Government AI safety bodies are subject to changes in political leadership and priorities. The US AISI's mandate and funding depend on congressional and executive support, which can shift between administrations.
Relationship with labs: AISIs must balance cooperative relationships with frontier labs (needed for access to models) against independent oversight mandates. The UK AISI has voluntary agreements with major labs for pre-deployment access.
Complementarity with nonprofits: Government bodies focus on standards, regulation, and institutional evaluation capacity, while nonprofit safety organizations (like METR and Apollo Research) conduct more specialized technical research. There is increasing collaboration between the two.
Think tank ecosystem: Government AI safety bodies are informed by a dense ecosystem of policy think tanks and research centers. Organizations like CSET, Brookings, RAND, CSIS, and Carnegie produce research that shapes government AI policy, while advocacy organizations like the AI Policy Institute and Americans for Responsible Innovation push for legislative action. See the AI Safety Organizations overview for the full landscape.