Structural Indicators
Comprehensive survey of 10 structural metrics for AI governance capacity, finding: 1-3 year policy lag times, 16-26 point elite-public trust gaps, moderate market concentration (HHI ~2,500), 2,000+ documented incidents but very low near-miss reporting, and weak international coordination. Most metrics are measured annually via established indices (Freedom House, WGI, Edelman) though several key dimensions (coordination success rate, societal resilience) remain conceptual with low data quality.
These metrics assess the structural and meta-level conditions that determine society's ability to navigate AI development safely. Unlike direct capability or safety metrics, these measure the quality of the broader systems—governance institutions, information environments, coordination mechanisms—that mediate AI's societal impact.
Overview
Structural indicators help answer: Is society equipped to handle AI risks? They track whether institutions can make good decisions, whether information environments support informed debate, and whether coordination mechanisms can address collective action problems.
Key distinctions:
- Direct metrics (capabilities, safety research) → What AI can do and how safe it is
- Structural metrics (these) → Whether society can govern AI effectively
Many of these metrics are conceptual or partially measured—they represent important dimensions we should track, even if comprehensive data doesn't yet exist.
1. Information Environment Quality
Measured Indicators
Freedom House "Freedom on the Net" Score
- Latest (2025): United States remains "Free" but with declining scores
- Concerns about misinformation ahead of the 2024 elections contributed to an "unreliable information environment"
- The United Kingdom saw a score decline after false information contributed to riots in summer 2024
- Interpretation: Score based on internet freedom, content controls, and users' rights
- Source: Freedom House, Freedom on the Net 2025
RSF World Press Freedom Index
- 2025 global average: The Index's economic indicator is at an "unprecedented, critical low"
- Global press freedom is now classified as a "difficult situation" for the first time in the Index's history
- Disinformation prevalence: 138/180 countries report political actors involved in disinformation campaigns
- 31 countries report "systematic" disinformation involvement
- 2024 US-specific: Press freedom violations increased to 49 arrests/charges and 80 assaults on journalists (vs 15 and 45 in 2023)
- Source: Reporters Without Borders, World Press Freedom Index 2025
Trust in Institutions (Edelman Trust Barometer 2024)
- Business: 63% trust (only trusted institution)
- Government: Low trust, 42% trust government leaders
- Media: Actively distrusted
- Mass/elite divide on AI: 16-point gap in the US (43% of high-income vs 27% of low-income respondents trust AI)
- Innovation management: 2:1 margin believe innovation is poorly managed
- Source: 2024 Edelman Trust Barometer
Conceptual Indicators (Limited Direct Measurement)
AI-Specific Misinformation Prevalence
- Conceptual metric: % of AI-related claims in public discourse that are false or misleading
- Proxy data: 62% of voters primarily concerned (vs 21% excited) about AI (AIPI polling)
- Elite/public gap: "Large disconnect between elite discourse and what American public wants" - AI Policy Institute
- Challenge: No systematic tracking of AI misinformation rates
- Source: AI Policy Institute polling
2. Institutional Decision-Making Quality
Measured Proxies
World Bank Worldwide Governance Indicators (WGI)
- Government Effectiveness dimension: Quality of public services, bureaucracy competence, civil service independence, policy credibility
- Scale: -2.5 to +2.5 point estimates (normalized with mean ~0), also reported as 0-100 percentile ranks (an illustrative conversion is sketched after this list)
- Latest: 2024 methodology update covering 214 economies, 1996-2023 data
- Data sources: 35 cross-country sources including household surveys, firm surveys, expert assessments
- Limitation: "Inputs" focused (institutional capacity) rather than "outputs" (decision quality)
- Source: World Bank, Worldwide Governance Indicators 2024
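The World Bank publishes each dimension both as a point estimate (roughly -2.5 to +2.5, derived from an unobserved components model) and as a 0-100 percentile rank. The sketch below is a minimal illustration of the percentile-rank idea only, ranking countries against the others supplied; it is not the official WGI pipeline, and the example scores are made up.

```python
# Illustrative only: express WGI-style point estimates (-2.5 to +2.5) as
# 0-100 percentile ranks by ranking each country against the others given.
# This is a simplification, not the World Bank's actual methodology.
def percentile_ranks(estimates: dict[str, float]) -> dict[str, float]:
    values = sorted(estimates.values())
    n = len(values)
    return {
        country: 100.0 * sum(1 for v in values if v < score) / n
        for country, score in estimates.items()
    }

# Hypothetical scores for three countries:
print(percentile_ranks({"A": 1.4, "B": -0.3, "C": 0.2}))
# approximately {'A': 66.7, 'B': 0.0, 'C': 33.3}
```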
V-Dem Digital Society Index
- Coverage: Measures government internet censorship, social media monitoring, online media fractionalization
- Note: 2024-specific data on the information environment was not retrieved here, but the measurement framework exists
- Source: V-Dem Institute (v-dem.net)
Conceptual Indicators
AI Policy Quality Index
- Conceptual metric: Expert assessment of whether AI policies address actual risks proportionately
- Current status: No standardized index exists
- Proxy: Mixed signals. The EU AI Act has been implemented and a US executive order issued, but critiques of regulatory lag persist
Evidence-Based Policy Rate for AI
- Conceptual metric: % of major AI policy decisions informed by rigorous evidence
- Challenge: Would require systematic policy analysis across jurisdictions
- Current: Anecdotal evidence suggests variable quality
3. Elite vs Public Opinion Divergence on AI
Measured Divergence
Expert vs Public Trust Gap (Pew Research 2024)
- Finding: "Experts are far more positive and enthusiastic about AI than the public"
- Methodology: 5,410 US adults (Aug 2024) vs 1,013 AI experts (Aug-Oct 2024)
- Experts: Identified via authors/presenters at 21 AI conferences in 2023-2024
- Source: Pew Research Center, Public and AI Experts
AI Policy Institute Polling (2024)
- Development pace preference: 72% prefer slowing AI development vs 8% prefer speeding up
- Risk vs excitement: 62% primarily concerned vs 21% primarily excited
- Catastrophic risk belief: 86% believe AI could accidentally cause catastrophic event
- Liability: 73% believe AI companies should be held liable for harm
- Regulation preference: 67% think AI models' power should be restricted
- Elite disconnect quote: "Large disconnect between elite discourse or discourse in labs and what American public wants" - Daniel Colson, AIPI Executive Director
- Source: AI Policy Institute polling
Trust Gap in AI Companies (Edelman 2024)
- Technology sector vs AI innovation: 26-point gap (76% trust tech sector vs 50% trust AI)
- AI company trust decline: from 62% (five years ago) to 54% (2024)
- Rejection willingness: 43% will actively reject AI products if innovation poorly managed
- Source: Edelman Trust Barometer 2024, AI insights
Interpretation
Magnitude: Large and growing gap between expert optimism and public concern
Direction: Public more risk-focused; experts more capability-focused
Policy implication: Democratic deficit if AI development primarily shaped by technologists
4. Time from AI Risk Identification to Policy Response
Measured Cases
EU AI Act Timeline (Response to GPT-class models)
- GPT-3 release: June 2020
- EU AI Act proposal: April 2021 (10 months after GPT-3)
- GPT-4 release: March 2023
- EU AI Act agreement: December 2023 (9 months after GPT-4)
- AI Act signed: June 2024
- Entered force: August 2024
- GPAI provisions applicable: August 2025 (29 months after GPT-4)
- Full applicability: August 2026
- Interpretation: ~2.5 years from GPT-4 to enforceable rules on GPAI models
- Source: EU AI Act Implementation Timeline
US Executive Order on AI
- GPT-4 release: March 2023
- Executive Order 14110: October 30, 2023 (7 months after GPT-4)
- Limitation: Executive order, not legislation; limited enforceability
- Source: Biden Administration AI Executive Order
AI Safety Institutes
- UK AISI announced: November 2023 (Bletchley Park AI Safety Summit)
- US AISI operational: Early 2024
- AISI Network launched: May 2024 (Seoul AI Summit)
- First AISI Network meeting: November 2024 (San Francisco)
- Lag interpretation: ~8-20 months from GPT-4 to safety institute operations
- Source: OECD, AISI International Network
Conceptual Metric
Average Policy Lag Time
- Conceptual metric: Median time from risk becoming evident to enforceable policy
- Challenge: Defining "risk becomes evident" vs "risk exists"
- Current estimate: 1-3 years for major risks, based on the cases above (a worked example follows this list)
- Comparison: Aviation safety regulations often follow major accidents within months
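As a worked illustration of how such a lag could be computed from the cases documented above: the milestone dates below come from this section (month precision), and the choice of GPT-4's release as the moment the risk "becomes evident" is an assumption, not a settled convention.

```python
from datetime import date
from statistics import median

# Illustrative calculation of policy lag from a salient capability release
# (GPT-4, March 2023, used as a proxy for "risk becomes evident") to the
# policy milestones documented in this section. Month precision only.
GPT4_RELEASE = date(2023, 3, 1)

policy_milestones = {
    "US Executive Order 14110": date(2023, 10, 1),
    "EU AI Act political agreement": date(2023, 12, 1),
    "EU AI Act entry into force": date(2024, 8, 1),
    "EU AI Act GPAI provisions applicable": date(2025, 8, 1),
}

def months_between(start: date, end: date) -> int:
    return (end.year - start.year) * 12 + (end.month - start.month)

lags = {name: months_between(GPT4_RELEASE, d) for name, d in policy_milestones.items()}
print(lags)                    # lags of 7, 9, 17, and 29 months
print(median(lags.values()))   # 13.0 months for this particular set of milestones
```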
5. Coordination Failure Rate on AI Governance
Measured Indicators
G7 Hiroshima AI Process Code of Conduct
- Status: Adopted but "provides little guidance" on implementation
- Critique: "Staffed by diplomats who lack depth of in-house technical expertise"
- Implementation gap: Code instructs to "identify, evaluate, mitigate risks" without how-to guidance
- Source: CSIS, G7 Hiroshima AI Process
OECD AI Principles (2019, updated 2024)
- Adherents: 47 countries including EU
- Compliance mechanism: None (non-binding)
- Monitoring: AI Policy Observatory tracks implementation but no enforcement
- Implementation rate: Variable—no systematic tracking of adherence
- Source: OECD AI Principles, 2024 update
International AI Safety Institute Network
- Members (Nov 2024): 10 countries/regions (Australia, Canada, EU, France, Japan, Kenya, Korea, Singapore, UK, US)
- Challenges identified:
- Confidentiality and security concerns
- Legal incompatibilities between national mandates
- Varying technical capacities
- Global South institutes risk becoming "token members"
- Most institutes still hiring/setting priorities as of 2024
- Coordination body: None yet (recommended but not established)
- Success metric: Too early to assess
- Source: AISI Network analysis (Sumaya Nur Adan, 2024)
Conceptual Indicators
Coordination Success Rate
- Conceptual metric: % of identified coordination problems that achieve multilateral solutions
- Current status: Low coordination success on binding agreements (see the tally sketch after this list)
- Examples of failure:
- No binding international compute governance
- No global model registry
- Fragmented incident reporting systems
- Limited cross-border enforcement
- Examples of partial success:
- AISI Network formation
- OECD Principles (soft coordination)
- G7/G20 discussions ongoing
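One way to make this metric quantitative is a simple scored tally of identified coordination problems. The sketch below codes the examples listed above; the success/partial/failure labels and the weight given to partial successes are judgment calls, not an established dataset.

```python
# Hypothetical tally of the coordination problems listed above.
# The status labels and the partial-success weight are illustrative judgments.
coordination_problems = {
    "Binding international compute governance": "failure",
    "Global model registry": "failure",
    "Unified incident reporting": "failure",
    "Cross-border enforcement": "failure",
    "AISI Network formation": "partial",
    "OECD AI Principles": "partial",
    "G7/G20 processes": "partial",
}

def success_rate(problems: dict[str, str], partial_weight: float = 0.5) -> float:
    """Share of coordination problems resolved, with partial successes down-weighted."""
    score = sum(
        1.0 if s == "success" else partial_weight if s == "partial" else 0.0
        for s in problems.values()
    )
    return score / len(problems)

print(f"{success_rate(coordination_problems):.0%}")  # ~21% with these codings
```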
Race-to-the-Bottom Index
- Conceptual metric: Evidence of jurisdictions weakening standards to attract AI companies
- Current: Anecdotal concerns but no systematic measurement
- Source: International Governance of AI (Springer, peer-reviewed)
6. Democratic vs Authoritarian AI Adoption Rates
Measured Data
AI Surveillance Adoption
- China's market dominance: Exports AI surveillance to "nearly twice as many countries as United States"
- Chinese surveillance camera market: Hikvision + Dahua = 34% global market share (2024)
- Global reach: PRC-sourced AI surveillance in 80+ countries (authoritarian and democratic)
- China's domestic deployment: Over half of the world's ~1 billion surveillance cameras are located in China
- Source: Carnegie Endowment, Global Expansion of AI Surveillance
Export Patterns
- China's bias: "Significant bias in exporting to autocratic regimes"
- Huawei "Safe City" agreements (2009-2018): 70%+ involved countries rated "partly free" or "not free" by Freedom House
- Nuance: "China is exporting surveillance tech to liberal democracies as much as targeting authoritarian markets"
- Impact finding: Mature democracies did not experience erosion when importing surveillance AI; weak democracies exhibited backsliding regardless of supplier
- Source: NED, Data-Centric Authoritarianism
Authoritarian Advantage Factors
- China's structural advantages for AI surveillance:
- Lax data privacy laws
- Government involvement in production/research
- Large population for training data
- Societal acceptance of state surveillance
- Strong AI industrial sectors
- Source: AI and Authoritarian Governments
Conceptual Indicator
Democratic vs Authoritarian AI Capability Gap
- Conceptual metric: Relative AI capability development in democracies vs autocracies
- Proxy: US vs China capability race
- US: 40 notable AI models (2024) vs China: 15 models
- US private investment: $109.1B vs China: $9.3B
- But China's DeepSeek, Qwen, and Kimi are "closing the gap on reasoning and coding"
- Interpretation: US maintains edge but China rapidly improving
- Source: State of AI Report 2025
7. Concentration of AI Capability (Herfindahl Index)
Measured Market Concentration
Enterprise LLM Market Share (2024-2025)
- Anthropic: 32% usage share, 40% revenue share
- OpenAI: 25% usage share, 27% revenue share (down from 50% in 2023)
- Google: 20% usage share
- Meta (Llama): 9%
- DeepSeek: 1%
- Approximate HHI: (0.32² + 0.25² + 0.20² + 0.09² + 0.01²) × 10,000 ≈ 2,131 from the named firms alone; roughly 2,100-2,500 depending on how the remaining share is treated (see the sketch after this list)
- Interpretation: "Moderate concentration" (HHI 1,500-2,500); the top three providers control ~77%
- Source: Menlo Ventures, 2025 State of Generative AI in the Enterprise
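The headline HHI figure can be reproduced directly from the usage shares above. In the sketch below, the treatment of the residual ~13% of the market (fully fragmented vs held by a single firm) is an assumption used only to bound the estimate.

```python
# Reproduce the approximate enterprise-LLM HHI from the usage shares above.
shares = {
    "Anthropic": 0.32,
    "OpenAI": 0.25,
    "Google": 0.20,
    "Meta (Llama)": 0.09,
    "DeepSeek": 0.01,
}

hhi_named = sum(s ** 2 for s in shares.values()) * 10_000
print(round(hhi_named))  # ~2131 counting only the named firms

# Upper-bound check: if the residual ~13% share were held by one firm
# (an assumption), the index would still sit in the moderate range.
hhi_upper = (sum(s ** 2 for s in shares.values()) + 0.13 ** 2) * 10_000
print(round(hhi_upper))  # ~2300
```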
Frontier Model Development Concentration
- US dominance: 40 notable models (2024) vs China: 15, Europe: 3
- Competition assessment: "OpenAI retains narrow lead at frontier, but competition intensified"
- China status: "Credible #2" with DeepSeek, Qwen, Kimi
- Source: Stanford HAI, AI Index 2025
Investment/Funding Concentration
- Foundation model funding (2025): $80B (40% of all global AI funding)
- OpenAI + Anthropic: 14% of all global venture investment across all sectors
- Big Tech backing: "Interconnected web of 90+ partnerships" among Google, Apple, Microsoft, Meta, Amazon, Nvidia
- Regulatory concern: UK CMA and US FTC investigating concentration via partnerships/investments
- Source: Big Tech's Cloud Oligopoly
Conceptual Extensions
Compute Concentration
- Conceptual metric: HHI for GPU/training compute access
- Challenge: Private compute capacity not publicly reported
- Known: Nvidia dominance in AI chips; hyperscaler concentration (AWS, Azure, GCP)
- Implication: Capability concentration may exceed market share concentration
Talent Concentration
- Conceptual metric: % of top AI researchers at small number of organizations
- Challenge: Defining "top researchers" and tracking mobility
- Proxy: Conference authorship concentration, hiring trends
8. Societal Resilience to AI Disruption
Conceptual Framework
WEF Global Risks Report 2024 - Resilience Assessment
- Key finding: "Weakened economies and societies may only require smallest shock to edge past tipping point of resilience"
- Current crises eroding resilience: COVID-19 aftermath, Russia-Ukraine war "exposed cracks in societies"
- Long-term erosion: "Decades of investment in human development slowly being chipped away"
- Conflict risk: "Corroding societal resilience risk creating conflict contagion"
- Source: World Economic Forum, Global Risks Report 2024
Measured Proxies
Economic Disruption Preparedness
- Social safety nets: Vary widely by country (unemployment insurance, retraining programs)
- Financial instruments: Insurance, catastrophe bonds, public risk pools
- Challenge: No unified "AI disruption resilience" score exists
Digital Literacy and Misinformation Resilience
- Recommendation: "Digital literacy campaigns on misinformation and disinformation"
- Current: No systematic measurement of population-level AI/digital literacy
- Proxy: General digital skills indices exist but not AI-specific
Institutional Adaptive Capacity
- Indicators: R&D investment in climate modeling/energy transition (analogous to AI preparedness)
- Infrastructure resilience: Building codes, disaster preparedness
- Limitation: No AI-specific resilience metrics
Conceptual Indicators
Labor Market Adaptability Index
- Conceptual metric: How quickly workers can reskill/transition as AI automates tasks
- Proxy data: Historical adjustment rates to automation, education system responsiveness
- Challenge: AI may disrupt faster than historical automation
Democratic Resilience to AI-Driven Polarization
- Conceptual metric: Ability of democratic institutions to function under AI-amplified disinformation
- Current concerns: Misinformation in 2024 elections (US, UK)
- No systematic tracking: Would require longitudinal study
9. Rate of AI-Caused Incidents/Accidents
Measured Incident Data
AI Incident Database (AIID)
- Total incidents: 2,000+ documented incidents (as of 2024)
- Coverage: "Intelligent systems causing safety, fairness, or other real-world problems"
- Growth: From 1,200+ reports to 2,000+ (rapid increase)
- Limitation: Voluntary reporting, variable severity, unclear baseline
- Source: AI Incident Database
AIAAIC Repository
- Start date: June 2019
- Coverage: "Incidents and controversies driven by AI, algorithms, automation"
- Goal: "Systematically documenting incidents where AI systems cause or contribute to harms"
- Scope: Broader than AIID—includes technical failures and social impacts
- Source: AIAAIC Repository
OECD AI Incidents Monitor (AIM)
- Launch: Part of OECD AI Policy Observatory
- Focus: Policy-relevant cases aligned with governance interests
- Collaboration: Partnership on AI, Center for Advancement of Trustworthy AI
- Limitation: More selective than AIAAIC (policy focus vs comprehensive coverage)
- Source: OECD AI Incidents Monitor (AIM)
Interpretation Challenges
Incident Rate per AI System
- Conceptual metric: Incidents per 1,000 or 10,000 deployed AI systems
- Challenge: Unknown denominator—no comprehensive count of deployed systems
- Current: Absolute incident counts are rising, but it is unclear whether the rate is rising (see the sensitivity sketch after this list)
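The sketch below shows why the unknown denominator dominates this metric: the same ~2,000 documented incidents imply very different rates depending on how many systems are assumed to be deployed. The deployment counts are placeholders, not estimates.

```python
# Illustrative sensitivity analysis: incidents per 10,000 deployed systems
# under different assumed deployment counts (placeholders, not estimates).
documented_incidents = 2_000  # approximate AIID total cited above

for assumed_deployments in (100_000, 1_000_000, 10_000_000):
    rate = documented_incidents / assumed_deployments * 10_000
    print(f"{assumed_deployments:>10,} systems -> {rate:6.2f} incidents per 10,000 systems")
```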
Severity Distribution
- Available: Incident databases categorize by harm type (safety, fairness, rights)
- Missing: Standardized severity scales across databases
- Incompatibility: "Both databases have vastly different and incompatible structures"
- Source: Agarwal & Nene (2025), Standardised Schema for AI Incident Databases (arXiv)
Baseline Comparison
- Question: Are AI incident rates high compared to other technologies at similar maturity?
- Challenge: No established baseline or reference class
- Aviation analogy: Aviation incident rates well-tracked, declining over time—AI lacks comparable infrastructure
10. Near-Miss Reporting Rate
Industry Position
AI Lab Support for Near-Miss Reporting
- Strong agreement: 76% strongly agree, 20% somewhat agree
- Statement: "AGI labs should report accidents and near misses to appropriate state actors and other AGI labs"
- Proposed mechanism: reporting through an AI incident database
- Source: EA Forum, Incident Reporting for AI Safety (Zach Stein-Perlman et al., 2023)
Regulatory Frameworks Emerging
US Executive Order 14110
- Provision: Addressed "safety" and "rights" protections
- Limitation: Not comprehensive near-miss framework
- State-level: New York State bill would require incident reporting to Attorney General (safety incidents only)
- Source: Wei & Heim (2025), Designing Incident Reporting Systems (arXiv)
EU AI Act Incident Reporting
- Requirement: Single incident reporting requirement
- Definition: Includes both "rights incidents" and "safety incidents"
- Limitation: Does not explicitly distinguish near-misses from harms
- Source: EU AI Act
Proposed Framework Properties (Shrishak 2023)
- Voluntary reporting: Essential for capturing near-misses not covered by mandatory serious incident reporting
- Non-punitive: Consensus that self-reporting should not lead to punishment since no harm occurred
- Accessible: Low barriers to submission
- Actionable: Information useful for other developers
Current Reporting Rate
Actual Near-Miss Reporting Rate
- Conceptual metric: % of near-miss events that get reported to databases or regulators
- Current estimate: Unknown, likely very low (one possible estimation approach is sketched after this list)
- Challenge: "Current systems fail to capture numerous near-miss incidents that narrowly avoid accidents"
- Comparison: Aviation near-miss reporting well-established; AI has no equivalent system yet
- Source: Developing Near-Miss Reporting System
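One way analysts sometimes bound under-reporting is a capture-recapture (Lincoln-Petersen) estimate across two independently maintained incident lists, such as AIID and AIAAIC. The sketch below assumes incidents could be matched across databases, which their currently incompatible schemas make difficult, and the counts are hypothetical.

```python
def lincoln_petersen(n_list_a: int, n_list_b: int, n_overlap: int) -> float:
    """Capture-recapture estimate of the total incident population,
    assuming the two lists sample independently from the same population."""
    if n_overlap == 0:
        raise ValueError("estimator undefined with no overlapping incidents")
    return n_list_a * n_list_b / n_overlap

# Hypothetical counts for illustration only (not actual database figures):
total_est = lincoln_petersen(n_list_a=2_000, n_list_b=1_500, n_overlap=600)
captured = 2_000 + 1_500 - 600  # incidents appearing in at least one list

print(round(total_est))                 # ~5000 incidents estimated in total
print(f"{captured / total_est:.0%}")    # ~58% captured by at least one database
```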
Culture Gap
- Aviation standard: Open, non-punitive reporting is norm
- AI current state: "Lack of comprehensive and reliable data regarding frequency and characteristics"
- Needed shift: "Building culture of safety for AI requires understanding failure modes, which starts with reporting past incidents"
Data Quality Summary
| Metric | Status | Data Quality | Update Frequency |
|---|---|---|---|
| Information environment quality | Measured | High (Freedom House, RSF) | Annual |
| Institutional decision-making | Proxy | Medium (WGI covers general governance, not AI-specific) | Annual |
| Elite/public opinion divergence | Measured | Medium (multiple polls, varying methods) | Quarterly-Annual |
| Policy response time | Measured | High (specific cases documented) | Case-by-case |
| Coordination failure rate | Conceptual | Low (qualitative assessments only) | Ad hoc |
| Democratic vs authoritarian adoption | Measured | Medium (surveillance tech tracked, general AI capabilities less clear) | Annual |
| AI capability concentration (HHI) | Measured | Medium (market share known, compute concentration estimated) | Quarterly-Annual |
| Societal resilience | Conceptual | Low (framework exists, no AI-specific index) | Annual (WEF) |
| AI incident rate | Measured | Medium (absolute counts good, rates unclear due to denominator problem) | Continuous |
| Near-miss reporting rate | Conceptual | Very low (frameworks proposed, actual reporting minimal) | Not measured |
Key Gaps and Limitations
Measurement Challenges
- Denominator problems: Incident rates require knowing # of deployed systems (unknown)
- Counterfactuals: Measuring "coordination failure rate" requires knowing what coordination was possible
- Lag indicators: Most metrics (incidents, trust, governance quality) are lagging, not leading
- Attribution: Hard to isolate AI's contribution to institutional quality or societal resilience
- Standardization: Different databases use incompatible schemas (incidents, governance)
Conceptual Gaps
- No unified resilience metric: Individual components exist but no composite "AI disruption resilience score"
- Weak coordination metrics: Qualitative assessments dominate; no quantitative coordination success rate
- Missing baselines: Few comparisons to other technologies at similar development stages
- Democratic processes: No metrics for how democratic institutions specifically handle AI (vs general governance)
Research Priorities
High-value additions:
- Standardized AI incident severity scale
- Near-miss reporting infrastructure and culture-building
- Democratic resilience to AI-specific challenges (not just general governance)
- Coordination success metrics (track multilateral agreements, implementation rates)
- AI-specific institutional capacity assessment (beyond general WGI)
Interpretation Guidance
Using These Metrics
For Risk Assessment:
- Low trust + weak institutions + high elite/public gap = governance failure more likely
- Rising incidents + low near-miss reporting = learning from failures inadequate
- High concentration + weak coordination = race dynamics and power concentration risks
For Forecasting:
- Policy lag times (1-3 years) inform timeline expectations for future risks
- Trust trends predict regulatory pressure and public backlash likelihood
- Coordination challenges suggest multilateral solutions face high barriers
For Intervention:
- Improving near-miss reporting culture = high-leverage, low-cost
- Building institutional AI literacy = addresses decision-making quality
- Bridging elite/public gap = essential for democratic legitimacy
Cautions
- Correlation ≠ causation: Weak governance may cause AI risks OR AI risks may weaken governance
- Selection effects: Reported incidents overrepresent visible, Western, English-language cases
- Gaming: Once metrics are targets, they can be manipulated (Goodhart's Law)
- Aggregation: Composite indices hide important variation across dimensions
Sources
Primary Data Sources
- Freedom House, Freedom on the Net 2025
- Reporters Without Borders, World Press Freedom Index 2025
- Edelman Trust Barometer 2024
- World Bank, Worldwide Governance Indicators (2024 update)
- AI Policy Institute polling
- Pew Research Center, Public and AI Experts
- EU AI Act Implementation Timeline
- OECD AI Principles (2024 update)
- AI Incident Database (AIID)
- Partnership on AI, AI Incident Database
- AIAAIC Repository
Analysis and Research
- Stanford HAI, AI Index 2025
- State of AI Report 2025
- Menlo Ventures, 2025 State of Generative AI in the Enterprise
- World Economic Forum, Global Risks Report 2024
- Carnegie Endowment, Global Expansion of AI Surveillance
- NED, Data-Centric Authoritarianism
- CSIS, G7 Hiroshima AI Process
- IAPS, International Network of AI Safety Institutes (Sumaya Nur Adan, 2024)
- OECD, AI Safety Institute Network
- Future of Life Institute, AI Safety Index 2024
- EA Forum, Incident Reporting for AI Safety (Zach Stein-Perlman et al., 2023)
- Wei & Heim (2025), Designing Incident Reporting Systems (arXiv)
- Agarwal & Nene (2025), Standardised Schema for AI Incident Databases (arXiv)
Regulatory and Policy Documents
- Biden Administration, AI Executive Order 14110 (White House)
- European Commission, EU AI Act
- G7 Hiroshima AI Process Code of Conduct