International AI Coordination Game Model
Game-theoretic analysis demonstrating that US-China AI coordination defaults to mutual defection (racing) because defection dominates whenever the perceived probability of reciprocation falls below 50%; current forecasts assign a 60-70% likelihood of continued deterioration through 2030. Key intervention leverage points identified: Track 2 diplomacy ($10-20M annually), verification technology development ($50-200M over 5 years), and middle power coordination through EU regulatory frameworks.
Overview
International AI governance presents a critical coordination problem between major powers, primarily the United States and China. The strategic structure of this competition fundamentally shapes whether humanity achieves safe AI development or races toward catastrophic outcomes. Recent analysis by RAND Corporation confirms this represents one of the defining geopolitical challenges of the 21st century, sitting at the intersection of technological competition, national security, and existential risk management.
The central tension emerges from a classic prisoner's dilemma: mutual cooperation on AI safety offers optimal collective outcomes (4,4 payoff), yet unilateral defection remains persistently tempting (5,1 advantage). Game-theoretic modeling by Georgetown's Center for Security and Emerging Technology demonstrates why rational actors choose suboptimal racing dynamics even when superior cooperative alternatives exist. When cooperation probability falls below 50%, defection mathematically dominates, explaining persistent competitive patterns despite shared catastrophic risks.
Risk Assessment Framework
| Risk Category | Severity | Likelihood (2024-2030) | Timeline | Trend |
|---|---|---|---|---|
| Racing acceleration | Very High | 65% | 2-4 years | Worsening |
| Coordination breakdown | High | 40% | 1-3 years | Stable |
| Verification failure | Medium | 30% | 3-5 years | Uncertain |
| Technology decoupling | High | 25% | 2-5 years | Worsening |
| Crisis escalation | Very High | 20% | 1-2 years | Worsening |
Source: Synthesis of FHI surveys, CSET analysis, and expert elicitation
Strategic Player Analysis
Major Power Capabilities and Constraints
| Actor | AI Capabilities | Governance Advantages | Key Constraints | Coordination Incentives |
|---|---|---|---|---|
| United States | Leading labs (OpenAI, Anthropic, DeepMind), dominant compute infrastructure | Private sector innovation, democratic legitimacy | Fragmented policymaking, electoral cycles | Maintain lead while preventing catastrophe |
| China | Major tech giants (Baidu, Alibaba), centralized planning | Rapid policy implementation, state coordination | Chip access restrictions, brain drain | Catch up through safety cooperation |
| European Union | Smaller research base, regulatory leadership | Comprehensive AI Act framework, rights focus | Slower consensus building, limited tech giants | Set global norms, ensure safety standards |
| United Kingdom | DeepMind legacy, concentrated expertise | Research excellence, regulatory agility | Limited scale, post-Brexit isolation | Bridge US-EU coordination gaps |
The asymmetric structure creates fundamentally different strategic preferences. Analysis by the Atlantic Council shows the US currently leads in most AI capabilities but faces democratic governance constraints that complicate long-term strategic planning. China's centralized system enables rapid policy implementation but confronts persistent technology access barriers through export controls.
Information Asymmetry Challenges
Critical uncertainty surrounds relative capabilities, with each side maintaining classified programs that generate "technological fog of war." CSIS intelligence assessments indicate both powers systematically exaggerate progress when seeking leverage while concealing breakthroughs to maintain surprise advantages. This information problem undermines trust-building and makes verification mechanisms essential for stable agreements.
Game Structure and Equilibrium Analysis
The Fundamental Coordination Dilemma
The strategic interaction exhibits classic prisoner's dilemma characteristics with the following payoff structure:
| Strategy Combination | US Payoff | China Payoff | Outcome |
|---|---|---|---|
| Both Cooperate | 4 | 4 | Safe AI development, shared benefits |
| US Cooperates, China Defects | 1 | 5 | China gains decisive advantage |
| US Defects, China Cooperates | 5 | 1 | US secures technological dominance |
| Both Defect | 2 | 2 | Racing dynamics, elevated catastrophic risk |
Expected utility calculations reveal why cooperation fails: defection dominates when the estimated probability $p$ that the adversary reciprocates falls below $0.5$, meaning cooperation requires confidence exceeding 50% that the adversary will cooperate. Research by Stanford's Human-Centered AI Institute demonstrates this threshold remains unmet in current US-China relations.
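To make the arithmetic explicit, the sketch below (in Python, using the payoff table above) computes both expected utilities. Note that under the raw one-shot payoffs defection dominates at every confidence level, which is exactly the prisoner's dilemma structure; the 50% threshold emerges once repeated interaction adds a reciprocity premium to sustained cooperation. The premium parameter r is an illustrative assumption, not a figure from the cited modeling.

```python
# Expected-utility comparison for the payoff matrix above.
R, S = 4, 1  # payoff when cooperating: adversary cooperates / defects
T, P = 5, 2  # payoff when defecting:   adversary cooperates / defects

def expected_utilities(p: float) -> tuple[float, float]:
    """Expected payoffs of (cooperate, defect) given probability p
    that the adversary cooperates."""
    return p * R + (1 - p) * S, p * T + (1 - p) * P

# With the raw one-shot payoffs, defecting wins at every p: the gap
# (eu_defect - eu_cooperate) is a constant +1. That is the dilemma.
assert all(d > c for c, d in (expected_utilities(p / 10) for p in range(11)))

def cooperation_threshold(r: float) -> float:
    """Minimum confidence p* above which cooperating beats defecting once a
    reciprocity premium r (illustrative) is added to mutual cooperation:
    p*(R + r) + (1 - p)*S > p*T + (1 - p)*P
      =>  p* = (P - S) / ((P - S) + (R + r) - T)"""
    return (P - S) / ((P - S) + (R + r) - T)

print(cooperation_threshold(2.0))  # 0.5 -- the 50% threshold in the text
```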
```mermaid
graph TD
A[Current State: Mutual Suspicion] --> B{Crisis Event?}
A --> C{Capability Breakthrough?}
A --> D{Sustained Dialogue?}
B -->|Major AI Incident| E[Risk Salience Increases]
B -->|Military Close Call| F[Security Fears Intensify]
C -->|US Breakthrough| G[China Panic Response]
C -->|China Breakthrough| H[US Acceleration]
D -->|Track 2 Success| I[Technical Common Ground]
D -->|Failure| J[Status Quo Continues]
E --> K{Leadership Response}
F --> L[Racing Accelerates]
G --> L
H --> L
I --> M[Verification Framework?]
J --> N[Competitive Coexistence]
K -->|Cooperative| O[Safety Agreement Opportunity]
K -->|Competitive| L
M -->|Feasible| O
M -->|Infeasible| N
O --> P[Successful Coordination<br/>~15% probability]
L --> Q[Dangerous Racing<br/>~35% probability]
N --> R[Muddle Through<br/>~35% probability]
style P fill:#90EE90
style Q fill:#FF6B6B
style R fill:#FFE66D
```
Multidimensional Coordination Complexity
Real-world coordination extends across multiple independent dimensions that complicate simple bilateral agreements:
| Coordination Dimension | Verifiability | Current Status | Cooperation Feasibility |
|---|---|---|---|
| Compute governance | High | Export controls active | Moderate - visible infrastructure |
| Safety research | Medium | Limited sharing | High - public good nature |
| Military applications | Low | Classified programs | Low - security classification |
| Deployment standards | Medium | Divergent approaches | Moderate - observable outcomes |
| Talent mobility | High | Increasing restrictions | High - visa/immigration policy |
MIT's Center for Collective Intelligence analysis reveals that progress occurs at different rates across dimensions, with algorithmic advances nearly impossible to monitor externally while compute infrastructure remains highly visible through satellite observation and power consumption analysis.
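One rough way to see why compute governance leads the feasibility discussion: map the table's qualitative ratings to numeric scores and rank each dimension by the product of verifiability and cooperation feasibility, since a durable agreement needs both. The 0-1 mapping below is an assumption for illustration, not from the cited analysis.

```python
# Illustrative ranking of the coordination dimensions tabulated above.
# The 0-1 scores are hypothetical mappings of the qualitative ratings.
SCORE = {"Very Low": 0.1, "Low": 0.25, "Medium": 0.5, "Moderate": 0.5,
         "High": 0.8, "Very High": 0.95}

dimensions = [  # (dimension, verifiability, cooperation feasibility)
    ("Compute governance",    "High",   "Moderate"),
    ("Safety research",       "Medium", "High"),
    ("Military applications", "Low",    "Low"),
    ("Deployment standards",  "Medium", "Moderate"),
    ("Talent mobility",       "High",   "High"),
]

# A durable agreement must be both monitorable and politically feasible,
# so score each dimension by the product of the two ratings.
for name, v, f in sorted(dimensions, key=lambda d: SCORE[d[1]] * SCORE[d[2]],
                         reverse=True):
    print(f"{name:22s} viability = {SCORE[v] * SCORE[f]:.2f}")
```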
Current Trajectory and Warning Signs
Recent Developments (2023-2024)
The coordination landscape has deteriorated significantly over the past two years. Export control measures implemented in October 2022 dramatically restricted China's access to advanced semiconductors, triggering reciprocal restrictions on critical minerals and escalating technological decoupling. Chinese investment in domestic chip capabilities has accelerated in response, while US lawmakers increasingly frame AI competition in zero-sum national security terms.
Scientific exchange has contracted substantially. Nature analysis of publication patterns shows US-China AI research collaboration declining 30% since 2022, with researchers reporting visa difficulties and institutional pressure to avoid Chinese partnerships. Academic conferences increasingly feature geographically segregated participation as political tensions constrain professional networks.
2025-2030 Trajectory Projections
| Scenario | Probability | Key Drivers | Expected Outcomes |
|---|---|---|---|
| Accelerating Competition | 35% | Taiwan crisis, capability breakthrough, domestic politics | Racing dynamics, safety shortcuts, high catastrophic risk |
| Competitive Coexistence | 35% | Muddle through, informal red lines | Moderate racing, parallel development, medium risk |
| Crisis-Driven Cooperation | 15% | Major AI incident, Track 2 breakthrough | Safety frameworks, slower timelines, reduced risk |
| Technology Decoupling | 15% | Complete export bans, alliance hardening | Parallel ecosystems, incompatible standards, unknown risk |
Metaculus forecasting aggregates assign 60-70% probability to continued deterioration of coordination prospects through 2030 absent major catalyzing events.
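The scenario table can be collapsed into a single probability-weighted figure. In the sketch below the 0-1 risk scores are hypothetical placements of the qualitative outcome labels, not numbers from the forecasting aggregates; only the weighting logic is the point.

```python
# Probability-weighted risk index over the 2025-2030 scenarios above.
# Risk scores are hypothetical 0-1 placements of the qualitative
# outcome labels, not figures from the forecasting aggregates.
scenarios = {
    "Accelerating Competition":  (0.35, 0.8),  # high catastrophic risk
    "Competitive Coexistence":   (0.35, 0.5),  # medium risk
    "Crisis-Driven Cooperation": (0.15, 0.2),  # reduced risk
    "Technology Decoupling":     (0.15, 0.5),  # unknown risk -> midpoint
}

assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9
expected_risk = sum(p * risk for p, risk in scenarios.values())
print(f"Expected risk index: {expected_risk:.2f}")  # 0.56 on a 0-1 scale
```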
Verification and Enforcement Challenges
Technical Feasibility Assessment
| Monitoring Target | Detection Confidence | Time Lag | Cost | Resistance Level |
|---|---|---|---|---|
| Large training runs | 85-95% | Days-weeks | Medium | Low |
| Data center construction | 90-99% | Months | Low | Very Low |
| Chip manufacturing | 70-85% | Weeks-months | High | Medium |
| Algorithm development | 5-15% | Unknown | Very High | Very High |
| Safety practices | 10-30% | N/A | Medium | High |
Source: RAND verification studies and expert elicitation
The fundamental asymmetry between visible and hidden aspects of AI development creates binding constraints on agreement design. Research by the Carnegie Endowment demonstrates that any stable framework must focus on observable dimensions, particularly compute governance where infrastructure requirements make concealment difficult.
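The table's detection rates compound when channels are combined, which is why visible footprints dominate verification design. A minimal sketch, assuming independent channels with rates taken from the midpoints of the table's ranges (independence is a simplification):

```python
# Combining monitoring channels: the chance a covert effort evades *all*
# channels shrinks multiplicatively, assuming channels detect independently.
def joint_detection(rates: list[float]) -> float:
    """P(at least one channel detects), given independent channels."""
    miss = 1.0
    for p in rates:
        miss *= 1.0 - p
    return 1.0 - miss

# A covert large training run faces several visible footprints at once:
run_channels = [0.90, 0.95, 0.78]  # run signatures, data centers, chips
print(f"large run:  {joint_detection(run_channels):.3f}")   # ~0.999

# Algorithmic progress has no such footprint; even three weak channels
# at the table's 5-15% rates leave most progress undetected.
algo_channels = [0.10, 0.10, 0.10]
print(f"algorithms: {joint_detection(algo_channels):.3f}")  # ~0.271
```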
Enforcement Mechanism Analysis
Economic enforcement tools have shown mixed effectiveness. Export controls successfully slowed Chinese semiconductor advancement but triggered significant retaliation and alternative supply chain development. CSIS economic security analysis indicates trade sanctions face diminishing returns against major economic powers with large domestic markets and alternative partnerships.
Diplomatic enforcement through alliance coordination offers promise but remains untested at scale. Brookings Institution research on technology diplomacy suggests middle powers could play crucial mediating roles, with EU regulatory frameworks potentially creating global standards that facilitate coordination.
Key Uncertainties and Expert Disagreements
Critical Unknowns
Verification Technology Development: Current monitoring capabilities remain insufficient for comprehensive AI oversight. Projects like the AI Safety Institute's evaluation frameworks aim to develop standardized assessment tools, but technical limitations persist. Whether breakthrough monitoring technologies emerge in the 2025-2030 timeframe determines agreement feasibility.
First-Mover Advantage Duration: Experts sharply disagree on whether early AI leaders achieve lasting dominance or face rapid catching-up dynamics. Analysis by Epoch AI suggests capability gaps may prove temporary due to knowledge spillovers and talent mobility, while others argue that recursive self-improvement creates winner-take-all dynamics.
Crisis Response Patterns: Historical precedents for cooperation during technological competition remain limited. Studies of nuclear arms control provide mixed lessons, with cooperation emerging slowly after dangerous confrontations. Whether AI crises catalyze cooperation or intensify racing remains unpredictable.
Expert Opinion Divergence
| Question | Optimistic View (25%) | Middle Position (50%) | Pessimistic View (25%) |
|---|---|---|---|
| Coordination prospects | Track 2 breakthroughs enable cooperation | Muddle through with informal constraints | Racing inevitable due to security imperatives |
| Verification feasibility | Technical solutions emerging rapidly | Partial monitoring possible for some dimensions | Fundamental unverifiability of key capabilities |
| Crisis impact | AI incidents generate cooperation momentum | Mixed effects depending on attribution and timing | Crises accelerate racing as stakes become clear |
Surveys by the Center for AI Safety reveal persistent disagreement among experts, with confidence intervals spanning 30-80% probability ranges for key coordination scenarios.
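A standard way to turn such divergence into a working estimate is a linear opinion pool over the three camps using the table's 25/50/25 weights. The per-camp probabilities below are hypothetical, included only to show the aggregation:

```python
# Linear opinion pool across the three expert camps, using the table's
# 25/50/25 weights. Per-camp probabilities of successful coordination
# are hypothetical, chosen only to illustrate the aggregation.
weights   = {"optimistic": 0.25, "middle": 0.50, "pessimistic": 0.25}
p_success = {"optimistic": 0.45, "middle": 0.20, "pessimistic": 0.05}

pooled = sum(weights[k] * p_success[k] for k in weights)
print(f"Pooled P(successful coordination): {pooled:.2f}")  # ~0.22
```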
Intervention Strategies and Leverage Points
High-Impact Intervention Categories
Track 2 Diplomatic Infrastructure: Investment in researcher exchanges, joint safety projects, and informal dialogue channels offers the highest return on investment for coordination building. Council on Foreign Relations analysis estimates $10-20M annually could maintain crucial technical communities across geopolitical divides.
Verification Technology Development: Compute monitoring systems, evaluation frameworks, and confidence-building measures require substantial technical investment. Estimates from AI governance organizations suggest $50-200M over five years could deliver breakthrough monitoring capabilities that enable verification.
Middle Power Coordination: EU, UK, and allied coordination could create alternative frameworks that facilitate eventual US-China engagement. European Council on Foreign Relations research indicates European regulatory frameworks may establish de facto global standards regardless of bilateral tensions.
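These cost figures support a back-of-envelope cost-effectiveness comparison: divide an assumed shift in coordination probability by program cost. The delta_p values below are hypothetical, sized to match the text's claim that Track 2 dialogue offers the highest return on investment:

```python
# Back-of-envelope cost-effectiveness of the three intervention categories.
# Costs follow the text's estimates; the probability gains (delta_p) are
# hypothetical, sized to reflect the text's ROI ordering.
interventions = [
    # (intervention, total cost in $M, assumed gain in P(coordination))
    ("Track 2 diplomacy (5 yrs @ $15M/yr)", 75.0,  0.040),
    ("Verification technology (5 yrs)",     125.0, 0.050),
    ("Middle power coordination",           50.0,  0.015),
]

for name, cost, delta_p in interventions:
    # percentage points of coordination probability per $M spent
    print(f"{name:38s} {100 * delta_p / cost:.3f} pp/$M")
```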
Timeline-Dependent Strategy Shifts
| Time Horizon | Primary Focus | Success Metrics | Resource Allocation |
|---|---|---|---|
| 2024-2026 | Crisis prevention, Track 2 dialogue | Communication channels maintained, no major incidents | 60% diplomacy, 40% technical |
| 2026-2028 | Verification development, framework building | Monitoring systems deployed, informal agreements | 40% diplomacy, 60% technical |
| 2028-2030 | Formal agreements, implementation | Binding frameworks established, compliance verified | 50% diplomacy, 50% enforcement |
Current State Assessment
Coordination Climate Analysis
The current international climate exhibits significant deterioration from previous cooperation baselines. Pew Research polling shows public opinion in both countries increasingly views AI competition through zero-sum lenses, constraining political space for cooperation. Congressional hearings and Chinese policy documents frame technological leadership as existential national priorities, reducing flexibility for compromise.
However, countervailing forces maintain cooperation potential. Surveys of AI researchers reveal substantial cross-border agreement on safety priorities, with technical communities maintaining professional networks despite political tensions. Corporate interests in predictable regulatory environments create business constituencies for coordination, while shared economic dependencies constrain purely competitive approaches.
Near-Term Trajectory Indicators
Three key indicators will signal coordination direction over the next 12-18 months:
- Export control escalation: Further restrictions on AI-relevant technologies signal continued decoupling
- Academic collaboration patterns: Research partnership trends indicate scientific community resilience
- Crisis response coordination: How powers handle AI incidents reveals cooperation capacity under pressure
Related Analysis
This coordination game connects directly to racing dynamics between AI labs, which exhibit similar prisoner's dilemma structures at the organizational level. The broader multipolar trap model provides a framework for understanding how multiple actors complicate bilateral coordination. AI governance responses depend fundamentally on whether international coordination succeeds or fails.
Critical dependencies include capabilities development timelines that determine available coordination windows, alignment difficulty that sets stakes for cooperation versus racing, and takeoff speeds that influence whether coordination can adapt to rapid capability changes.
Sources & Resources
Academic Sources
| Source | Type | Key Contribution |
|---|---|---|
| RAND AI Competition Analysis | Research Report | Game-theoretic framework for US-China competition |
| Georgetown CSET Publications | Policy Analysis | Empirical assessment of coordination prospects |
| Stanford HAI Governance Research | Academic Research | Technical verification and monitoring challenges |
| MIT CCI Coordination Studies | Research Center | Multidimensional coordination complexity analysis |
Policy Organizations
| Organization | Focus | Key Resources |
|---|---|---|
| Center for Strategic & International Studies | Strategic Analysis | Intelligence assessments, capability tracking |
| Atlantic Council | Policy Frameworks | Governance mechanisms, alliance coordination |
| Brookings Institution | Technology Diplomacy | Middle power roles, regulatory harmonization |
| Carnegie Endowment | International Relations | Verification mechanisms, confidence-building |
Government Resources
| Entity | Role | Documentation |
|---|---|---|
| US AI Safety Institute | Evaluation Standards | Technical frameworks for capability assessment |
| UK AI Safety Institute | International Coordination | Bilateral cooperation mechanisms |
| EU AI Office | Regulatory Framework | Global standard-setting through comprehensive legislation |