Updated 2025-12-27
International AI Coordination Game Model

Game-theoretic analysis demonstrating that US-China AI coordination defaults to mutual defection (racing): defection dominates the one-shot game, and cooperation becomes rational only when each side's confidence in reciprocation exceeds roughly 50%. Current estimates place the likelihood of continued deterioration through 2030 at 60-70%. Key intervention leverage points identified: Track 2 diplomacy ($10-20M annually), verification technology development ($50-200M over 5 years), and middle power coordination through EU regulatory frameworks.

Model Type: Game Theory
Scope: International Governance
Key Insight: International AI coordination faces prisoner's dilemma dynamics with verification challenges
Related Risks: Multipolar Trap (AI Development), AI Development Racing Dynamics

Overview

International AI governance presents a critical coordination problem between major powers, primarily the United States and China. The strategic structure of this competition fundamentally shapes whether humanity achieves safe AI development or races toward catastrophic outcomes. Recent RAND Corporation analysis identifies this as one of the defining geopolitical challenges of the 21st century, sitting at the intersection of technological competition, national security, and existential risk management.

The central tension emerges from a classic prisoner's dilemma: mutual cooperation on AI safety offers the best collective outcome (4,4 payoff), yet unilateral defection remains persistently tempting (5,1 advantage). Game-theoretic modeling by Georgetown's Center for Security and Emerging Technology demonstrates why rational actors choose suboptimal racing dynamics even when superior cooperative alternatives exist. Unless each side's confidence that the other will reciprocate exceeds roughly 50%, defection dominates in expectation, explaining persistent competitive patterns despite shared catastrophic risks.

Risk Assessment Framework

| Risk Category | Severity | Likelihood (2024-2030) | Timeline | Trend |
|---|---|---|---|---|
| Racing acceleration | Very High | 65% | 2-4 years | Worsening |
| Coordination breakdown | High | 40% | 1-3 years | Stable |
| Verification failure | Medium | 30% | 3-5 years | Uncertain |
| Technology decoupling | High | 25% | 2-5 years | Worsening |
| Crisis escalation | Very High | 20% | 1-2 years | Worsening |

Source: Synthesis of FHI surveys, CSET analysis, and expert elicitation

Strategic Player Analysis

Major Power Capabilities and Constraints

| Actor | AI Capabilities | Governance Advantages | Key Constraints | Coordination Incentives |
|---|---|---|---|---|
| United States | Leading labs (OpenAI, Anthropic, DeepMind), dominant compute infrastructure | Private sector innovation, democratic legitimacy | Fragmented policymaking, electoral cycles | Maintain lead while preventing catastrophe |
| China | Major tech giants (Baidu, Alibaba), centralized planning | Rapid policy implementation, state coordination | Chip access restrictions, brain drain | Catch up through safety cooperation |
| European Union | Smaller research base, regulatory leadership | Comprehensive AI Act framework, rights focus | Slower consensus building, limited tech giants | Set global norms, ensure safety standards |
| United Kingdom | DeepMind legacy, concentrated expertise | Research excellence, regulatory agility | Limited scale, post-Brexit isolation | Bridge US-EU coordination gaps |

The asymmetric structure creates fundamentally different strategic preferences. Analysis by the Atlantic Council shows the US currently leads in most AI capabilities but faces democratic governance constraints that complicate long-term strategic planning. China's centralized system enables rapid policy implementation but confronts persistent technology access barriers through export controls.

Information Asymmetry Challenges

Critical uncertainty surrounds relative capabilities, with each side maintaining classified programs that generate "technological fog of war." CSIS intelligence assessments indicate both powers systematically exaggerate progress when seeking leverage while concealing breakthroughs to maintain surprise advantages. This information problem undermines trust-building and makes verification mechanisms essential for stable agreements.

Game Structure and Equilibrium Analysis

The Fundamental Coordination Dilemma

The strategic interaction exhibits classic prisoner's dilemma characteristics with the following payoff structure:

| Strategy Combination | US Payoff | China Payoff | Outcome |
|---|---|---|---|
| Both Cooperate | 4 | 4 | Safe AI development, shared benefits |
| US Cooperates, China Defects | 1 | 5 | China gains decisive advantage |
| US Defects, China Cooperates | 5 | 1 | US secures technological dominance |
| Both Defect | 2 | 2 | Racing dynamics, elevated catastrophic risk |
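As a quick check, the equilibrium of this 2x2 game can be found by best-response enumeration. The sketch below is ours (the strategy encoding and function name are not from the source); the payoff values come from the table above:

```python
# Best-response check for the 2x2 US-China payoff matrix above.
# Payoffs are (US, China) tuples indexed by (us_strategy, china_strategy),
# where 0 = Cooperate and 1 = Defect.
payoffs = {
    (0, 0): (4, 4),  # both cooperate
    (0, 1): (1, 5),  # US cooperates, China defects
    (1, 0): (5, 1),  # US defects, China cooperates
    (1, 1): (2, 2),  # both defect
}

def pure_nash_equilibria(payoffs):
    """Return strategy profiles where neither player gains by deviating."""
    equilibria = []
    for us in (0, 1):
        for cn in (0, 1):
            us_pay, cn_pay = payoffs[(us, cn)]
            us_best = us_pay >= payoffs[(1 - us, cn)][0]
            cn_best = cn_pay >= payoffs[(us, 1 - cn)][1]
            if us_best and cn_best:
                equilibria.append((us, cn))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [(1, 1)] -> mutual defection
```

Enumeration confirms that mutual defection is the unique pure-strategy Nash equilibrium, even though both players prefer the (4,4) cooperative outcome to the (2,2) equilibrium.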

Expected utility calculations reveal why cooperation fails:

U_i(\text{Cooperate}) = p_j \cdot 4 + (1 - p_j) \cdot 1 = 3p_j + 1

U_i(\text{Defect}) = p_j \cdot 5 + (1 - p_j) \cdot 2 = 3p_j + 2

where p_j is player i's confidence that the adversary cooperates. With these payoffs defection strictly dominates: U_i(\text{Defect}) - U_i(\text{Cooperate}) = 1 for every value of p_j, the defining feature of a one-shot prisoner's dilemma. The roughly 50% reciprocation threshold emerges once agreements change the payoffs: if, for example, verification penalties reduce the temptation payoff from 5 to 3, the game becomes a stag hunt in which cooperating is optimal exactly when p_j \geq \frac{1}{2}. Research by Stanford's Human-Centered AI Institute demonstrates that even this weaker threshold remains unmet in current US-China relations.
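The expected-utility comparison can be checked numerically. In the sketch below, the default payoff values come from the table above; the lowered temptation payoff of 3 is an illustrative assumption of ours (modeling a verification penalty), not a figure from the source:

```python
# Expected utilities from the payoff structure above, as a function of
# p: the probability that the adversary cooperates.
def u_cooperate(p, reward=4.0, sucker=1.0):
    return p * reward + (1 - p) * sucker          # 3p + 1 with defaults

def u_defect(p, temptation=5.0, punishment=2.0):
    return p * temptation + (1 - p) * punishment  # 3p + 2 with defaults

for p in (0.0, 0.5, 1.0):
    gap = u_defect(p) - u_cooperate(p)
    print(f"p={p:.1f}  U_D - U_C = {gap:.1f}")   # constant gap of 1.0

# Assumed variant: if verification penalties lower the temptation payoff
# from 5 to 3, the game becomes a stag hunt and cooperation is optimal
# exactly when p >= 0.5.
for p in (0.4, 0.6):
    print(p, u_cooperate(p) >= u_defect(p, temptation=3.0))
```

The constant gap of 1 shows why no level of confidence rescues cooperation under the unmodified payoffs, while the lowered-temptation variant reproduces the 50% threshold discussed in the text.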

Coordination pathway decision tree (diagram source):
graph TD
  A[Current State: Mutual Suspicion] --> B{Crisis Event?}
  A --> C{Capability Breakthrough?}
  A --> D{Sustained Dialogue?}

  B -->|Major AI Incident| E[Risk Salience Increases]
  B -->|Military Close Call| F[Security Fears Intensify]
  C -->|US Breakthrough| G[China Panic Response]
  C -->|China Breakthrough| H[US Acceleration]
  D -->|Track 2 Success| I[Technical Common Ground]
  D -->|Failure| J[Status Quo Continues]

  E --> K{Leadership Response}
  F --> L[Racing Accelerates]
  G --> L
  H --> L
  I --> M[Verification Framework?]
  J --> N[Competitive Coexistence]

  K -->|Cooperative| O[Safety Agreement Opportunity]
  K -->|Competitive| L
  M -->|Feasible| O
  M -->|Infeasible| N

  O --> P[Successful Coordination<br/>~15% probability]
  L --> Q[Dangerous Racing<br/>~35% probability]
  N --> R[Muddle Through<br/>~35% probability]

  style P fill:#90EE90
  style Q fill:#FF6B6B
  style R fill:#FFE66D

Multidimensional Coordination Complexity

Real-world coordination extends across multiple independent dimensions that complicate simple bilateral agreements:

| Coordination Dimension | Verifiability | Current Status | Cooperation Feasibility |
|---|---|---|---|
| Compute governance | High | Export controls active | Moderate - visible infrastructure |
| Safety research | Medium | Limited sharing | High - public good nature |
| Military applications | Low | Classified programs | Low - security classification |
| Deployment standards | Medium | Divergent approaches | Moderate - observable outcomes |
| Talent mobility | High | Increasing restrictions | High - visa/immigration policy |

MIT's Center for Collective Intelligence analysis reveals that progress occurs at different rates across dimensions, with algorithmic advances nearly impossible to monitor externally while compute infrastructure remains highly visible through satellite observation and power consumption analysis.

Current Trajectory and Warning Signs

Recent Developments (2023-2024)

The coordination landscape has deteriorated significantly over the past two years. Export control measures implemented in October 2022 dramatically restricted China's access to advanced semiconductors, triggering reciprocal restrictions on critical minerals and escalating technological decoupling. Chinese investment in domestic chip capabilities has accelerated in response, while US lawmakers increasingly frame AI competition in zero-sum national security terms.

Scientific exchange has contracted substantially. Nature analysis of publication patterns shows US-China AI research collaboration declining 30% since 2022, with researchers reporting visa difficulties and institutional pressure to avoid Chinese partnerships. Academic conferences increasingly feature geographically segregated participation as political tensions constrain professional networks.

2025-2030 Trajectory Projections

| Scenario | Probability | Key Drivers | Expected Outcomes |
|---|---|---|---|
| Accelerating Competition | 35% | Taiwan crisis, capability breakthrough, domestic politics | Racing dynamics, safety shortcuts, high catastrophic risk |
| Competitive Coexistence | 35% | Muddle through, informal red lines | Moderate racing, parallel development, medium risk |
| Crisis-Driven Cooperation | 15% | Major AI incident, Track 2 breakthrough | Safety frameworks, slower timelines, reduced risk |
| Technology Decoupling | 15% | Complete export bans, alliance hardening | Parallel ecosystems, incompatible standards, unknown risk |

Aggregate forecasts on Metaculus assign 60-70% probability to continued deterioration of coordination prospects through 2030 absent major catalyzing events.
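A scenario distribution like the one above can be sanity-checked and summarized numerically. In the sketch below, the probabilities come from the table; the numeric risk levels per scenario are illustrative assumptions of ours, not figures from the source:

```python
# Sanity-check the scenario table and compute a probability-weighted
# risk summary. Risk levels (0 = benign, 1 = catastrophic) are assumed
# for illustration; only the probabilities come from the table above.
scenarios = {
    "Accelerating Competition":  (0.35, 0.9),  # (probability, assumed risk)
    "Competitive Coexistence":   (0.35, 0.5),
    "Crisis-Driven Cooperation": (0.15, 0.2),
    "Technology Decoupling":     (0.15, 0.6),  # "unknown risk": placeholder
}

total = sum(p for p, _ in scenarios.values())
assert abs(total - 1.0) < 1e-9, "scenario probabilities must sum to 1"

expected_risk = sum(p * r for p, r in scenarios.values())
print(f"expected risk score: {expected_risk:.3f}")
```

The point of the exercise is the bookkeeping, not the specific score: any scenario table used for planning should have mutually exclusive, exhaustive branches whose probabilities sum to one.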

Verification and Enforcement Challenges

Technical Feasibility Assessment

| Monitoring Target | Detection Confidence | Time Lag | Cost | Resistance Level |
|---|---|---|---|---|
| Large training runs | 85-95% | Days-weeks | Medium | Low |
| Data center construction | 90-99% | Months | Low | Very Low |
| Chip manufacturing | 70-85% | Weeks-months | High | Medium |
| Algorithm development | 5-15% | Unknown | Very High | Very High |
| Safety practices | 10-30% | N/A | Medium | High |

Source: RAND verification studies and expert elicitation

The fundamental asymmetry between visible and hidden aspects of AI development creates binding constraints on agreement design. Research by the Carnegie Endowment demonstrates that any stable framework must focus on observable dimensions, particularly compute governance where infrastructure requirements make concealment difficult.
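One way to see why observable dimensions anchor agreement design: a portfolio of monitoring channels catches a covert program if any single channel detects it. The sketch below assumes independent detection channels (a strong simplification) and takes midpoints of the detection-confidence ranges from the table above:

```python
# Probability that at least one monitoring channel detects a covert
# program, assuming independent channels (an illustrative simplification).
# Values are midpoints of the detection-confidence ranges tabled above.
detection_prob = {
    "large training runs":      0.900,  # midpoint of 85-95%
    "data center construction": 0.945,  # midpoint of 90-99%
    "chip manufacturing":       0.775,  # midpoint of 70-85%
    "algorithm development":    0.100,  # midpoint of 5-15%
}

def p_any_detection(channels):
    """P(at least one detects) = 1 - P(every channel misses)."""
    p_all_miss = 1.0
    for p in channels.values():
        p_all_miss *= (1.0 - p)
    return 1.0 - p_all_miss

print(f"portfolio detection probability: {p_any_detection(detection_prob):.4f}")
```

Even with algorithm development nearly unobservable, the compute-side channels dominate the portfolio, which is the quantitative version of the Carnegie Endowment point that stable frameworks should center on visible infrastructure.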

Enforcement Mechanism Analysis

Economic enforcement tools have shown mixed effectiveness. Export controls successfully slowed Chinese semiconductor advancement but triggered significant retaliation and alternative supply chain development. CSIS economic security analysis indicates trade sanctions face diminishing returns against major economic powers with large domestic markets and alternative partnerships.

Diplomatic enforcement through alliance coordination offers promise but remains untested at scale. Brookings Institution research on technology diplomacy suggests middle powers could play crucial mediating roles, with EU regulatory frameworks potentially creating global standards that facilitate coordination.

Key Uncertainties and Expert Disagreements

Critical Unknowns

Verification Technology Development: Current monitoring capabilities remain insufficient for comprehensive AI oversight. Projects like the AI Safety Institute's evaluation frameworks aim to develop standardized assessment tools, but technical limitations persist. Whether breakthrough monitoring technologies emerge in the 2025-2030 timeframe determines agreement feasibility.

First-Mover Advantage Duration: Experts sharply disagree on whether early AI leaders achieve lasting dominance or face rapid catching-up dynamics. Analysis by Epoch AI suggests capability gaps may prove temporary due to knowledge spillovers and talent mobility, while others argue that recursive self-improvement creates winner-take-all dynamics.

Crisis Response Patterns: Historical precedents for cooperation during technological competition remain limited. Studies of nuclear arms control provide mixed lessons, with cooperation emerging slowly after dangerous confrontations. Whether AI crises catalyze cooperation or intensify racing remains unpredictable.

Expert Opinion Divergence

| Question | Optimistic View (25%) | Middle Position (50%) | Pessimistic View (25%) |
|---|---|---|---|
| Coordination prospects | Track 2 breakthroughs enable cooperation | Muddle through with informal constraints | Racing inevitable due to security imperatives |
| Verification feasibility | Technical solutions emerging rapidly | Partial monitoring possible for some dimensions | Fundamental unverifiability of key capabilities |
| Crisis impact | AI incidents generate cooperation momentum | Mixed effects depending on attribution and timing | Crises accelerate racing as stakes become clear |

Surveys by the Center for AI Safety reveal persistent disagreement among experts, with confidence intervals spanning 30-80% probability ranges for key coordination scenarios.
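Divergent expert camps can be combined with a simple linear opinion pool. The camp weights below come from the table headers (25/50/25); the per-camp probability estimates for successful coordination are illustrative assumptions of ours, not survey results:

```python
# Linear opinion pool over the three expert camps tabled above.
# Weights (0.25 / 0.50 / 0.25) come from the table headers; the per-camp
# P(coordination succeeds) values are assumed for illustration.
camps = [
    ("optimistic",  0.25, 0.60),  # (label, weight, assumed P(coordination))
    ("middle",      0.50, 0.30),
    ("pessimistic", 0.25, 0.10),
]

pooled = sum(weight * prob for _, weight, prob in camps)
print(f"pooled P(coordination): {pooled:.3f}")  # 0.325
```

A linear pool preserves each camp's weight transparently; more elaborate aggregation (logarithmic pooling, extremizing) changes the answer when camps disagree sharply, which is exactly the regime the survey data describe.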

Intervention Strategies and Leverage Points

High-Impact Intervention Categories

Track 2 Diplomatic Infrastructure: Investment in researcher exchanges, joint safety projects, and informal dialogue channels offers the highest return on investment for coordination building. Council on Foreign Relations analysis estimates $10-20M annually could maintain crucial technical communities across geopolitical divides.

Verification Technology Development: Compute monitoring systems, evaluation frameworks, and confidence-building measures require substantial technical investment. Estimates from AI governance organizations suggest $50-200M over five years could deliver breakthrough monitoring capabilities that enable verification.

Middle Power Coordination: EU, UK, and allied coordination could create alternative frameworks that facilitate eventual US-China engagement. European Council on Foreign Relations research indicates European regulatory frameworks may establish de facto global standards regardless of bilateral tensions.

Timeline-Dependent Strategy Shifts

| Time Horizon | Primary Focus | Success Metrics | Resource Allocation |
|---|---|---|---|
| 2024-2026 | Crisis prevention, Track 2 dialogue | Communication channels maintained, no major incidents | 60% diplomacy, 40% technical |
| 2026-2028 | Verification development, framework building | Monitoring systems deployed, informal agreements | 40% diplomacy, 60% technical |
| 2028-2030 | Formal agreements, implementation | Binding frameworks established, compliance verified | 50% diplomacy, 50% enforcement |

Current State Assessment

Coordination Climate Analysis

The current international climate exhibits significant deterioration from previous cooperation baselines. Pew Research polling shows public opinion in both countries increasingly views AI competition through zero-sum lenses, constraining political space for cooperation. Congressional hearings and Chinese policy documents frame technological leadership as existential national priorities, reducing flexibility for compromise.

However, countervailing forces maintain cooperation potential. Surveys of AI researchers reveal substantial cross-border agreement on safety priorities, with technical communities maintaining professional networks despite political tensions. Corporate interests in predictable regulatory environments create business constituencies for coordination, while shared economic dependencies constrain purely competitive approaches.

Near-Term Trajectory Indicators

Three key indicators will signal coordination direction over the next 12-18 months:

  1. Export control escalation: Further restrictions on AI-relevant technologies signal continued decoupling
  2. Academic collaboration patterns: Research partnership trends indicate scientific community resilience
  3. Crisis response coordination: How powers handle AI incidents reveals cooperation capacity under pressure

This coordination game connects directly to racing dynamics between AI labs, which exhibit similar prisoner's dilemma structures at the organizational level. The broader multipolar trap model provides a framework for understanding how multiple actors complicate bilateral coordination. AI governance responses depend fundamentally on whether international coordination succeeds or fails.

Critical dependencies include capabilities development timelines that determine available coordination windows, alignment difficulty that sets stakes for cooperation versus racing, and takeoff speeds that influence whether coordination can adapt to rapid capability changes.

Sources & Resources

Academic Sources

| Source | Type | Key Contribution |
|---|---|---|
| RAND AI Competition Analysis | Research Report | Game-theoretic framework for US-China competition |
| Georgetown CSET Publications | Policy Analysis | Empirical assessment of coordination prospects |
| Stanford HAI Governance Research | Academic Research | Technical verification and monitoring challenges |
| MIT CCI Coordination Studies | Research Center | Multidimensional coordination complexity analysis |

Policy Organizations

| Organization | Focus | Key Resources |
|---|---|---|
| Center for Strategic & International Studies | Strategic Analysis | Intelligence assessments, capability tracking |
| Atlantic Council | Policy Frameworks | Governance mechanisms, alliance coordination |
| Brookings Institution | Technology Diplomacy | Middle power roles, regulatory harmonization |
| Carnegie Endowment | International Relations | Verification mechanisms, confidence-building |

Government Resources

| Entity | Role | Documentation |
|---|---|---|
| US AI Safety Institute | Evaluation Standards | Technical frameworks for capability assessment |
| UK AI Safety Institute | International Coordination | Bilateral cooperation mechanisms |
| EU AI Office | Regulatory Framework | Global standard-setting through comprehensive legislation |

References

Epoch AI is a research organization focused on investigating and forecasting trends in artificial intelligence, particularly around compute, training data, and algorithmic progress. They produce empirical analyses and datasets to inform understanding of AI development trajectories and support better decision-making in AI governance and safety.

★★★★☆
2. Future of Humanity Institute

The official website of the Future of Humanity Institute (FHI), an Oxford University research center that was foundational in establishing the fields of existential risk research and AI safety. FHI closed on 16 April 2024 after approximately two decades of influential work. The site now serves as an archived record of the institution's history, research agenda, and legacy.

★★★★☆

The MIT Center for Collective Intelligence researches how groups of people and computers can be organized to act more intelligently than individuals alone. CCI explores collective problem-solving, collaborative decision-making, and the design of systems that harness distributed human and machine intelligence. Their work is relevant to AI governance, coordination problems, and mitigating risks from disinformation and influence operations.

Stanford's Human-Centered AI Institute (HAI) produces policy-oriented AI governance research, white papers, and briefs spanning health, international affairs, language inclusion, and scientific discovery. The institute convenes policymakers and academics globally, bridging AI research and regulatory implementation. It serves as a central repository for Stanford's AI policy output.

★★★★☆

The Carnegie Endowment for International Peace runs a dedicated AI security research initiative examining the intersection of artificial intelligence and national/international security. The project likely covers topics including AI in military applications, arms control, international coordination, and governance frameworks. However, the specific page appears to be unavailable or moved.

★★★★☆
6. Council on Foreign Relations analysis (Council on Foreign Relations)

The Council on Foreign Relations (CFR) homepage showcases current geopolitical analysis, with a heavy focus on the US-Iran conflict and its global ripple effects as of early 2026. Topics covered include energy disruption, drone warfare, geoeconomics, and AI. While not AI-safety focused, CFR covers AI governance and international coordination topics relevant to the field.

★★★★☆

This FHI paper by Stuart Armstrong, Nick Bostrom, and Carl Shulman models competitive AI development dynamics, showing how racing dynamics between developers can lead to reduced safety investment and catastrophic outcomes. It formalizes how competitive pressures incentivize cutting corners on safety, and explores conditions under which cooperation or regulation could prevent races to dangerous capability thresholds.

★★★★☆

A 2014 Atlantic Council news item in which Scowcroft Center experts comment on the Pentagon's proposed defense budget, addressing congressional battles, spending readjustments post-surge, and the controversy around controlling military personnel costs.

★★★★☆

Pew Research Center is a nonpartisan fact tank providing data and analysis on public attitudes toward technology, AI, governance, media, and society. It conducts large-scale surveys tracking American and global opinions on AI adoption, institutional trust, news habits, and emerging technology risks. Its AI-focused research tracks public perception of AI benefits and harms over time.

★★★★☆
10. AI Impacts

AI Impacts is a research organization that investigates empirical questions relevant to AI forecasting and safety, including AI timelines, discontinuous progress risks, and existential risk arguments. It maintains a wiki and blog featuring expert surveys, historical analyses, and structured arguments about transformative AI development. Notable outputs include periodic expert surveys on AI progress timelines.

★★★☆☆

Metaculus is a collaborative forecasting platform that aggregates probabilistic predictions from a community of forecasters on questions spanning AI, geopolitics, health, climate, and nuclear risks. It provides crowd-sourced probability estimates on future events, serving as a reference for calibrated uncertainty quantification relevant to risk assessment and policy planning.

★★★☆☆

This Atlantic Council report appears to have been removed or relocated, returning a 404 error. Based on the title and existing tags, it likely examined algorithmic systems in the context of international power dynamics, governance, and coordination challenges.

★★★★☆
13. Nature analysis of publication patterns (Nature, peer-reviewed)

A Nature article reports that the developers of Trikafta, a triple-drug combination treatment for cystic fibrosis, have won a US$3-million Breakthrough Prize. Trikafta has proven transformative for approximately 90% of cystic fibrosis patients by targeting the malformed proteins at the root of the disease. The award recognizes this significant medical breakthrough that has substantially improved outcomes for people with this inherited disorder affecting the lungs and other organs.

★★★★★
14. Brookings: AI Competition (Brookings Institution)

The Brookings Institution's AI topic hub aggregates policy analysis, research, and expert commentary on artificial intelligence across governance, regulation, economics, and societal impacts. It covers a wide range of AI-related issues including state-level legislation, consumer protection, energy infrastructure, and education. The hub serves as a clearinghouse for centrist policy research from a prominent Washington D.C. think tank.

★★★★☆

The CSIS Economics Program (now merged into the Economics Program and Scholl Chair in International Business) leads research on international economic policy, global economic governance, and the domestic economic foundations of U.S. power. Key work includes analysis of China's economic coercion tactics and strategies for countering them through deterrence frameworks. The program produces reports, articles, and events on economic security topics relevant to U.S. foreign policy.

★★★★☆

ECFR is a pan-European think tank producing research, analysis, and commentary on European foreign policy, geopolitics, and international relations. It covers topics including European defense, transatlantic relations, the Russia-Ukraine war, and geoeconomics. Its work is relevant to AI governance insofar as it addresses technology policy, international coordination, and the geopolitical context shaping AI regulation.

This RAND Corporation report examines the misuse risks of large language models (LLMs) in biological weapons development through a red-team methodology. Preliminary findings show that while LLMs haven't provided explicit weapon-creation instructions, they do offer guidance useful for planning biological attacks, including agent selection and acquisition strategies. The authors caution that AI's rapid advancement may outpace regulatory oversight, closing historical information gaps that previously hindered bioweapon development.

★★★★☆

The Center for Strategic and International Studies (CSIS) Strategic Technologies Program analyzes the intersection of technology, national security, and international competition. It produces policy analysis on topics including AI governance, cybersecurity, and emerging technologies with geopolitical implications. The program informs policymakers and the public on technology strategy and regulation.

★★★★☆
19. AI Safety Institute - GOV.UK (UK Government)

The UK AI Safety Institute (recently rebranded as the AI Security Institute) is a government body under the Department for Science, Innovation and Technology focused on minimizing risks from rapid and unexpected AI advances. It conducts and publishes safety research, international coordination reports, and policy guidance, while managing grants for systemic AI safety research.

★★★★☆

This Georgetown CSET publication applies game-theoretic modeling to analyze competitive AI dynamics between major powers, exploring how strategic incentives shape national AI development decisions. It examines how geopolitical competition may accelerate AI development at the expense of safety considerations, and considers frameworks for international coordination. The work informs policy discussions on preventing dangerous AI races while maintaining national interests.

★★★★☆

The Arms Control Association is a nonprofit organization dedicated to promoting public understanding of and support for effective arms control policies. It provides analysis, advocacy, and information on nuclear, chemical, biological, and conventional weapons treaties and negotiations. The organization serves as a key resource for policy research on international security and weapons nonproliferation.

The Center for AI Safety (CAIS) is a research organization focused on mitigating catastrophic and existential risks from advanced AI systems. It conducts technical research, publishes surveys and statements, and supports field-building efforts across academia and industry. CAIS is notable for its broad coalition-building, including its widely-cited statement on AI extinction risk signed by leading researchers.

★★★★☆

The EU AI Act is the world's first comprehensive legal framework for regulating artificial intelligence, classifying AI systems into risk tiers (unacceptable, high, limited, minimal) with corresponding obligations. It imposes strict requirements on high-risk AI applications including transparency, human oversight, and conformity assessments to protect fundamental rights and safety. The Act represents a landmark attempt at binding AI governance at a supranational level.

★★★★☆
24. US AI Safety Institute (NIST)

The Center for AI Standards and Innovation (CAISI) at NIST is the U.S. government's primary body for AI safety standards and industry coordination. It develops voluntary guidelines, evaluates AI systems for national security risks (cybersecurity, biosecurity), and represents U.S. interests in international AI standards efforts.

★★★★★
25. Brookings Institution research (Brookings Institution)

The Brookings Institution's International Affairs topic page aggregates policy research, expert analysis, and recommendations on global geopolitical challenges including nuclear security, energy security, regional conflicts, and international coordination. It serves as a hub for policymakers and stakeholders seeking evidence-based insights on peace and security issues.

★★★★☆
26. RAND verification studies (RAND Corporation)

RAND Corporation's verification research hub covers methods and frameworks for verifying compliance with arms control, international agreements, and emerging technology governance. The research applies game theory, inspection regimes, and policy analysis to verification challenges relevant to AI governance and international coordination on advanced technologies.

★★★★☆
27. Export control measures implemented in October 2022 (Bureau of Industry and Security)

The U.S. Bureau of Industry and Security (BIS) implemented sweeping export control measures in October 2022 targeting China's ability to acquire advanced semiconductors, semiconductor manufacturing equipment, and supercomputing capabilities. These rules restrict the export of chips and related technology that could be used for AI and military applications. The controls represent a significant policy intervention aimed at limiting China's AI and advanced computing development.

★★★★☆
28. CSET: AI Market Dynamics (CSET Georgetown)

CSET (Center for Security and Emerging Technology) at Georgetown University is a policy research organization focused on the security implications of emerging technologies, particularly AI. It produces research on AI policy, workforce, geopolitics, and governance. The content could not be fully extracted, limiting detailed analysis.

★★★★☆

The Centre for the Governance of AI (GovAI) is a leading research organization dedicated to helping decision-makers navigate the transition to a world with advanced AI. It produces rigorous research on AI governance, policy, and societal impacts, while fostering a global talent pipeline for responsible AI oversight. GovAI bridges technical AI safety concerns with practical policy recommendations.

★★★★☆
30. UK AI Safety Institute (AISI) (UK Government)

The UK AI Safety Institute (AISI) is the UK government's dedicated body for evaluating and mitigating risks from advanced AI systems. It conducts technical safety research, develops evaluation frameworks for frontier AI models, and works with international partners to inform global AI governance and policy.

★★★★☆

Related Wiki Pages

Top Related Pages

Approaches

AI Governance Coordination Technologies, Multi-Agent Safety

Analysis

Worldview-Intervention Mapping, Intervention Timing Windows, Racing Dynamics Impact Model, Multipolar Trap Dynamics Model, AI Safety Culture Equilibrium Model, AI Safety Multi-Actor Strategic Landscape

Concepts

AGI Timeline, Elite Coordination Infrastructure, International Coordination Mechanisms, Governance-Focused Worldview, Autonomous Cooperative Agents

Organizations

Georgetown CSET

Key Debates

AI Structural Risk Cruxes, AI Governance and Policy