International Coordination Mechanisms
Comprehensive analysis of international AI coordination mechanisms shows growing but limited progress: 11-country AI Safety Institute network with ~$200M budget expanding to include India; Council of Europe treaty with 17 signatories and 3 ratifications; OECD Hiroshima framework with 13+ company pledges; Paris Summit drawing 61 nations (though US/UK abstained). Assessment finds high potential impact (40-60% racing risk reduction) if successful but low-medium tractability (25-40% probability), with information sharing most feasible (already active via AISI network) while capability restrictions face near-insurmountable geopolitical obstacles. UN Global Dialogue launch and India's 2026 AI Impact Summit mark expanding Global South engagement.
International coordination represents one of the most challenging yet potentially crucial approaches to AI safety, involving the development of global cooperation mechanisms to ensure advanced AI systems are developed and deployed safely across all major AI powers. As AI capabilities advance rapidly across multiple nations—particularly the United States, China, and the United Kingdom—the absence of coordinated safety measures could lead to dangerous race dynamics where competitive pressures override safety considerations.
The fundamental challenge stems from the global nature of AI development combined with the potentially catastrophic consequences of misaligned advanced AI systems. Unlike previous technological risks that could be contained nationally, advanced AI capabilities and their risks are inherently global, requiring unprecedented levels of international cooperation in an era of heightened geopolitical tensions. The stakes are particularly high given that uncoordinated AI development could lead to a "race to the bottom" where safety precautions are sacrificed for competitive advantage.
Current efforts at international coordination show both promise and significant limitations. The AI Safety Summit series, beginning with the UK's Bletchley Park summit in November 2023, has brought together major AI powers but has largely remained at the level of symbolic commitments rather than substantive agreements. The Council of Europe's Framework Convention on AI, adopted in May 2024, represents the first legally binding international AI treaty. The International Network of AI Safety Institutes, launched in November 2024, takes a more technical approach to coordination, though its effectiveness remains to be demonstrated. Meanwhile, bilateral dialogues between the US and China on AI safety have begun but operate within the broader context of strategic competition that limits trust and information sharing.
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Tractability | Low-Medium (25-40% success probability) | Geopolitical tensions between US and China limit substantive cooperation; Council of Europe treaty has 17 signatories but weak enforcement |
| Impact if Successful | Very High (potential 40-60% reduction in racing risk) | Could prevent racing dynamics, establish global safety standards, enable coordinated response to AI incidents |
| Current Progress | Limited (15-25% of needed infrastructure) | Three major summits held (2023-2025); 11-country AI Safety Institute network formed; first binding treaty signed |
| Key Barriers | Geopolitical competition | US-China strategic rivalry; AI framed as national security issue in both countries; US/UK declined Paris declaration |
| Verification Challenges | High (less than 10% of nuclear-style verification feasible) | AI capabilities harder to monitor than nuclear/chemical weapons; no equivalent to IAEA inspections |
| Time Horizon | 5-15 years | Building international institutions comparable to nuclear governance took 25 years; UN Global Dialogue launched 2025 |
| Resource Requirements | High ($200-250M annually) | AI Safety Institutes: UK ≈$65M, US $47.7M requested, Canada $36M; treaty secretariats require additional funding |
| Global Participation | Growing (61 countries at Paris 2025) | Paris Summit drew 61 signatories including China, India, EU; up from 29 at Bletchley |
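The tractability and impact rows above can be combined into a rough expected-value estimate. The sketch below is purely illustrative, using only the ranges stated in the table (25-40% success probability, 40-60% conditional reduction in racing risk):

```python
# Back-of-envelope expected-value sketch (illustrative only): combines the
# table's tractability range with its conditional impact-if-successful range.

def expected_risk_reduction(p_success: float, impact_if_success: float) -> float:
    """Expected reduction in racing risk = P(success) x reduction given success."""
    return p_success * impact_if_success

# Pessimistic and optimistic ends of both ranges from the table.
low = expected_risk_reduction(0.25, 0.40)   # 0.25 * 0.40 = 0.10
high = expected_risk_reduction(0.40, 0.60)  # 0.40 * 0.60 = 0.24

print(f"Expected racing-risk reduction: {low:.0%} to {high:.0%}")
```

Even under the optimistic end of both ranges, the expected reduction is roughly a quarter of racing risk, which is why the table rates impact "very high" but conditions it on low-medium tractability.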
Comparative National Approaches to AI Governance
The three major AI powers—the United States, European Union, and China—have adopted fundamentally different regulatory philosophies that reflect their distinct political systems, economic priorities, and cultural values. These divergent approaches create both challenges and opportunities for international coordination. Understanding these differences is essential for assessing the feasibility of various coordination mechanisms.
Regulatory Philosophy Comparison
| Dimension | European Union | United States | China |
|---|---|---|---|
| Regulatory Model | Comprehensive, risk-based framework | Decentralized, sector-specific | Centralized, state-led directives |
| Primary Legislation | EU AI Act (August 2024) | No unified federal law; NIST RMF, state laws, executive orders | Algorithmic Recommendation Rules (2022), Generative AI Measures (2023) |
| Risk Classification | Four tiers: unacceptable, high, limited, minimal | Varies by agency and sector | Aligned with national security and social stability priorities |
| Enforcement Body | European AI Office | Multiple agencies (FDA, FTC, NHTSA, etc.) | Cyberspace Administration of China (CAC) |
| Innovation Stance | Precautionary; ex-ante requirements | Permissive; sector-by-sector | Strategic; strong state support with content controls |
| Data Requirements | GDPR compliance, algorithmic impact assessments | Sector-specific; voluntary for most AI | Data localization; security reviews |
| Transparency | High; documentation and disclosure mandated | Variable; depends on sector | Limited; state oversight prioritized |
| Extraterritorial Reach | Strong (Brussels Effect) | Moderate (export controls) | Limited to domestic market |
Strengths and Weaknesses by Approach
| Approach | Strengths | Weaknesses | Coordination Implications |
|---|---|---|---|
| EU (Comprehensive) | Clear rules; strong rights protection; international influence via Brussels Effect | May slow innovation; compliance costs; complex implementation | Could set global standards; others may resist adoption |
| US (Decentralized) | Flexibility; innovation-friendly; rapid adaptation | Inconsistent coverage; gaps in protection; state fragmentation | Harder to negotiate unified positions; industry-led standards |
| China (State-Led) | Rapid implementation; strategic coherence; strong enforcement capacity | Limited transparency; privacy concerns; political controls | Different governance values complicate alignment |
According to recent analysis, "Each regulatory system reflects distinct cultural, political and economic perspectives. Each also highlights differing regional perspectives on regulatory risk-benefit tradeoffs, with divergent judgments on the balance between safety versus innovation and cooperation versus competition." The 2025 Government AI Readiness Index notes that the global AI leadership picture is "increasingly bipolar," with the United States and China emerging as the two dominant forces.
Major International Coordination Mechanisms
Current Framework Landscape
| Mechanism | Type | Participants | Status (Dec 2025) | Binding? |
|---|---|---|---|---|
| Council of Europe AI Treaty | Multilateral treaty | 17 signatories (US, UK, EU, Canada, Japan, Switzerland, others) | Open for signature Sep 2024; ratified by UK, France, Norway | Yes (first binding AI treaty) |
| International Network of AI Safety Institutes | Technical cooperation | 11 countries + EU | Inaugural meeting Nov 2024 | No |
| Bletchley Declaration | Political declaration | 29 countries + EU | Signed Nov 2023 | No |
| Seoul Frontier AI Commitments | Industry pledges | 16 major AI companies | May 2024 | No |
| G7 Hiroshima AI Process | Code of conduct | G7 members | Adopted Oct 2023 | No |
| US-China AI Dialogue | Bilateral | US, China | First meeting May 2024 | No |
| UN AI Advisory Body | Multilateral | UN Member States | Final report Sep 2024 | No |
AI Safety Institute Network
The International Network of AI Safety Institutes, launched in November 2024, represents the most concrete technical cooperation mechanism:
| Institute | Country | Annual Budget | Focus Areas | Status |
|---|---|---|---|---|
| UK AI Security Institute | United Kingdom | ≈$65M (£50M) | Near-term security risks, model evaluations | Rebranded Feb 2025 |
| US CAISI (NIST) | United States | $47.7M (FY2025 request) | Standards, evaluation frameworks | Renamed 2025 |
| EU AI Office | European Union | ≈$8M | AI Act enforcement, standards | Operational since 2024 |
| AISI Japan | Japan | ≈$5M | Evaluations, safety research | Building capacity |
| AISI Korea | Republic of Korea | ≈$5M | Safety evaluations | Building capacity |
| AISI Singapore | Singapore | ≈$3M | Governance, evaluations | Building capacity |
| AISI Canada | Canada | ≈$36M (C$50M) | Safety standards | Announced Apr 2024 |
| AISI Australia | Australia | TBD | Safety research, risk response | Operational early 2026 |
| AISI France | France | ≈$5M | Safety research, EU coordination | Building capacity |
| AISI Kenya | Kenya | ≈$1M | Global South representation | Early stage |
| IndiaAI Safety Institute | India | TBD | Safe AI model application | Announced Jan 2025 |
The network announced $11 million in funding for synthetic content research and completed its first multilateral model testing exercise at the November 2024 San Francisco convening.
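As a quick consistency check on the resource-requirement estimate, the known annual budgets from the institute table can be totaled. This is a sketch using only the figures listed above; institutes marked "TBD" are excluded:

```python
# Illustrative aggregation of known annual AISI budgets (USD millions),
# taken from the table above; "TBD" entries (Australia, India) excluded.
budgets_musd = {
    "UK AI Security Institute": 65.0,
    "US CAISI (NIST)": 47.7,
    "EU AI Office": 8.0,
    "AISI Japan": 5.0,
    "AISI Korea": 5.0,
    "AISI Singapore": 3.0,
    "AISI Canada": 36.0,
    "AISI France": 5.0,
    "AISI Kenya": 1.0,
}

total = sum(budgets_musd.values())
print(f"Known AISI network budgets: ~${total:.1f}M/year")
# Adding the TBD institutes and treaty secretariat costs brings this
# in line with the $200-250M annual estimate cited earlier.
```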
2025 Institutional Developments
The landscape of international AI governance institutions underwent significant changes in 2025, reflecting evolving priorities and geopolitical dynamics.
UK AI Safety Institute Rebranding (February 2025): In a significant shift, the UK renamed its AI Safety Institute to the "AI Security Institute" at the Munich Security Conference. Technology Secretary Peter Kyle stated: "This change brings us into line with what most people would expect an Institute like this to be doing." The rebranded institute now focuses on "serious AI risks with security implications"—including chemical and biological weapons development, cyber-attacks, and crimes such as fraud—rather than broader existential safety concerns. This pivot signals a potential divergence in international approaches, with the UK prioritizing near-term security threats over long-term alignment risks.
OECD G7 Hiroshima Reporting Framework (February 2025): The OECD launched the first global framework for companies to report on implementation of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems. Major AI developers—including Amazon, Anthropic, Fujitsu, Google, KDDI Corporation, Microsoft, NEC Corporation, NTT, OpenAI, Preferred Networks, Rakuten Group, Salesforce, and Softbank—pledged to complete the inaugural framework by April 15, 2025. First reports were published in June 2025. This represents the first standardized monitoring mechanism for voluntary AI safety commitments, though enforcement remains limited to reputational incentives.
UN Global Dialogue on AI Governance (September 2025): Building on the Global Digital Compact adopted in 2024, the UN launched the Global Dialogue on AI Governance—described as "the world's principal venue for collective focus on this transformative technology." The initiative complements existing efforts at the OECD, G7, and regional organizations while providing an inclusive forum for developing nations. The UN also established the Independent International Scientific Panel on AI, comprising 40 expert members who will provide evidence-based insights on AI opportunities, risks, and impacts—sometimes likened to an "IPCC for AI." Annual convenings are scheduled for the 2026 AI for Good Global Summit in Geneva and 2027 in New York.
G7 December 2025 Declaration: Meeting in Montreal, G7 Ministers responsible for industry, digital affairs, and technology adopted a joint declaration reaffirming commitment to risk-based approaches encompassing system transparency, technical robustness, and data quality. The declaration called for increased convergence of regulatory approaches at the international level through OECD work, aiming to limit fragmentation and secure cross-border investments.
| Development | Date | Significance | Limitations |
|---|---|---|---|
| UK AI Security Institute rebrand | Feb 2025 | Signals shift from existential to near-term security focus | May reduce coordination on alignment research |
| OECD Hiroshima Reporting Framework | Feb 2025 | First standardized monitoring; 13+ companies pledged | No enforcement mechanism; voluntary only |
| Switzerland signs CoE Treaty | Mar 2025 | 17th signatory; growing European consensus | US Senate ratification uncertain |
| EU proposes CoE Treaty ratification | Jun 2025 | Formal EU commitment to binding AI governance | Requires Council and Parliament approval |
| UN Global Dialogue launch | Sep 2025 | Inclusive global forum; 40-member Scientific Panel | US opposed multilateral mechanisms; non-binding |
| G7 Montreal Declaration | Dec 2025 | Regulatory convergence commitment | G7-only; excludes China |
| India AISI announcement | Jan 2025 | First major developing economy AISI | Budget and scope TBD |
Critical Cooperation Areas and Feasibility
The landscape of potential international coordination varies dramatically in feasibility across different domains. Information sharing on AI safety research represents perhaps the most tractable area for cooperation, as it provides mutual benefits without requiring countries to limit their capabilities development. The establishment of common safety standards and evaluation protocols offers medium feasibility, building on existing precedents in other technology sectors while allowing countries to maintain competitive positions.
Cooperation Feasibility Matrix
| Cooperation Area | Feasibility | Current Status | Key Enablers | Key Barriers |
|---|---|---|---|---|
| Safety research sharing | High | Active via AISI network | Mutual benefit; low competitive cost | Classification concerns; IP protection |
| Evaluation standards | Medium-High | OECD framework launched Feb 2025 | Technical objectivity; industry interest | Different risk priorities; enforcement gaps |
| Incident reporting | Medium | No formal mechanism | Shared interest in avoiding catastrophe | Attribution challenges; competitive sensitivity |
| Crisis communication | Medium | Biden-Xi nuclear AI agreement (Nov 2024) | Nuclear precedent; mutual deterrence | Trust deficit; limited scope |
| Deployment standards | Medium | EU AI Act extraterritorial reach | Brussels Effect; market access | Sovereignty concerns; innovation impact |
| Capability restrictions | Low | US export controls (unilateral) | Security imperatives | Zero-sum framing; verification impossible |
| Development moratoria | Very Low | No serious proposals | Catastrophic risk awareness | First-mover advantages; enforcement |
However, coordination on capability restrictions faces significant challenges due to the dual-use nature of AI research and the perceived strategic importance of AI leadership. Export controls on AI hardware, implemented primarily by the United States since 2022, illustrate both the potential and limitations of unilateral approaches—while they may slow capability development in target countries, they also reduce trust and may accelerate independent development efforts. According to RAND analysis, China's AI ecosystem remains competitive despite US export controls, and DeepSeek's founder has stated that "bans on shipments of advanced chips are the problem" rather than funding constraints.
Crisis communication mechanisms represent another medium-feasibility area for cooperation, drawing parallels to nuclear-era hotlines and confidence-building measures. Such mechanisms could prove crucial if advanced AI systems begin exhibiting concerning behaviors or if there are near-miss incidents that require coordinated responses. The November 2024 Biden-Xi agreement that "humans, not AI" should control nuclear weapons represents a modest but significant step in this direction.
International Coordination Landscape
The following diagram illustrates the multi-layered architecture of international AI governance, from binding treaties to voluntary commitments:
```mermaid
flowchart TD
    subgraph BINDING["Binding Frameworks"]
        COE[Council of Europe<br/>AI Treaty 2024]
        EUACT[EU AI Act 2024]
    end
    subgraph MULTILATERAL["Multilateral Initiatives"]
        AISI[AI Safety Institute<br/>Network - 11 countries]
        UN[UN AI Advisory<br/>Body 2024]
        G7[G7 Hiroshima<br/>Process]
    end
    subgraph SUMMITS["Summit Series"]
        BLETCH[Bletchley 2023<br/>29 countries + EU]
        SEOUL[Seoul 2024<br/>27 countries + EU]
        PARIS[Paris 2025<br/>61 countries]
    end
    subgraph BILATERAL["Bilateral Dialogues"]
        USCHINA[US-China AI<br/>Dialogue 2024]
        UKCHINA[UK-China<br/>Discussions]
    end
    BLETCH --> SEOUL
    SEOUL --> PARIS
    SEOUL --> AISI
    AISI --> COE
    UN --> PARIS
    style COE fill:#90EE90
    style EUACT fill:#90EE90
    style AISI fill:#87CEEB
    style BLETCH fill:#FFE4B5
    style SEOUL fill:#FFE4B5
    style PARIS fill:#FFE4B5
    style USCHINA fill:#FFB6C1
```
The US-China Cooperation Dilemma
The central challenge for international AI coordination lies in US-China relations, as these two countries lead global AI development but operate within an increasingly adversarial strategic context. The feasibility of meaningful cooperation faces fundamental tensions between mutual interests in avoiding catastrophic outcomes and zero-sum perceptions of AI competition.
US-China AI Engagement Timeline
| Date | Event | Significance |
|---|---|---|
| Nov 2023 | Xi-Biden APEC meeting | Commitment to establish AI dialogue |
| Nov 2023 | Both sign Bletchley Declaration | First joint safety commitment |
| May 2024 | First intergovernmental AI dialogue (Geneva) | Working-level technical discussions |
| Nov 2024 | Biden-Xi nuclear AI agreement | Agreement that humans control nuclear weapons |
| Jul 2025 | China publishes Global AI Governance Action Plan | Signals continued engagement interest |
Arguments for possible cooperation point to several factors: both countries have expressed concern about AI risks and have established government entities focused on AI safety; there are precedents for technical cooperation even during periods of broader competition, such as in climate research; and Chinese officials have engaged substantively in international AI safety discussions, suggesting genuine concern about risks rather than purely strategic positioning.
However, significant obstacles remain. The framing of AI as central to national security and economic competitiveness in both countries creates strong incentives against sharing information or coordinating on limitations. The broader deterioration in US-China relations since 2018 has created institutional barriers to cooperation, while mutual suspicions about intentions make verification and trust-building extremely difficult.
According to RAND researchers, "scoping an AI dialogue is difficult because 'AI' does not mean anything specific in many U.S.-China engagements. It means everything from self-driving cars and autonomous weapons to facial recognition, face-swapping apps, ChatGPT, and a potential robot apocalypse."
The Biden administration's approach combined competitive measures (export controls, investment restrictions) with selective engagement on shared challenges, but progress remained limited. Chinese participation in international AI safety discussions has increased, but substantive commitments remain vague, and there are questions about whether engagement reflects genuine safety concerns or strategic positioning.
Lessons from Nuclear Governance
Historical comparisons to nuclear arms control offer both relevant precedents and important cautionary notes. According to RAND analysis on nuclear history and AI governance, the development of nuclear non-proliferation took approximately 25 years from the first atomic weapons to the NPT entering into force in 1970.
Transferable Lessons vs. Key Differences
| Dimension | Nuclear Governance | AI Governance | Implication |
|---|---|---|---|
| Verification | Physical inspections (IAEA) | No equivalent for AI capabilities | Harder to monitor compliance |
| Containment | Rare materials, specialized facilities | Widely distributed, software-based | Export controls less effective |
| State control | Governments control most capabilities | Private companies lead development | Different negotiating parties needed |
| Demonstrable harm | Hiroshima/Nagasaki demonstrated risks | AI harms remain speculative | Less urgency for cooperation |
| Timeline to develop | Years, billions of dollars | Months, millions of dollars | Faster proliferation |
| Dual-use nature | Clear weapons vs. energy distinction | Almost all AI research is dual-use | Harder to define restrictions |
According to the Finnish Institute of International Affairs, "compelling arguments have been made to state why nuclear governance models won't work for AI: AI lacks state control, has no reliable verification tools, and is inherently harder to contain."
However, some lessons remain transferable. The GovAI research paper on the Baruch Plan notes that early cooperation attempts failed but built foundations for later success. Norm-building and stigmatization of dangerous practices can work even without enforcement, and crisis communication mechanisms (like nuclear hotlines) prove valuable during tensions.
Safety Implications and Risk Considerations
International coordination presents both promising and concerning implications for AI safety. On the positive side, coordinated approaches could prevent dangerous race dynamics that might otherwise pressure developers to cut safety corners in pursuit of competitive advantage. Shared safety research could accelerate the development of alignment techniques and safety evaluation methods, while coordinated deployment standards could ensure that safety considerations are maintained globally rather than just in safety-conscious jurisdictions.
However, coordination efforts also carry risks that must be carefully managed. Information sharing on AI capabilities could inadvertently accelerate dangerous capabilities development in countries with weaker safety practices. Coordination mechanisms might legitimize or strengthen authoritarian uses of AI by creating channels for technology transfer. There are also risks that coordination efforts could create false confidence or serve as cover for continued dangerous development practices.
The timing of coordination efforts matters significantly. Early coordination on safety research and standards may be more feasible and beneficial than attempts at capability restrictions, which become more difficult as strategic stakes increase. However, waiting too long to establish coordination mechanisms may mean they are unavailable when needed most urgently.
Current Trajectory and Near-Term Prospects
AI Summit Series Evolution
The international AI summit series has grown in scope but faces questions about substantive impact:
| Summit | Date | Signatories | Key Outcomes | Criticism |
|---|---|---|---|---|
| Bletchley (UK) | Nov 2023 | 29 countries + EU | Bletchley Declaration; AI Safety Institutes commitment | Symbolic only; no enforcement |
| Seoul (Korea) | May 2024 | 27 countries + EU | Frontier AI Safety Commitments (16 companies) | Industry self-regulation |
| Paris (France) | Feb 2025 | 61 countries | $100M Current AI endowment; environmental coalition | US and UK declined to sign joint declaration |
| New Delhi (India) | Feb 2026 | TBD | AI Impact Summit—first Global South host | Pending |
The Paris AI Action Summit highlighted emerging tensions. While 58 countries signed a joint declaration on "Inclusive and Sustainable AI," the US and UK refused to sign, citing lack of "practical clarity" on global governance. According to the Financial Times, the summit "highlighted a shift in the dynamics towards geopolitical competition" characterized as "a new AI arms race" between the US and China.
Anthropic CEO Dario Amodei reportedly called the Paris Summit a "missed opportunity" for addressing AI risks, with similar concerns voiced by David Leslie of the Alan Turing Institute and Max Tegmark of the Future of Life Institute.
Near-Term Outlook (2025-2027)
The trajectory of international AI coordination appears to be following a pattern of incremental institutionalization amid persistent geopolitical constraints. Several trends from 2025 are likely to continue:
Observed 2025 developments shaping future trajectory:
- UK pivot from "safety" to "security" framing may influence other national institutes
- OECD reporting framework provides template for monitoring voluntary commitments
- UN Global Dialogue and Scientific Panel creating inclusive multilateral venues
- Singapore-Japan joint testing report demonstrates practical AISI network cooperation
Most likely developments (2026-2027):
- AI Safety Institute network expansion (India hosting February 2026 AI Impact Summit—first Global South host)
- A Trump-Xi exchange of visits planned for 2026 could include AI biosecurity cooperation, per the 2025 UN address
- EU AI Act enforcement creating de facto international standards via Brussels Effect
- UN Global Dialogue convenings in Geneva (2026) and New York (2027) with Scientific Panel reports
- Possible convergence of the US CAISI and the UK AI Security Institute on near-term threats
Key uncertainties:
- Impact of US political changes on export controls and international engagement
- Whether China will deepen or reduce participation in Western-led initiatives
- Whether a major AI incident could create momentum for stronger coordination
- Trajectory of UK security-focused approach vs broader safety concerns
The European Union's AI Act enforcement, which began in phases from August 2024, may create additional coordination opportunities through regulatory alignment, as companies seeking EU market access adopt its requirements globally. According to CSET's analysis, understanding the underlying assumptions of different governance proposals is essential for navigating the increasingly complex international landscape.
Key Uncertainties and Research Questions
Several critical uncertainties shape the prospects for international AI coordination:
| Uncertainty | Current Assessment | Impact on Coordination |
|---|---|---|
| Is US-China cooperation possible? | Low probability of deep cooperation; working-level dialogue possible | Central to global coordination success |
| Can AI Safety Institutes influence development? | Unproven; budgets small relative to industry | Determines value of technical cooperation |
| Are verification mechanisms feasible? | Harder than nuclear/chemical; no good analogies | Limits enforceable agreements |
| Will AI incidents create cooperation windows? | Unknown; depends on incident severity/attribution | Could shift political feasibility rapidly |
| Will private sector or governments lead? | Currently mixed; companies have more technical capacity | Affects negotiating structures needed |
The effectiveness of technical cooperation through AI Safety Institutes is still being tested, with key questions about whether such cooperation can influence actual AI development practices or remains largely academic. The combined budget of the AI Safety Institute network (approximately $200-250 million annually following 2025 expansions) is dwarfed by private sector AI spending (over $100 billion annually), raising questions about their practical influence.
Questions about verification and compliance with international AI agreements remain largely theoretical but will become critical if more substantive agreements are attempted. According to research on AI treaty verification, "substantial preparations are needed: (1) developing privacy-preserving, secure, and acceptably priced methods for verifying the compliance of hardware, given inspection access; and (2) building an initial, incomplete verification system, with authorities and precedents that allow its gaps to be quickly closed if and when the political will arises."
The broader question of whether international coordination is necessary for AI safety depends partly on unresolved technical questions about AI alignment and control. If alignment problems prove tractable through purely technical means, the importance of international coordination may diminish. However, if alignment remains difficult or if powerful AI systems create new forms of risk, international coordination may prove essential regardless of its current political feasibility.
Sources and Further Reading
Official Documents and Declarations
- The Bletchley Declaration - UK Government (November 2023)
- Seoul Declaration for Safe, Innovative and Inclusive AI - AI Seoul Summit (May 2024)
- Frontier AI Safety Commitments - AI Seoul Summit (May 2024)
- Council of Europe Framework Convention on AI - Council of Europe (May 2024)
- International Network of AI Safety Institutes Fact Sheet - US Commerce Department (November 2024)
Analysis and Research
- A Roadmap for a US-China AI Dialogue - Brookings Institution
- Potential for U.S.-China Cooperation on Reducing AI Risks - RAND Corporation
- Insights from Nuclear History for AI Governance - RAND Corporation
- The AI Safety Institute International Network: Next Steps - CSIS
- International Control of Powerful Technology: Lessons from the Baruch Plan - GovAI
- Nuclear Arms Control Policies and Safety in AI - Finnish Institute of International Affairs
- U.S. Export Controls and China: Advanced Semiconductors - Congressional Research Service
- AI Governance at the Frontier - CSET (November 2025)
- GovAI Research on International Governance - Centre for the Governance of AI
- Comparative Global AI Regulation - Policy perspectives from the EU, China, and the US
- 2025 Government AI Readiness Index - Oxford Insights
- Promising Topics for US-China Dialogues on AI Risks - ACM FAccT 2025
- How China and the US Can Make AI Safer for Everyone - The Diplomat (January 2026)
- Eight Ways AI Will Shape Geopolitics in 2026 - Atlantic Council
- Strengthening International Cooperation on AI - Brookings Institution
- The Annual AI Governance Report 2025 - ITU
Summit Coverage and News
- Paris AI Action Summit Official Site - French Government
- Key Outcomes of the AI Seoul Summit - techUK
- Did the Paris AI Action Summit Deliver? - The Future Society
- China and the United States Begin Official AI Dialogue - China US Focus
- Paris AI Summit: Why Won't US, UK Sign Global AI Pact? - Al Jazeera
- UN Secretary-General Launches Global Dialogue on AI Governance - UN Press Release
- The UN's New AI Governance Bodies Explained - World Economic Forum
- OECD Launches Hiroshima AI Process Reporting Framework - OECD
- How the G7's New AI Reporting Framework Could Shape AI Governance - OECD.AI
- Global Landscape of AI Safety Institutes - All Tech Is Human
References
The Council of Europe's Framework Convention on Artificial Intelligence is the first international legally binding treaty addressing AI risks to human rights, democracy, and the rule of law. It establishes common standards for member states and non-member signatories to govern AI systems throughout their lifecycle, ensuring alignment with fundamental rights and democratic principles.
The US AI Safety Institute (AISI), housed within NIST, is the primary federal body responsible for AI safety research, standards development, and evaluation of advanced AI systems. AISI plays a central role in implementing the Biden-era Executive Order on AI and coordinating with international counterparts.
This article provides a comprehensive overview of AI Safety Institutes (AISIs) as a novel global governance model, cataloguing existing institutes worldwide and analyzing their core functions: evaluating frontier AI systems, conducting safety research, and facilitating stakeholder information exchange. It examines the historical development from the UK's 2023 Bletchley Park summit through a growing second wave of national institutes, and questions the recent shift in some jurisdictions from 'safety' to 'security' framing.
This CSET publication examines governance frameworks and policy approaches for advanced AI systems at the technological frontier, analyzing challenges in regulating cutting-edge AI capabilities. It explores institutional mechanisms, international coordination, and policy tools needed to manage risks from frontier AI development.
The Centre for the Governance of AI (GovAI) research hub aggregates policy-relevant technical and governance research on frontier AI systems, covering topics from biosecurity and cybercrime to labor market impacts and AI auditing. It serves as a comprehensive repository of GovAI's publications spanning multiple years and research themes. The page indexes papers addressing near-term and long-term risks from advanced AI systems.
The ITU's 2025 AI Governance Report provides a comprehensive overview of global AI governance developments, frameworks, and policy trends from an international telecommunications and ICT standards perspective. It examines how nations and international bodies are approaching AI regulation, safety standards, and coordination challenges. The report serves as a reference document for policymakers and stakeholders navigating the evolving AI governance landscape.