
Coordination Capacity


Direction: Higher is better
Current trend: Fragile (voluntary commitments exist but lack enforcement)
Key measurements: Commitment compliance, information sharing, standard adoption
Related

Risks
  • AI Development Racing Dynamics (ai-transition-model-parameters)
  • International Coordination (ai-transition-model-metrics)
  • Geopolitics

Models
  • Racing Dynamics Impact Model
  • International AI Coordination Game Model

For comprehensive analysis, see International Coordination, which covers:

  • Current coordination status (AISI network, summits, treaties)
  • US-China cooperation prospects
  • Coordination mechanisms effectiveness
  • Historical precedents (Montreal Protocol, nuclear arms control)
  • Scenario analysis and trajectory projections

Coordination Capacity measures the degree to which AI developers, governments, and other stakeholders successfully cooperate on safety standards, information sharing, and development practices. This parameter is closely related to—and largely subsumed by—International Coordination.

Key aspects of coordination capacity include the following; a sketch aggregating them into a single score appears after the list:

  • Voluntary commitments: Seoul and Bletchley declarations (estimated 10-30% effectiveness)
  • Information sharing: Currently 10-20% of safety findings shared
  • Standard adoption: 25-40% market share of compliant systems
  • Enforcement mechanisms: Currently minimal (no binding AI treaties with verification)
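
One way to read these indicators together is as a weighted composite index. The Python sketch below is a toy aggregation, assuming midpoint values for the ranges above and invented weights; it is not an official formula from the transition model.

    # Sketch: composite coordination-capacity index (illustrative only).
    # Indicator values are midpoints of the ranges quoted above; the
    # weights are assumptions, not part of any official model.

    indicators = {
        "voluntary_commitment_effectiveness": 0.20,   # 10-30% effective
        "safety_findings_shared": 0.15,               # 10-20% shared
        "compliant_market_share": 0.325,              # 25-40% adoption
        "enforcement_strength": 0.05,                 # "minimal"
    }

    # Hypothetical weights; enforcement is weighted highest because the
    # page identifies binding mechanisms as the missing ingredient.
    weights = {
        "voluntary_commitment_effectiveness": 0.20,
        "safety_findings_shared": 0.25,
        "compliant_market_share": 0.15,
        "enforcement_strength": 0.40,
    }

    def coordination_capacity(values, weights):
        """Weighted average of indicator scores, each in [0, 1]."""
        return sum(weights[k] * values[k] for k in values)

    print(f"Composite index: {coordination_capacity(indicators, weights):.2f}")
    # ~0.15 under these assumptions, matching the "fragile" trend above.

Under these assumed weights the index lands around 0.15, consistent with the "fragile" characterization of the current trend.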

Coordination and Existential Risk

Low coordination directly increases existential risk through:

  • Racing to dangerous capabilities without collective pause mechanisms
  • Unilateral deployment of inadequately tested systems
  • Regulatory arbitrage undermining safety requirements
  • No global response capability for AI incidents

Research suggests uncoordinated development reduces safety investment by 30-60% compared to coordinated scenarios.
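
The International AI Coordination Game Model linked above formalizes this mechanism; as a stand-in, the Python sketch below shows it in a minimal two-lab game. All payoff numbers are invented for illustration: a lab gains by outpacing its rival on capabilities, while accident risk grows as either lab cuts safety spending. Under these assumptions the only mutual best response has both labs halving safety investment relative to the coordinated outcome, within the 30-60% range quoted above.

    # Sketch: two-lab racing game (all payoff numbers are invented).
    import itertools

    HIGH, LOW = 0.50, 0.25   # fraction of resources devoted to safety

    def payoff(own, rival):
        """Advantage from outpacing the rival, minus the expected cost
        of an accident, which grows as either lab cuts safety."""
        race_advantage = (rival - own) * 10
        accident_cost = (1 - own) * (1 - rival) * 8
        return race_advantage - accident_cost

    # Check every strategy pair for a Nash equilibrium (mutual best response).
    for a, b in itertools.product([HIGH, LOW], repeat=2):
        a_best = all(payoff(a, b) >= payoff(alt, b) for alt in [HIGH, LOW])
        b_best = all(payoff(b, a) >= payoff(alt, a) for alt in [HIGH, LOW])
        if a_best and b_best:
            print(f"Equilibrium: both invest {a:.2f} in safety "
                  f"(a 50% cut from the coordinated {HIGH:.2f})")

Both labs investing heavily in safety yields a better joint outcome, but without an enforcement mechanism each lab's incentive is to defect, which is why the page treats binding coordination as the key missing ingredient.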

Parameter Network


Contributes to: Governance Capacity


Related Pages

Parameters

Coordination (AI Uses)

Policy

Voluntary AI Safety Commitments
International AI Safety Summit Series
Pause / Moratorium

Organizations

US AI Safety Institute
UK AI Safety Institute

Risks

Multipolar Trap (AI Development)

Labs

GovAI

Key Debates

AI Safety Solution Cruxes
AI Governance and Policy

Approaches

Multi-Agent Safety
AI Safety Cases

Transition Model

Pause and Redirect - The Deliberate Path
Lab Behavior

Historical

AI Safety Summit (Bletchley Park)

Analysis

OpenAI Foundation Governance Paradox
Long-Term Benefit Trust (Anthropic)

Concepts

Governance-Focused Worldview

People

Yoshua Bengio
Stuart Russell