AI Safety Multi-Actor Strategic Landscape

Analyzes how AI x-risk depends on which actors develop TAI, finding that the US-China capability gap narrowed from 9.26% to 1.70% (2024-2025) and that open-source models closed to within 1.70% of the closed frontier. Estimates that actor identity determines 40-60% of total risk variance across four pathways (singleton 8%, conflict 6%, lock-in 5%, misuse 7%), for ≈25% combined x-risk.


Core thesis: Risk is primarily determined by which actors develop TAI and their incentive structures. The strategic landscape of competition and cooperation shapes outcomes.


Overview

This model analyzes how AI existential risk depends on which actors—US frontier labs, Chinese developers, open-source communities, or malicious actors—develop transformative AI first, and under what competitive conditions. The core insight is that actor identity and incentive structures may matter as much as technical alignment progress in determining outcomes.

The strategic landscape shifted dramatically in 2024-2025. According to Recorded Future analysis, the gap in overall model performance between the best US and Chinese models narrowed from 9.26% in January 2024 to just 1.70% by February 2025. This shift was catalyzed by DeepSeek's R1 release in January 2025, which matched OpenAI's o1 performance at a reported training cost of just $1.6 million, a fraction of typical US frontier costs. Similarly, open-source models closed to within 1.70% of frontier closed models on Chatbot Arena, fundamentally changing proliferation dynamics.

Despite narrowing capability gaps, structural asymmetries persist. US private AI investment topped $109 billion in 2024—over 12 times China's figure. The US maintains roughly 4,049 data centers versus China's 379, representing the largest single advantage the US enjoys. Yet China leads in deployment: it installed approximately 295,000 industrial robots in 2024 alone—more than the rest of the world combined—and accounts for 69.7% of all AI patents.

Capability Gap Estimates (2024-2025)

The following table synthesizes publicly available data on relative AI capabilities across actor categories. Estimates draw from benchmark performance, investment levels, and expert assessments.

| Actor Category | Capability vs Frontier | Trend | Key Evidence | Source |
|---|---|---|---|---|
| US Frontier Labs | 100% (reference) | Stable | GPT-4.5, Claude 3.5, Gemini 2.0 define frontier | Industry consensus |
| Chinese Labs (aggregate) | 98.3% | Rapidly closing | Gap narrowed from 9.26% to 1.70% (Jan 2024 to Feb 2025) | Recorded Future |
| DeepSeek specifically | ≈100% on benchmarks | Matched frontier | R1 matched o1 at $1.6M training cost; gold medal at IMO 2025 | CSIS |
| Open-Source (Llama, Qwen) | 98.3% | Rapidly closing | Gap narrowed from 8.04% to 1.70% on Chatbot Arena | State of Open-Source AI |
| Malicious Actor Access | ≈40-60% | Increasing | Access via open-source, jailbreaks, or theft | Expert estimate |
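
If the narrowing is modeled as a smooth exponential decay, an assumption of convenience rather than anything the sources claim, the two Recorded Future data points imply the gap shrank by roughly 12% per month:

```python
# Implied monthly decay of the US-China capability gap, assuming
# exponential decay between the two published data points.
gap_jan_2024 = 9.26   # percent, Recorded Future
gap_feb_2025 = 1.70   # percent, Recorded Future
months = 13

monthly_factor = (gap_feb_2025 / gap_jan_2024) ** (1 / months)
print(f"monthly decay factor: {monthly_factor:.3f}")  # ≈ 0.878, i.e. ~12% relative reduction/month
```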

Investment and Infrastructure Asymmetries

| Dimension | United States | China | Ratio | Implications |
|---|---|---|---|---|
| Private AI Investment (2024) | $109 billion | ≈$9 billion | 12:1 | US leads funding despite capability parity |
| Data Centers | 4,049 | 379 | 11:1 | Largest structural US advantage |
| New Data Center Capacity (2024) | 5.8 GW | Lower | n/a | Continued infrastructure expansion |
| Industrial Robot Installations (2024) | 34,000 | 295,000 | 1:9 | China leads deployment/application |
| AI Patents (2023) | 13% of global | 69.7% of global | 1:5 | China dominates IP filings |
| AI Research Citations (2023) | 13% of global | 22.6% of global | 1:2 | China leads academic output |

Sources: CFR, RAND, Stanford HAI

Key Dynamics

The following diagram illustrates how actor competition dynamics flow through to risk outcomes:

```mermaid
flowchart TD
  subgraph Actors["Actor Competition"]
      USL[US Frontier Labs]
      CNL[Chinese Labs]
      OSS[Open-Source]
      MAL[Malicious Actors]
  end

  subgraph Dynamics["Competitive Pressures"]
      RACE[Racing Dynamics]
      DIFF[Capability Diffusion]
      SAFE[Safety Investment]
  end

  subgraph Outcomes["Risk Pathways"]
      SING[Unaligned Singleton]
      MULT[Multi-Agent Conflict]
      AUTH[Authoritarian Lock-in]
      MISUSE[Catastrophic Misuse]
  end

  USL -->|competes with| CNL
  CNL -->|closes gap| RACE
  USL -->|pressure to lead| RACE

  OSS -->|enables access| DIFF
  DIFF -->|reaches| MAL

  RACE -->|reduces| SAFE
  SAFE -->|insufficient| SING

  CNL -->|if wins| AUTH
  MAL -->|enables| MISUSE
  RACE -->|increases| MULT

  style USL fill:#e6f3ff
  style CNL fill:#ffe6e6
  style OSS fill:#e6ffe6
  style MAL fill:#ffcccc
  style SING fill:#ffdddd
  style MULT fill:#ffdddd
  style AUTH fill:#ffdddd
  style MISUSE fill:#ffdddd
```

The key mechanisms are:

  1. Competition intensity → Safety shortcuts → Misalignment risk: As US-China competition intensifies (currently 0.75 on a normalized scale), labs face pressure to accelerate timelines, potentially cutting safety corners (see the sketch after this list).

  2. Capability diffusion → Malicious access → Misuse risk: Open-source releases (now within 1.70% of frontier) enable rapid proliferation to actors who may lack safety constraints or beneficial intent.

  3. First-mover advantage → Winner-take-all → Reduced caution: If a decisive strategic advantage exists for the first mover, actors rationally accept higher alignment risk to capture it.

  4. Democratic oversight → Deployment delays → Capability gaps: Strong oversight in democratic nations may create windows where authoritarian actors gain advantages, creating perverse incentives against regulation.

  5. Transparency → Better coordination → Reduced racing: Conversely, capability transparency and safety research sharing (currently ~0.6 openness) can reduce competitive pressure.
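
A minimal sketch of mechanism 1, treating safety investment as a share of effort that racing pressure crowds out and transparency partially restores. The functional form and all coefficients are illustrative assumptions; the page supplies only the competition intensity (≈0.75) and openness (≈0.6) readings.

```python
def safety_investment(competition: float, transparency: float, base: float = 0.30) -> float:
    """Illustrative safety share of total effort.

    Racing pressure (competition) crowds safety work out; transparency
    (shared safety research, capability disclosure) partially restores it.
    The coefficients 0.6 and 0.5 are assumptions, not model outputs.
    """
    return base * (1.0 - 0.6 * competition) * (1.0 + 0.5 * transparency)

# Current readings from the mechanisms above (competition 0.75, openness 0.6)
print(f"{safety_investment(0.75, 0.6):.2f}")  # ≈ 0.21 of effort
# A de-escalated world, for comparison
print(f"{safety_investment(0.30, 0.6):.2f}")  # ≈ 0.32 of effort
```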

Risk Pathways

| Pathway | Description | Estimate |
|---|---|---|
| Unaligned Singleton | One misaligned AI gains decisive advantage | 8% |
| Multi-Agent Conflict | Multiple powerful AI systems in conflict | 6% |
| Authoritarian Lock-in | AI enables permanent authoritarian control | 5% |
| Catastrophic Misuse | Intentional misuse causes catastrophe | 7% |
| Combined X-Risk | Total from all pathways | ≈25% |
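
The pathway estimates sum to 26%, slightly above the stated ≈25% combined figure. One way to reconcile this, assuming purely for illustration (the model does not assert this) that the four pathways are roughly independent, is to combine them as complements:

```python
# Combined probability of at least one pathway occurring, under an
# independence assumption made only for this illustration.
pathways = {
    "unaligned_singleton": 0.08,
    "multi_agent_conflict": 0.06,
    "authoritarian_lock_in": 0.05,
    "catastrophic_misuse": 0.07,
}

naive_sum = sum(pathways.values())  # 0.26
combined = 1.0
for p in pathways.values():
    combined *= 1.0 - p
combined = 1.0 - combined  # ≈ 0.236

print(f"naive sum: {naive_sum:.1%}, independent combination: {combined:.1%}")
```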

Actor Categories

| Category | Key Actors |
|---|---|
| Leading US | OpenAI, Anthropic, Google DeepMind, Meta |
| Leading China | DeepSeek, Baidu, Alibaba, ByteDance |
| Open-Source | Meta (Llama), Mistral, Hugging Face ecosystem |
| Malicious | Cybercriminals, terrorists, rogue states |
| Governments | US (NSA, DARPA), China (PLA, MSS), EU |

Full Variable List

This diagram simplifies the full model. The complete Multi-Actor Strategic Landscape includes:

Actor Capabilities (15 variables): Leading US lab, leading Chinese lab, US government AI, Chinese government AI, open-source ecosystem, second-tier corporate labs, academic research, cybercriminal AI, terrorist access, authoritarian regime AI, democratic allies AI, corporate espionage, state IP theft, insider threat, supply chain security.

Actor Incentives (12 variables): US-China competition, profit pressure, academic openness, classification levels, democratic accountability, authoritarian control, geopolitical crises, economic desperation, military doctrine, regulatory arbitrage, talent mobility, public-private partnerships.

Information & Transparency (7 variables): Capability disclosure, safety sharing, incident reporting, capability intelligence, dual-use publication norms, evaluation standards, third-party verification.

Alignment & Control (8 variables): US actor alignment, China actor alignment, Constitutional AI effectiveness, human oversight scalability, kill switch reliability, containment protocols, red-teaming, post-deployment monitoring.

Strategic Outcomes (8 variables): First-mover advantage, winner-take-all dynamics, diffusion speed, multipolar vs bipolar, offense-defense balance, escalation control, governance lock-in, misuse probability.

Existential Risk Paths (5 variables): Unaligned singleton, multi-agent conflict, authoritarian lock-in, economic/social collapse, combined risk.
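
For readers auditing coverage, a hypothetical encoding of the groups above as a simple mapping (labels abbreviated by us; counts from the list) confirms the full model spans 55 variables:

```python
# Variable-group counts as listed above; key names are our abbreviations.
variable_groups = {
    "actor_capabilities": 15,
    "actor_incentives": 12,
    "information_and_transparency": 7,
    "alignment_and_control": 8,
    "strategic_outcomes": 8,
    "existential_risk_paths": 5,
}

print(sum(variable_groups.values()))  # 55
```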

Strategic Importance

Magnitude Assessment

The multi-actor landscape determines whether AI development is coordinated or conflictual. Actor heterogeneity creates both risks (racing, proliferation) and opportunities (diverse approaches).

| Dimension | Assessment | Quantitative Estimate |
|---|---|---|
| Potential severity | High - multipolar dynamics drive racing and proliferation | Actor landscape contributes 40-60% of total risk variance |
| Probability-weighted importance | High - currently in competitive multipolar phase | 75% probability of continued multipolar competition through 2030 |
| Comparative ranking | Essential context for governance and coordination strategies | #2 priority behind technical alignment |
| Malleability | Medium - actor incentives partially shiftable | 20-30% of racing dynamics addressable via policy |

Actor Safety Assessment

| Actor Category | Safety Investment | Safety Culture | Transparency | Overall Safety Grade |
|---|---|---|---|---|
| Anthropic | ≈30% of budget | Strong | High | A- |
| OpenAI | ≈15% of budget | Declining | Medium | B- |
| Google DeepMind | ≈20% of budget | Strong | Medium | B+ |
| Meta AI | ≈10% of budget | Moderate | High (open-source) | B- |
| Chinese Labs | ≈5% of budget | Unknown | Low | C- (estimated) |
| Open-Source Ecosystem | Minimal | Variable | Very high | C |

Diffusion Timeline Estimates

| Capability Level | US Labs | Chinese Labs | Open-Source | Malicious Actors |
|---|---|---|---|---|
| GPT-4 class | 2023 | 2024-2025 | 2024-2025 | 2025-2026 |
| GPT-5 class (projected) | 2025 | 2026-2027 | 2027-2028 | 2028-2030 |
| Autonomous agents (dangerous) | 2025-2026 | 2026-2027 | 2027-2028 | 2028-2029 |

Key Finding: The open-source lag has collapsed. As of late 2025, the center of gravity for open-weight models has shifted toward China, with DeepSeek and Qwen becoming household names. US firms released fewer open-weight models, citing commercial and safety constraints, while Chinese labs treated open-weight leadership as a deliberate catch-up strategy. Meta, long a champion of frontier open models, has delayed the release of Llama Behemoth and suggested it may keep future "superintelligence" models behind paywalls.

First-Mover Advantage: Evidence Assessment

The model's risk estimates depend critically on the magnitude of first-mover advantage. Strong first-mover advantages create racing incentives; weak ones reduce them. Current evidence suggests first-mover advantages are significant but not overwhelming:

| Evidence Type | Finding | Implication for FMA |
|---|---|---|
| Historical analysis | First movers have 47% failure rate; only 11% become market leaders (Golder & Tellis) | Weak FMA |
| AI competitive landscape | 2,011 companies in 2024 ML/AI landscape, 578 new entrants since 2023 | Weak FMA |
| Model replication | 11 different developers globally achieved GPT-4-level models in 2024 | Weak FMA |
| Cloud market | AWS and Azure trading leadership position; "more than one winner" possible | Moderate FMA |
| Network effects | AI systems less network-effect-driven than social platforms | Weak FMA |
| TAI-specific dynamics | Decisive strategic advantage at TAI level remains uncertain | Unknown |

Key insight: Evidence from the Abundance Institute suggests "no signs of winner-take-all dynamics" in the current AI ecosystem. However, TAI (transformative AI) may differ qualitatively if it enables rapid capability improvements or strategic advantages not available to followers. The model's 0.7 first-mover advantage estimate may be too high based on current evidence, but TAI-level dynamics remain highly uncertain.
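
A toy expected-payoff comparison makes the racing logic concrete. Every parameter here is an illustrative assumption, not a value from the model: an actor races when the first-mover advantage (FMA) captured via a higher win probability outweighs the extra alignment-risk cost of cutting corners.

```python
def race_is_rational(fma: float, p_win_race: float,
                     p_win_cautious: float, extra_risk_cost: float) -> bool:
    """Return True if racing beats cautious development in expected payoff.

    All inputs are hypothetical; fma is the fraction of total value
    captured by the first mover.
    """
    return fma * p_win_race - extra_risk_cost > fma * p_win_cautious

# At the model's 0.7 FMA estimate, a modest win-probability boost justifies the risk:
print(race_is_rational(fma=0.7, p_win_race=0.6, p_win_cautious=0.4, extra_risk_cost=0.1))  # True
# At a weaker FMA, more consistent with the evidence table above, it does not:
print(race_is_rational(fma=0.3, p_win_race=0.6, p_win_cautious=0.4, extra_risk_cost=0.1))  # False
```

This is the sense in which the 0.7 estimate matters: lowering it flips the racing calculus across a wide range of the other parameters.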

Resource Implications

Understanding actor landscape enables:

  • Targeted engagement with highest-leverage actors: Focus on top 3-4 US labs could cover 70% of frontier capability
  • Coalition-building for safety standards: Anthropic-OpenAI-DeepMind coalition would set de facto standards
  • Monitoring of capability diffusion: $50-100M/year for comprehensive capability intelligence
  • Anticipation of strategic behavior and reactions: Game-theoretic modeling investment ≈$10-20M/year

Recommended investment: $100-200M/year in actor-focused governance work (vs. ≈$20-30M current).

Key Cruxes

| Crux | If True | If False | Current Probability |
|---|---|---|---|
| Leading coalition is stable | Top 3 can set norms | Racing to bottom | 45% |
| Safety can be coordination point | Voluntary standards viable | Regulation required | 35% |
| China is engageable on safety | Global coordination possible | Bifurcated governance | 30% |
| Diffusion to malicious actors is slow | Window for governance | Proliferation dominates | 50% |
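
These probabilities compound. As a rough illustration only, treating the four cruxes as independent (which the page does not claim), the chance that all four resolve favorably is small:

```python
# Probability that every optimistic branch holds, under an
# independence assumption made purely for illustration.
crux_probs = {
    "stable_leading_coalition": 0.45,
    "safety_as_coordination_point": 0.35,
    "china_engageable": 0.30,
    "slow_malicious_diffusion": 0.50,
}

p_all = 1.0
for p in crux_probs.values():
    p_all *= p
print(f"all four favorable: {p_all:.1%}")  # ≈ 2.4%
```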

Multipolar vs Unipolar Governance Considerations

A crucial variable in this model is whether AI development converges toward unipolar (single dominant actor or coalition) or multipolar (distributed power among multiple actors) outcomes. Each presents distinct risk profiles:

| Governance Structure | Key Risks | Key Advantages |
|---|---|---|
| Unipolar (single dominant actor) | Value lock-in, institutional stagnation, internal corruption, single points of failure | Coordination easier, racing reduced, unified safety standards |
| Multipolar (distributed power) | Unchecked proliferation, system instability, coordination failures, racing dynamics | Diversity of approaches, no single point of failure, competitive pressure for safety |

Current research from AI Impacts identifies key research questions: What "considerations might tip us between multipolar and unipolar scenarios"? What "risks [are] distinctive to a multipolar scenario"? The CO/AI analysis notes that while current AI safety discussions often default to unipolar frameworks, "exploring decentralized governance structures could address key risks like value lock-in and institutional stagnation."

Current assessment: The model estimates 55% probability of continued multipolar development, with the US-China bifurcation appearing increasingly stable. Geopolitical tensions, divergent regulatory approaches, and the collapse of the open-source lag all point toward a world with multiple competing AI powers rather than a single dominant actor.

Limitations

  1. Capability estimates rapidly outdating: The 2024-2025 data showing near-parity may not persist; breakthrough capabilities could restore gaps.

  2. Safety investment data opaque: Lab safety budgets are not publicly disclosed; estimates are inferential.

  3. TAI dynamics uncertain: Current competitive patterns may not predict TAI-level dynamics where decisive advantages could differ fundamentally.

  4. Geopolitical volatility: US-China relations, export control effectiveness, and regulatory trajectories are highly uncertain.

  5. Malicious actor access hard to estimate: Underground markets and state-sponsored theft create significant uncertainty in capability diffusion.

Sources

  • Recorded Future: US-China AI Gap Analysis (2025)
  • RAND: China's AI Models Closing the Gap (2025)
  • Council on Foreign Relations: China, the United States, and the AI Race
  • Boston University: DeepSeek and AI Frontier (2025)
  • State of Open-Source AI 2025
  • CSIS: DeepSeek, Huawei, and US-China AI Race
  • Abundance Institute: AI Competitive Landscape
  • AI Impacts: Multipolar Research Projects
  • Frontier Model Forum: Progress Update 2024

References

Stanford HAI · AI Index Report (US-China comparison, 2024)
This resource summarizes findings from the Stanford Human-Centered AI (HAI) Institute's AI Index Report comparing the United States and China across key AI metrics in 2024. It covers competitiveness in research output, talent, investment, and technological capabilities between the two leading AI powers. The analysis highlights areas where each country leads and the implications for global AI development.

First-mover advantage · business strategy guide
A business strategy guide explaining first-mover advantage, distinguishing between being 'first-to-market' (technical achievement) and being a true 'first mover' (establishing a market category and shaping expectations). Uses examples like Amazon and Uber to illustrate how transformative market positioning differs from simply releasing a product first.

RAND Corporation · 2025 ("largest single advantage")
RAND analyst Lennart Heim argues that while China is rapidly closing the gap on AI model capabilities, the U.S. retains a decisive advantage in total compute capacity and advanced chip infrastructure. The commentary highlights critical failures in U.S. export control enforcement—including TSMC producing chips for Huawei via a proxy—and warns that U.S. policymakers risk squandering the compute advantage by fixating on benchmark comparisons rather than strategic infrastructure leverage.

State of Open-Source AI 2025
A 2025 year-end analysis of open-model AI trends showing China surpassing the US in Hugging Face downloads for the first time, with Chinese models like Qwen and DeepSeek gaining significant ground. The piece examines shifts in open-weight vs. open-source dynamics, the rise of small language models, and geopolitical implications for AI governance and export controls.

CNBC · Meta's proprietary 'Avocado' frontier model
CNBC reports that Meta is pursuing a new proprietary frontier AI model codenamed 'Avocado,' marking a significant strategic shift away from its open-source Llama models. The company spent $14.3 billion acquiring Scale AI's founder and top researchers to compete with OpenAI and Google, but the rapid pivot has created internal culture clashes and confusion. The delayed release of Llama Behemoth and the possible move away from open-source signals a fundamental rethinking of Meta's AI positioning.
Frontier Model Forum · Progress Update 2024
The Frontier Model Forum (FMF), an industry consortium of leading AI labs, provides a 2024 progress update on its AI safety initiatives, including workstreams addressing biosecurity, cybersecurity, model security, safety evaluations, and an AI Safety Fund. The update details early best practices development, expert workshops, and participation in international AI safety governance events like the AI Seoul Summit.

CO/AI · Multipolar vs. unipolar AI governance (LessWrong summary)
This analysis summarizes a LessWrong post arguing that multipolar AI governance frameworks—featuring decentralized control among diverse AI agents and human actors—offer a compelling alternative to unipolar (centralized) AI control models. It outlines both the risks of decentralized approaches (instability, coordination failures) and the risks of centralized ones (value lock-in, corruption), while proposing pathways like modular AI services and cooperative AI research.

Abundance Institute · AI Competitive Landscape
An analysis of the competitive dynamics in the AI industry, examining the landscape of companies, capabilities, and market forces shaping AI development. The piece likely covers key players, investment trends, and implications for the pace and direction of AI progress.

CSIS · DeepSeek, Huawei, and the US-China AI Race
This CSIS analysis examines how DeepSeek's emergence and Huawei's chip development challenge the effectiveness of U.S. export controls on advanced semiconductors. It assesses whether restricting China's access to cutting-edge chips can sustainably constrain Chinese AI capabilities, and considers implications for the broader U.S.-China AI competition.

Recorded Future · US-China AI Gap Analysis (2025)
Recorded Future's intelligence analysis concludes that China is unlikely to sustainably surpass the US in AI capabilities by 2030, examining competitive dynamics across government funding, talent pipelines, technology development, and semiconductor supply chains. The report provides a structured comparison of the two nations' AI ecosystems and identifies key chokepoints in China's development trajectory.

Boston University · DeepSeek and the AI Frontier (2025)
This Boston University article examines DeepSeek, a Chinese AI model that reportedly matched OpenAI's o1 performance at a fraction of the cost, analyzing its implications for AI competition between the US and China. It explores how DeepSeek achieved high performance with limited compute resources, potentially undermining US export controls on advanced chips. The piece discusses broader geopolitical and safety implications of this development.

Council on Foreign Relations ("$109 billion in 2024")
A Council on Foreign Relations analysis examining the competitive dynamics between China and the United States in artificial intelligence development, with the $109 billion figure referring to U.S. AI investment in 2024. The piece explores how the two superpowers are competing across AI capabilities, infrastructure, and policy.

AI Impacts · Multipolar Research Projects
This AI Impacts page outlines proposed research projects focused on multipolar AI scenarios, where multiple powerful AI systems or actors compete rather than a single dominant system emerging. It identifies open questions and research directions relevant to understanding coordination failures, competitive dynamics, and safety implications of multipolar futures.

Related Wiki Pages

Top Related Pages

Risks

Compute Concentration · Multipolar Trap (AI Development)

Approaches

MAIM (Mutually Assured AI Malfunction)

Analysis

Authoritarian Tools Diffusion Model · Intervention Timing Windows · International AI Coordination Game Model · AI Risk Feedback Loop & Cascade Model · AI Capability Proliferation Model

Concepts

International Coordination Mechanisms

Policy

China AI Regulatory Framework

Other

Stella Biderman · Arthur Mensch