
Ultimate Scenarios

Ultimate Scenarios are the intermediate pathways that connect root factors to ultimate outcomes. They describe how parameter changes lead to catastrophe (or success): the specific mechanisms that determine what kind of future we get.

The AI Transition Model uses three main ultimate scenarios:

  1. AI Takeover — AI gains decisive control
  2. Human-Caused Catastrophe — Humans use AI for mass harm
  3. Long-term Lock-in — Permanent entrenchment of values/power

Each ultimate scenario has sub-variants that describe more specific pathways (e.g., "rapid" vs "gradual" AI takeover, "state" vs "rogue actor" catastrophe).
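
The layered structure can be captured as a small data model. Below is a minimal sketch, assuming hypothetical Python names (the wiki does not prescribe any code representation); the scenario names, mechanisms, and sub-variants come straight from the list above.

```python
from dataclasses import dataclass, field

@dataclass
class UltimateScenario:
    """An intermediate pathway from root factors to ultimate outcomes."""
    name: str
    mechanism: str                   # how parameter changes lead here
    sub_variants: list[str] = field(default_factory=list)

# The three main ultimate scenarios and their sub-variants, as listed above.
SCENARIOS = [
    UltimateScenario("AI Takeover", "AI gains decisive control",
                     sub_variants=["rapid", "gradual"]),
    UltimateScenario("Human-Caused Catastrophe", "Humans use AI for mass harm",
                     sub_variants=["state", "rogue actor"]),
    UltimateScenario("Long-term Lock-in",
                     "Permanent entrenchment of values/power"),
]
```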


The Three-Layer Model

Color coding:

  • Red: Ultimate negative outcome (existential catastrophe)
  • Green: Ultimate trajectory measure (could be good or bad)
  • Pink: Negative ultimate scenarios (catastrophes)
  • Orange: Symmetric ultimate scenario (could entrench good or bad values)

Ultimate Scenarios Summary

Scenario                 | Mechanism                               | Sub-variants
AI Takeover              | AI gains decisive control               | Rapid vs. gradual
Human-Caused Catastrophe | Humans use AI for mass harm             | State vs. rogue actor
Long-term Lock-in        | Permanent entrenchment of values/power  | Symmetric: could entrench good or bad values

How Ultimate Scenarios Differ from Other Concepts

Concept            | What It Is                                            | Example
Root Factors       | Aggregate variables that shape scenarios              | "Misalignment Potential"
Parameters         | Specific measurable factors                           | "Alignment Robustness"
Risks              | Things that could go wrong                            | "Deceptive Alignment"
Ultimate Scenarios | Intermediate pathways connecting factors to outcomes  | "AI Takeover"
Ultimate Outcomes  | High-level goals we care about                        | "Existential Catastrophe", "Long-term Trajectory"

Key distinction: A risk like "deceptive alignment" is a specific mechanism that could occur. An ultimate scenario like "AI Takeover" is the larger pathway that unfolds if such mechanisms play out. Multiple risks can contribute to a single ultimate scenario.
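
One way to make that many-to-one relationship concrete is a mapping from risks to the scenario they feed. A minimal sketch, assuming hypothetical risk names beyond "Deceptive Alignment" (the only example the table gives):

```python
# Several distinct risks (mechanisms) can feed the same ultimate scenario
# (pathway); the risks other than "Deceptive Alignment" are assumed
# examples for illustration.
RISK_TO_SCENARIO = {
    "Deceptive Alignment": "AI Takeover",
    "Reward Hacking":      "AI Takeover",
    "Engineered Pandemic": "Human-Caused Catastrophe",
    "Value Entrenchment":  "Long-term Lock-in",
}

def risks_for(scenario: str) -> list[str]:
    """List the mechanisms that could produce a given pathway."""
    return [r for r, s in RISK_TO_SCENARIO.items() if s == scenario]

print(risks_for("AI Takeover"))  # ['Deceptive Alignment', 'Reward Hacking']
```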


Why This Layer Matters

1. Clarifies Causal Chains

Without this layer, the connection between "Misalignment Potential increasing" and "Existential Catastrophe increasing" is abstract. Ultimate scenarios show the specific pathway: alignment fails → AI develops misaligned goals → AI takes over → catastrophe.
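
The same chain can be written down explicitly, one stage per step. A sketch using the pathway from the paragraph above, with assumed layer labels:

```python
# The takeover pathway from the text; the layer labels are assumed
# for illustration.
TAKEOVER_PATHWAY = [
    ("root factor",       "Misalignment Potential increases"),
    ("mechanism",         "alignment fails"),
    ("mechanism",         "AI develops misaligned goals"),
    ("ultimate scenario", "AI takes over"),
    ("ultimate outcome",  "existential catastrophe"),
]

for layer, stage in TAKEOVER_PATHWAY:
    print(f"{layer:>17}: {stage}")
```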

2. Enables Different Intervention Strategies

Different ultimate scenarios require different interventions, which can be treated as a simple lookup (sketched after this list):

  • AI Takeover: Technical alignment, capability restrictions
  • Human-Caused Catastrophe: International coordination, misuse prevention
  • Long-term Lock-in: Power distribution, institutional design
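
A minimal sketch of that lookup, using the pairings from the list above (the intervention names are taken verbatim; the structure itself is hypothetical):

```python
# Scenario-to-intervention lookup, pairing each ultimate scenario with
# the intervention types listed above.
INTERVENTIONS = {
    "AI Takeover":              ["technical alignment", "capability restrictions"],
    "Human-Caused Catastrophe": ["international coordination", "misuse prevention"],
    "Long-term Lock-in":        ["power distribution", "institutional design"],
}

def plan_for(scenarios: list[str]) -> set[str]:
    """Union of interventions needed when several scenarios are in play."""
    return {i for s in scenarios for i in INTERVENTIONS.get(s, [])}

print(sorted(plan_for(["AI Takeover", "Long-term Lock-in"])))
```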

3. Supports Scenario Planning

Ultimate scenarios map directly onto concrete futures that organizations can plan for. Rather than asking "what if the probability of Existential Catastrophe increases?", planners can ask "what if we're heading toward a Human-Caused Catastrophe?"

4. Connects to Existing Threat Models

Each ultimate scenario corresponds to threat models discussed in the AI safety literature:

  • Carlsmith's six-premise argument → AI Takeover scenarios
  • Christiano's "What Failure Looks Like" → Gradual AI Takeover
  • Ord's "The Precipice" risk categories → Multiple ultimate scenarios
  • Kasirzadeh's decisive vs. accumulative → Rapid vs. Gradual takeover

Using This Section

For Analysts

  • Map specific risks to the ultimate scenarios they could produce
  • Estimate which ultimate scenarios are most likely given current parameter trends (one toy scoring approach is sketched after this list)
  • Identify which parameters to prioritize based on which ultimate scenarios concern you most
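
One toy way to operationalize that estimate: weight each scenario by a handful of parameter trends and sum. Everything here (the weights, the trend readings, and all parameter names except "Alignment Robustness", which appears in the table above) is a hypothetical illustration, not part of the model:

```python
# Toy scoring only: assumed weights over mostly assumed parameters.
# A trend of +1.0 means the parameter is rising, -1.0 falling.
WEIGHTS = {
    "AI Takeover":              {"Alignment Robustness": -1.0, "Capability Growth": 0.8},
    "Human-Caused Catastrophe": {"Misuse Safeguards": -1.0, "Geopolitical Tension": 0.9},
    "Long-term Lock-in":        {"Power Concentration": 1.0},
}

def score(trends: dict[str, float]) -> dict[str, float]:
    """Higher score = current trends point more strongly at that scenario."""
    return {
        scenario: sum(w * trends.get(param, 0.0) for param, w in ws.items())
        for scenario, ws in WEIGHTS.items()
    }

# Hypothetical readings: alignment robustness falling, capabilities rising.
print(score({"Alignment Robustness": -1.0, "Capability Growth": 1.0}))
```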

For Policymakers

  • Design interventions targeted at preventing specific ultimate scenarios
  • Coordinate across domains (a single ultimate scenario may require multiple types of intervention)
  • Track early warning signs for each ultimate scenario

For Researchers

  • Use ultimate scenarios to frame research priorities
  • Connect technical work to concrete scenarios it addresses
  • Identify gaps in our understanding of specific pathways

Related Sections

  • Root Factors — The parameter groupings that feed into ultimate scenarios
  • Ultimate Outcomes — The high-level goals ultimate scenarios affect
  • Interactive Model — Full interactive visualization
  • Models — Analytical frameworks for understanding pathways
