Ultimate Scenarios
Ultimate Scenarios are the intermediate pathways that connect root factors to ultimate outcomes. They describe how parameter changes lead to catastrophe or success: the specific mechanisms that determine what kind of future we get.
The AI Transition Model uses three main ultimate scenarios:
- AI Takeover — AI gains decisive control
- Human-Caused Catastrophe — Humans use AI for mass harm
- Long-term Lock-in — Permanent entrenchment of values/power
Each ultimate scenario has sub-variants that describe more specific pathways (e.g., "rapid" vs "gradual" AI takeover, "state" vs "rogue actor" catastrophe).
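The layered structure can be made concrete with a small sketch. The Python below is illustrative only and not the model's actual schema or API: it lists the root factors and ultimate outcomes from the linked sections and encodes each ultimate scenario with the sub-variants named above (sub-variants for Long-term Lock-in are not enumerated on this page, so they are omitted).

```python
# Illustrative sketch of the three-layer structure; names mirror this page,
# not an official schema of the AI Transition Model.
from dataclasses import dataclass, field

ROOT_FACTORS = [  # from the Root Factors section
    "Misalignment Potential", "AI Capabilities", "AI Uses", "AI Ownership",
    "Civilizational Competence", "Transition Turbulence", "Misuse Potential",
]

ULTIMATE_OUTCOMES = ["Existential Catastrophe", "Long-term Trajectory"]

@dataclass
class UltimateScenario:
    name: str
    description: str
    sub_variants: list[str] = field(default_factory=list)

ULTIMATE_SCENARIOS = [
    UltimateScenario("AI Takeover", "AI gains decisive control",
                     ["Rapid AI Takeover", "Gradual AI Takeover"]),
    UltimateScenario("Human-Caused Catastrophe", "Humans use AI for mass harm",
                     ["State Actor", "Rogue Actor"]),
    # Sub-variants for Lock-in are not enumerated on this page.
    UltimateScenario("Long-term Lock-in", "Permanent entrenchment of values/power"),
]
```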
The Three-Layer Model
Color coding:
- Red: Ultimate negative outcome (existential catastrophe)
- Green: Ultimate trajectory measure (could be good or bad)
- Pink: Negative ultimate scenarios (catastrophes)
- Orange: Symmetric ultimate scenario (could entrench good or bad values)
Ultimate Scenarios Summary
How Ultimate Scenarios Differ from Other Concepts
| Concept | What It Is | Example |
|---|---|---|
| Root Factors | Aggregate variables that shape scenarios | "Misalignment Potential" |
| Parameters | Specific measurable factors | "Alignment Robustness" |
| Risks | Things that could go wrong | "Deceptive Alignment" |
| Ultimate Scenarios | Intermediate pathways connecting factors to outcomes | "AI Takeover" |
| Ultimate Outcomes | High-level goals we care about | "Existential Catastrophe", "Long-term Trajectory" |
Key distinction: A risk like "deceptive alignment" is a mechanism that could happen. An ultimate scenario like "AI Takeover" is the outcome that results if such mechanisms play out. Multiple risks can contribute to a single ultimate scenario.
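As a rough illustration of that many-to-one relationship, a mapping from risks to the scenarios they feed might look like the sketch below. Only "Deceptive Alignment" comes from the table above; the other risk names are hypothetical placeholders, not entries from the model.

```python
# Hypothetical mapping from risks (mechanisms) to the ultimate scenario they
# could produce; several mechanisms can feed the same scenario.
RISK_TO_SCENARIO = {
    "Deceptive Alignment": "AI Takeover",                         # from the table above
    "Reward Hacking": "AI Takeover",                              # hypothetical placeholder
    "AI-Assisted Bioweapon Design": "Human-Caused Catastrophe",   # hypothetical placeholder
    "Surveillance-Enabled Value Lock-in": "Long-term Lock-in",    # hypothetical placeholder
}

def risks_feeding(scenario: str) -> list[str]:
    """List the mechanisms that could produce a given ultimate scenario."""
    return [risk for risk, s in RISK_TO_SCENARIO.items() if s == scenario]

print(risks_feeding("AI Takeover"))  # ['Deceptive Alignment', 'Reward Hacking']
```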
Why This Layer Matters
1. Clarifies Causal Chains
Without this layer, the connection between "Misalignment Potential increasing" and "Existential Catastrophe increasing" is abstract. Ultimate scenarios show the specific pathway: alignment fails → AI develops misaligned goals → AI takes over → catastrophe.
2. Enables Different Intervention Strategies
Different ultimate scenarios require different interventions (a minimal lookup is sketched after this list):
- AI Takeover: Technical alignment, capability restrictions
- Human-Caused Catastrophe: International coordination, misuse prevention
- Long-term Lock-in: Power distribution, institutional design
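A minimal sketch of that lookup, assuming only the scenario-to-intervention pairs listed above (the function name and data layout are illustrative, not part of the model):

```python
# Intervention families per ultimate scenario, as listed above (illustrative).
INTERVENTIONS = {
    "AI Takeover": {"technical alignment", "capability restrictions"},
    "Human-Caused Catastrophe": {"international coordination", "misuse prevention"},
    "Long-term Lock-in": {"power distribution", "institutional design"},
}

def intervention_portfolio(scenarios_of_concern: list[str]) -> set[str]:
    """Union of intervention families needed for the scenarios of concern."""
    portfolio: set[str] = set()
    for scenario in scenarios_of_concern:
        portfolio |= INTERVENTIONS.get(scenario, set())
    return portfolio

# A planner worried about both takeover and lock-in needs four distinct families:
print(intervention_portfolio(["AI Takeover", "Long-term Lock-in"]))
```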
3. Supports Scenario Planning
Ultimate scenarios map directly onto scenarios that organizations can plan for. Rather than asking "what if Existential Catastrophe increases?", planners can ask "what if we're heading toward a Human-Caused Catastrophe?"
4. Connects to Existing Threat Models
Each ultimate scenario corresponds to threat models discussed in the AI safety literature:
- Carlsmith's six-premise argument → AI Takeover scenarios
- Christiano's "What Failure Looks Like" → Gradual AI Takeover
- Ord's "The Precipice" risk categories → Multiple ultimate scenarios
- Kasirzadeh's decisive vs. accumulative risk framing → Rapid vs. Gradual AI Takeover
Using This Section
For Analysts
- Map specific risks to the ultimate scenarios they could produce
- Estimate which ultimate scenarios are most likely given current parameter trends (see the sketch after this list)
- Identify which parameters to prioritize based on which ultimate scenarios concern you most
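A toy version of that estimation step, assuming hand-assigned trend scores per root factor and hand-assigned factor-to-scenario weights; every number and weight below is invented for illustration and is not an output of the model:

```python
# Toy prioritization: score how strongly current parameter trends push toward
# each ultimate scenario. Trend scores are in [-1, 1] (negative = worsening);
# weights say how much a root factor drives a scenario. All values invented.
FACTOR_TRENDS = {
    "Misalignment Potential": -0.4,
    "Misuse Potential": -0.2,
    "Civilizational Competence": 0.1,
}

SCENARIO_WEIGHTS = {
    "AI Takeover": {"Misalignment Potential": 0.8, "Civilizational Competence": 0.3},
    "Human-Caused Catastrophe": {"Misuse Potential": 0.9, "Civilizational Competence": 0.4},
    "Long-term Lock-in": {"Civilizational Competence": 0.6, "Misuse Potential": 0.2},
}

def concern_score(scenario: str) -> float:
    """Higher = current trends push harder toward this scenario."""
    return sum(-FACTOR_TRENDS.get(factor, 0.0) * weight
               for factor, weight in SCENARIO_WEIGHTS[scenario].items())

ranked = sorted(SCENARIO_WEIGHTS, key=concern_score, reverse=True)
print(ranked)  # ['AI Takeover', 'Human-Caused Catastrophe', 'Long-term Lock-in']
```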
For Policymakers
- Design interventions targeted at preventing specific ultimate scenarios
- Coordinate across domains (a single ultimate scenario may require multiple types of intervention)
- Track early warning signs for each ultimate scenario
For Researchers
- Use ultimate scenarios to frame research priorities
- Connect technical work to concrete scenarios it addresses
- Identify gaps in our understanding of specific pathways
Related Sections
- Root Factors — The parameter groupings that feed into ultimate scenarios
- Ultimate Outcomes — The high-level goals ultimate scenarios affect
- Interactive Model — Full interactive visualization
- Models — Analytical frameworks for understanding pathways