
Existential Catastrophe


Model Role: Ultimate Outcome
Primary Drivers: Misalignment Potential, Misuse Potential
Risk Character: Tail risk, irreversible

Related:
  • ai-transition-model-factors: Misalignment Potential, Misuse Potential
  • ai-transition-model-scenarios: AI Takeover, Human-Caused Catastrophe
  • ai-transition-model-parameters: Alignment Robustness

Existential Catastrophe measures the probability and potential severity of catastrophic AI-related events. This is about the tail risks—the scenarios we most urgently want to avoid because they could cause irreversible harm at civilizational scale.

Unlike Transition Smoothness (which concerns the journey) or Steady State Quality (which concerns the destination), Existential Catastrophe is about whether catastrophe is avoided at all. A world with a high existential catastrophe score might still navigate a smooth transition to a good steady state, or it might never get there: the dimension tracks the tail outcome, not the expected path.

Sub-dimensions

Dimension | Description | Key Parameters
--------- | ----------- | --------------
Loss of Control | AI systems pursuing goals misaligned with humanity; inability to correct or shut down advanced systems | Alignment Robustness, Human Oversight Quality
Misuse Catastrophe | Deliberate weaponization of AI for mass harm: bioweapons, autonomous weapons, critical infrastructure attacks | Biological Threat Exposure, Cyber Threat Exposure
Accident at Scale | Unintended large-scale harms from deployed systems; cascading failures across interconnected AI | Safety-Capability Gap, Safety Culture Strength
Lock-in Risk | Irreversible commitment to bad values, goals, or power structures | AI Control Concentration, Institutional Quality
Concentration Catastrophe | Single actor gains decisive AI advantage and uses it harmfully | AI Control Concentration, Racing Intensity
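
Read as data, this table is simply a mapping from each sub-dimension to the parameters that drive it. The sketch below shows one minimal way to encode that mapping; the names and layout are illustrative assumptions, not the wiki's actual schema.

```typescript
// Illustrative sketch only: names and layout are assumptions, not the wiki's data schema.

// Each sub-dimension of Existential Catastrophe, mapped to the model parameters
// listed as its key drivers in the table above.
const subDimensionKeyParameters: Record<string, string[]> = {
  "Loss of Control":           ["Alignment Robustness", "Human Oversight Quality"],
  "Misuse Catastrophe":        ["Biological Threat Exposure", "Cyber Threat Exposure"],
  "Accident at Scale":         ["Safety-Capability Gap", "Safety Culture Strength"],
  "Lock-in Risk":              ["AI Control Concentration", "Institutional Quality"],
  "Concentration Catastrophe": ["AI Control Concentration", "Racing Intensity"],
};

// List every sub-dimension that a given parameter feeds into.
function subDimensionsDrivenBy(parameter: string): string[] {
  return Object.entries(subDimensionKeyParameters)
    .filter(([, params]) => params.includes(parameter))
    .map(([name]) => name);
}

console.log(subDimensionsDrivenBy("AI Control Concentration"));
// -> ["Lock-in Risk", "Concentration Catastrophe"]
```

One thing the encoding makes visible is that a single parameter can drive more than one sub-dimension, as AI Control Concentration does here.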

What Contributes to Existential Catastrophe

Scenario Impact Scores

Primary Contributing Aggregates

Aggregate | Relationship | Mechanism
--------- | ------------ | ---------
Misalignment Potential | ↓↓↓ Decreases risk | Aligned, interpretable, well-overseen systems (i.e., low misalignment potential) are far less likely to cause catastrophe
Misuse Potential | ↑↑↑ Increases risk | Higher bio/cyber exposure, concentration, and racing all elevate existential catastrophe risk
Civilizational Competence | ↓↓ Decreases risk | Effective governance can slow racing, enforce safety standards, and coordinate responses

Key Individual Parameters

Parameter | Effect on Risk | Strength
--------- | -------------- | --------
Alignment Robustness | ↓ Reduces | ↓↓↓ Critical
Safety-Capability Gap | ↑ Increases | ↑↑↑ Critical
Racing Intensity | ↑ Increases | ↑↑↑ Strong
Human Oversight Quality | ↓ Reduces | ↓↓ Strong
Interpretability Coverage | ↓ Reduces | ↓↓ Strong
AI Control Concentration | ↑/↓ Depends | ↑↑ Context-dependent
Biological Threat Exposure | ↑ Increases | ↑↑ Direct
Cyber Threat Exposure | ↑ Increases | ↑↑ Direct
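
One way to read this table is as a set of signed, weighted influences on a single outcome dimension. The sketch below is a toy illustration of that reading: the 0-to-1 parameter scale, the numeric weights, and the averaging rule are assumptions made for exposition, not the model's actual scoring method.

```typescript
// Toy illustration only: the 0-to-1 scale, the weights, and the averaging rule are
// assumptions made for exposition, not the AI Transition Model's actual scoring method.

// direction: 1 means a higher parameter value raises existential catastrophe risk,
// -1 means a higher value lowers it. weight loosely mirrors the table's Strength column.
const parameterInfluences: Record<string, { direction: 1 | -1; weight: number }> = {
  "Alignment Robustness":       { direction: -1, weight: 3 }, // Critical
  "Safety-Capability Gap":      { direction: 1,  weight: 3 }, // Critical
  "Racing Intensity":           { direction: 1,  weight: 2 }, // Strong
  "Human Oversight Quality":    { direction: -1, weight: 2 }, // Strong
  "Interpretability Coverage":  { direction: -1, weight: 2 }, // Strong
  "Biological Threat Exposure": { direction: 1,  weight: 1 }, // Direct
  "Cyber Threat Exposure":      { direction: 1,  weight: 1 }, // Direct
};
// AI Control Concentration is omitted: its effect is context-dependent (↑/↓),
// so assigning it a fixed sign would misrepresent the table.

// Combine parameter levels (assumed to lie in [0, 1]) into a rough, unitless indicator
// where 0 means every influence is favorable and 1 means every influence is unfavorable.
function roughRiskIndicator(levels: Record<string, number>): number {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const [name, { direction, weight }] of Object.entries(parameterInfluences)) {
    const level = levels[name] ?? 0.5; // assume a neutral midpoint when unspecified
    // Risk-increasing parameters contribute their level; risk-reducing parameters
    // contribute the shortfall from a perfect level.
    const contribution = direction === 1 ? level : 1 - level;
    weightedSum += weight * contribution;
    totalWeight += weight;
  }
  return weightedSum / totalWeight;
}

// Example: strong alignment and oversight, but intense racing and a wide safety gap.
console.log(roughRiskIndicator({
  "Alignment Robustness": 0.8,
  "Human Oversight Quality": 0.7,
  "Safety-Capability Gap": 0.9,
  "Racing Intensity": 0.9,
}).toFixed(2)); // -> about 0.55
```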

Why This Matters

Existential catastrophe is the most time-sensitive outcome dimension:

  • Irreversibility: Many catastrophic scenarios cannot be undone
  • Path dependence: High existential catastrophe risk can foreclose good steady states entirely
  • Limited recovery: Unlike transition disruption, catastrophe may preclude recovery
  • Urgency: Near-term capability advances raise near-term existential catastrophe risk

This is why much AI safety work focuses on existential catastrophe reduction—it's the outcome where failure is most permanent.

Related Outcomes

Transition Smoothness, Steady State Quality

Related Pages

Top Related Pages

Transition Model

Alignment Robustness, Misaligned Catastrophe - The Bad Ending, Long-term Trajectory, AI Capabilities, AI Uses

Risks

Bioweapons Risk, AI-Driven Concentration of Power, AI-Induced Irreversibility, AI Value Lock-in, AI-Enabled Authoritarian Takeover

Models

Carlsmith's Six-Premise Argument, Lock-in Mechanisms Model

Labs

Center for AI Safety

People

Geoffrey Hinton, Toby Ord, Holden Karnofsky, Yoshua Bengio, Eliezer Yudkowsky, Nick Bostrom

Concepts

Fast Takeoff