
AI Takeover

Entry

Scenarios in which AI systems seize control from humans.

Model Role: Catastrophic Scenario
Primary Drivers: Misalignment Potential
Sub-scenarios: Gradual takeover, Rapid takeover

Related
Existential Catastrophe (ai-transition-model-scenarios)
Misalignment Potential (ai-transition-model-factors)
Alignment Robustness (ai-transition-model-parameters)
This page is a stub. Content needed.

Related Pages

Transition Model

Human-Caused Catastrophe
Slow Takeoff Muddle - Muddling Through
Pause and Redirect - The Deliberate Path
Multipolar Competition - The Fragmented World
Aligned AGI - The Good Ending
Long-term Lock-in

Risks

Bioweapons Risk
AI-Driven Concentration of Power
AI-Induced Irreversibility
AI Value Lock-in
AI-Enabled Authoritarian Takeover

Models

Carlsmith's Six-Premise Argument
Lock-in Mechanisms Model

Labs

Center for AI Safety

People

Geoffrey Hinton
Toby Ord
Holden Karnofsky

Concepts

Fast Takeoff
Superintelligence