Longterm Wiki

Stable Totalitarianism: An Overview

Authors

80000_Hours · poppinfresh

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

Published on the EA Forum by 80,000 Hours in October 2024, this piece is part of a broader cause prioritization effort and is relevant to AI safety researchers concerned with power concentration and lock-in risks from advanced AI systems.

Forum Post Details

Karma
36
Comments
1
Forum
eaforum
Forum Tags
Cause prioritization · AI safety · Existential risk · Policy · Totalitarianism · AI governance · Democracy · Global governance · Long-term future

Metadata

Importance: 62/100 · blog post · analysis

Summary

An overview by 80,000 Hours analyzing the risk of 'stable totalitarianism'—a scenario where a totalitarian regime achieves permanent global dominance, potentially enabled by AI—as a pressing existential or civilizational risk. The piece evaluates the problem using the scale, neglectedness, and solvability framework, and outlines actions including AI governance and researching global coordination risks.

Key Points

  • Stable totalitarianism—a permanent, world-spanning authoritarian regime—is framed as a potential existential or near-existential risk to humanity's long-term future.
  • AI could entrench totalitarian regimes by providing unprecedented surveillance, enforcement, and coordination capabilities that overcome the internal instabilities that have historically ended such regimes.
  • The risk is assessed as highly neglected compared to its potential scale, with few researchers focused specifically on preventing long-term permanent totalitarianism.
  • Three key conditions are analyzed: whether totalitarian regimes will arise, whether one could dominate globally, and whether such dominance could be permanent.
  • Recommended interventions include AI governance work and research into risks from dangerous concentrations of global power.

Cited by 1 page

Page | Type | Quality
AI Value Lock-in | Risk | 64.0

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 50 KB

# [Stable totalitarianism: an overview](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview)

by [80000\_Hours](https://forum.effectivealtruism.org/users/80000_hours?from=post_header), [poppinfresh](https://forum.effectivealtruism.org/users/poppinfresh?from=post_header)

Oct 29, 2024 · 24 min read · 1 comment


[Cause prioritization](https://forum.effectivealtruism.org/topics/cause-prioritization)[AI safety](https://forum.effectivealtruism.org/topics/ai-safety)[Existential risk](https://forum.effectivealtruism.org/topics/existential-risk)[Policy](https://forum.effectivealtruism.org/topics/policy)[Totalitarianism](https://forum.effectivealtruism.org/topics/totalitarianism)[AI governance](https://forum.effectivealtruism.org/topics/ai-governance)[Democracy](https://forum.effectivealtruism.org/topics/democracy)[Global governance](https://forum.effectivealtruism.org/topics/global-governance)[Long-term future](https://forum.effectivealtruism.org/topics/long-term-future) [Frontpage](https://forum.effectivealtruism.org/about#Finding_content)


[Stable totalitarianism: an overview](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview#)

[Summary](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview#Summary)

[Our overall view](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview#Our_overall_view)

[Scale](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview#Scale__)

[Neglectedness](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview#Neglectedness__)

[Solvability](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview#Solvability__)

[Why might the risk of stable totalitarianism be an especially pressing problem?](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview#Why_might_the_risk_of_stable_totalitarianism_be_an_especially_pressing_problem_)

[Could totalitarianism be an existential risk?](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview#Could_totalitarianism_be_an_existential_risk_)

[Is any of this remotely plausible?](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview#Is_any_of_this_remotely_plausible_)

[Will totalitarian regimes arise in future?](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview#Will_totalitarian_regimes_arise_in_future_)

[Could a totalitarian regime dominate the world?](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitarianism-an-overview#Could_a_totalitarian_regime_dominate_the_world_)

[Could a totalitarian regime last forever?](https://forum.effectivealtruism.org/posts/Ljae6jJEwifD3QCr2/stable-totalitariani

... (truncated, 50 KB total)
Resource ID: 510fbddaf17ab0f9 | Stable ID: ZGZhODdkND