Longterm Wiki

Author

Eliezer Yudkowsky

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Alignment Forum

A widely-cited and debated 2022 post by Eliezer Yudkowsky representing the strongest public statement of his doom thesis; essential reading for understanding the pessimistic wing of AI safety discourse and the arguments that motivate MIRI's research priorities.

Metadata

Importance: 90/100 · blog post · primary source

Summary

Eliezer Yudkowsky's comprehensive argument for why AGI development is likely to result in human extinction, presented as a list of distinct failure modes and reasons why alignment is extremely difficult. The post systematically addresses why standard proposed solutions are insufficient and why the default outcome of unaligned AGI is catastrophic. It serves as a canonical statement of Yudkowsky's pessimistic position on humanity's ability to navigate the AGI transition safely.

Key Points

  • Lists dozens of independent 'lethalities'—reasons why AGI development leads to doom even if individual problems seem solvable, emphasizing cumulative difficulty.
  • Argues that outer alignment, inner alignment, and interpretability are each individually insufficient and collectively still likely to fail under real AGI development conditions.
  • Contends that current ML paradigms produce systems whose internals are opaque, making verification of alignment nearly impossible before deployment at dangerous capability levels.
  • Challenges optimistic views that iterative deployment, scaling feedback, or governance can compensate for fundamental alignment uncertainty.
  • Represents Yudkowsky's explicit claim that without major breakthroughs in alignment theory, AGI timelines imply near-certain catastrophe regardless of developer intent.

Cited by 2 pages

| Page | Type | Quality |
| --- | --- | --- |
| Sharp Left Turn | Risk | 69.0 |
| AI Doomer Worldview | Concept | 38.0 |

Cached Content Preview

HTTP 200 · Fetched Feb 23, 2026 · 98 KB

[AGI Ruin: A List of Lethalities](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#)

36 min read

- [Preamble:](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Preamble_)
- [Section A:](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_A_)
- [Section B:](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_B_)
- [Section B.1: The distributional leap.](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_B_1___The_distributional_leap_)
- [Section B.2: Central difficulties of outer and inner alignment.](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_B_2___Central_difficulties_of_outer_and_inner_alignment_)
- [Section B.4: Miscellaneous unworkable schemes.](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_B_4___Miscellaneous_unworkable_schemes_)
- [Section C:](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_C_)

[2022 MIRI Alignment Discussion](https://www.alignmentforum.org/s/v55BhXbpJuaExkpcD)

[AI Risk](https://www.alignmentforum.org/w/ai-risk) · [Threat Models (AI)](https://www.alignmentforum.org/w/threat-models-ai) · [AI Questions Open Threads](https://www.alignmentforum.org/w/ai-questions-open-threads) · [Double-Crux](https://www.alignmentforum.org/w/double-crux) · [Fuzzies](https://www.alignmentforum.org/w/fuzzies) · [Language Models (LLMs)](https://www.alignmentforum.org/w/language-models-llms) · [Meetups & Local Communities (topic)](https://www.alignmentforum.org/w/meetups-and-local-communities-topic) · [AI](https://www.alignmentforum.org/w/ai) · [Curated](https://www.alignmentforum.org/recommendations)


# [AGI Ruin: A List of Lethalities](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)

by [Eliezer Yudkowsky](https://www.alignmentforum.org/users/eliezer_yudkowsky?from=post_header)

5th Jun 2022

36 min read

[713 comments](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#comments)

[Review by Ben Pace](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#hh5Pamc9LbMkEyEJK)

### **Preamble:**

(If you're already familiar with all basics and don't want any preamble, skip ahead to [Section B](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_B_) for technical difficulties of alignment proper.)

I h

... (truncated, 98 KB total)
Resource ID: 0aea2d39b8284ab1 | Stable ID: MDlhOTdmMT