
"On AGI Ruin: A List of Lethalities"

blog

Credibility Rating

2/5
Mixed (2)

Mixed quality. Some useful content but inconsistent editorial standards. Claims should be verified.

Rating inherited from publication venue: Substack

This is Zvi Mowshowitz's response to Eliezer Yudkowsky's influential 'AGI Ruin' post, and essential reading for understanding the MIRI-aligned pessimist case for why current AI development is existentially dangerous.

Metadata

Importance: 72/100 · blog post · commentary

Summary

Zvi Mowshowitz provides a detailed commentary and analysis of Eliezer Yudkowsky's 'AGI Ruin: A List of Lethalities,' breaking down the core arguments for why misaligned AGI poses an existential threat. The post examines specific failure modes and reasons why current AI development trajectories are considered extremely dangerous by MIRI-adjacent thinkers. It serves as both an accessible entry point and a critical engagement with Yudkowsky's pessimistic alignment thesis.

Key Points

  • Yudkowsky's original post enumerates many independent reasons ('lethalities') why AGI development is likely to end in human extinction without major alignment breakthroughs.
  • Key concerns include the difficulty of specifying human values precisely, deceptive alignment, and the competitive pressure to deploy unsafe systems.
  • The post highlights that solving any single problem is insufficient—all failure modes must be addressed simultaneously, making alignment extremely hard.
  • Zvi's commentary offers clarifications, agreements, and pushback on specific points, making the dense original more navigable.
  • The discussion reflects a deeply pessimistic view of current AI safety trajectories, arguing that mainstream ML approaches are insufficient.

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| Sharp Left Turn | Risk | 69.0 |

Cached Content Preview

HTTP 200 · Fetched Feb 23, 2026 · 98 KB
# [Don't Worry About the Vase](https://thezvi.substack.com/)


# On AGI Ruin: A List of Lethalities


[Zvi Mowshowitz](https://substack.com/@thezvi)

Jun 13, 2022


Response to (Eliezer Yudkowsky): [A List of Lethalities](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities).

_Author’s Note: I do not work in AI Safety, lack technical domain knowledge and in many ways am going to be wrong. I wasn’t going to write this to avoid potentially wasting too much time all around without having enough to offer, and for fear of making stupid errors, but it was clear that many people thought my response would be valuable. I thank those whose anonymous sponsorship of this post both paid for my time and made me update that the post was worth writing. I would be happy for this to happen again in the future._

Eliezer has at long last delivered the definitive list of Eliezer Rants About Why AGI Will Definitely Absolutely For Sure Kill Everyone Unless Something Very Unexpected Happens.

This is excellent. In the past we had to make do with makeshift scattershot collections of rants. Now they are all in one place, with a helpful classification system. Key claims are in bold. We can refer, consider and discuss them.

It would be an even better post if it were more logically organized, with dependencies pointed out and mapped and so on.

One could also propose making it not full of rants, but I _don’t_ think that would be an improvement. The rants are _important._ The rants contain _data._ They reveal Eliezer’s cognitive state and his assessment of the state of play. _Not_ ranting would leave important bits out and give a meaningfully misleading impression.

I am reminded of [this comment of mine](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/?commentId=HB3BL3Sa6M

... (truncated, 98 KB total)
Resource ID: 15bb97bb725f1f6a | Stable ID: ZWU1NDYwNT