Why Do AI Researchers Rate the Probability of Doom So Low?
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: LessWrong
A 2022 LessWrong discussion post useful for illustrating the debate over P(Doom) estimates and the epistemic gap between AI safety advocates and mainstream AI researchers, but limited in analytical depth.
Forum Post Details
Metadata
Summary
A LessWrong question post exploring the disconnect between mainstream AI researchers' low P(Doom) estimates (5-10%) and the much higher estimates held by AI safety advocates like Eliezer Yudkowsky. The author shares their own reasoning for high doom probability based on the orthogonality thesis, the ease of building misaligned AI, and the political infeasibility of a global AI ban, while seeking to understand what mainstream researchers know that leads to lower estimates.
Key Points
- Mainstream AI researchers estimate ~5-10% probability of AI-caused human extinction, while safety-focused researchers like Yudkowsky estimate >50%.
- Author's case for high P(Doom) rests on the orthogonality thesis, misaligned AI being easier to build than aligned AI, and the impossibility of a global AI ban.
- Author initially estimated 80% P(Doom), later revised to 20-40% after distinguishing 'human extinction' from 'bad outcome I dislike'.
- The post highlights the psychological difficulty of committing to explicit probability estimates for catastrophic outcomes.
- The core question—what do mainstream AI researchers know that lowers their risk estimates—remains largely unanswered in the post itself.
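The revision in the third point (80% down to 20-40%) turns on not conflating "human extinction" with "any bad outcome". A minimal sketch with made-up numbers (not taken from the post) shows how the broader event can carry a much larger probability than the narrow one:

```python
# Hypothetical illustration: all numbers are invented, not from the post.
# "Any bad outcome" includes extinction plus bad-but-survivable outcomes,
# so its probability is necessarily at least P(extinction).
p_extinction = 0.30          # hypothetical P(human extinction)
p_bad_given_survival = 0.50  # hypothetical P(bad outcome | humanity survives)

# Total probability: extinction, or survival followed by a bad outcome.
p_any_bad = p_extinction + (1 - p_extinction) * p_bad_given_survival

print(f"P(extinction)      = {p_extinction:.2f}")
print(f"P(any bad outcome) = {p_any_bad:.2f}")
```

With these illustrative inputs, P(any bad outcome) comes to 0.65 while P(extinction) alone is 0.30, so answering the broader question when asked the narrower one more than doubles the headline number.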
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Why Alignment Might Be Hard | Argument | 69.0 |
Cached Content Preview
[What do AI researchers know, or think they know, that their aggregate P(Doom) is only at 5-10%?](https://www.lesswrong.com/posts/727sAH7RWsxgg93Xz/why-do-ai-researchers-rate-the-probability-of-doom-so-low#What_do_AI_researchers_know__or_think_they_know__that_their_aggregate_P_Doom__is_only_at_5_10___)
[AI Risk](https://www.lesswrong.com/w/ai-risk) · [Deconfusion](https://www.lesswrong.com/w/deconfusion) · [Orthogonality Thesis](https://www.lesswrong.com/w/orthogonality-thesis) · [AI](https://www.lesswrong.com/w/ai)
## [\[ Question \]](https://www.lesswrong.com/questions)
# [Why Do AI Researchers Rate the Probability of Doom So Low?](https://www.lesswrong.com/posts/727sAH7RWsxgg93Xz/why-do-ai-researchers-rate-the-probability-of-doom-so-low)
by [Aorou](https://www.lesswrong.com/users/adnll?from=post_header)
23rd Sep 2022
4 min read
[2 answers](https://www.lesswrong.com/posts/727sAH7RWsxgg93Xz/why-do-ai-researchers-rate-the-probability-of-doom-so-low#answers) · [6 comments](https://www.lesswrong.com/posts/727sAH7RWsxgg93Xz/why-do-ai-researchers-rate-the-probability-of-doom-so-low#comments)
I recently read [What do ML researchers think about AI in 2022](https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022).
Their aggregate probability of Doom is sub-10%. Which is _high_, but as I understand it, in the minds of people like Eliezer Yudkowsky, we're _more likely_ doomed than not.
I personally lean towards Yudkowsky's views, because
- I don't believe human/evolution-selected minds have thinking power that a machine could not have
- I believe in the Orthogonality Thesis
(I think those two claims can be defended empirically)
- I think it is _easier_ to make a non-aligned machine than an aligned one
(I believe that current research strongly hints that this is true)
- I believe that _more people_ are working on non-aligned AI than on aligned AI
- I think it would be politically very hard to stop all AI research, i.e. to implement and enforce a worldwide ban on AI R&D
Given all this (and probably other observations that I made), I think we're doomed.
I feel my heart beating **hard**, when I think to myself I have to give a number.
I imagine I'm bad at it, it'll be wrong, it's more uncomfortable/inconvenient than just saying "we're fucked" without any number, but here goes anyway-
I'd say that we're
(my brain KEEPS on flinching away from coming up with a number, I don't WANT to _actually_ follow through on all my thoughts and observations about the state of AI and what it means for the Future)-
(I think of all the possible Deus-Ex-Machina that cou
... (truncated, 15 KB total)