AI Panic News - The Rationality Trap
aipanic.news/p/the-rationality-trap
A critical Substack essay by Nirit Weiss-Blatt examining psychological harms within the MIRI/rationalist community; relevant to understanding sociological critiques of the AI safety movement and its institutional culture.
Metadata
Importance: 38/100 · opinion piece · commentary
Summary
A critical investigative essay examining the psychological and social harms within the rationalist community centered around MIRI and CFAR, documenting cases of psychosis, suicide, and cult-like dynamics. The piece explores how the community's extreme commitment to rationality, combined with its preoccupation with AI existential risk, created environments harmful to mental health. It raises broader questions about whether the cultural pathologies of the rationalist community undermine its credibility on AI safety.
Key Points
- Multiple MIRI/CFAR-adjacent community members experienced psychotic episodes; at least two died by suicide, suggesting systemic psychological harm in the community.
- The rationalist community exhibited cult-like behaviors: isolating 'normies,' confrontational 'debugging' sessions, psychedelic experimentation, and pushing social norm boundaries.
- The piece questions whether the same epistemic community driving AI existential risk discourse is itself epistemically and psychologically dysfunctional.
- Bloomberg's 2023 reporting on EA and rationalist community harms is cited as evidence these issues extend beyond isolated incidents.
- The essay uses the term 'rationality trap' to suggest that extreme rationalist frameworks can paradoxically produce irrational and harmful outcomes.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Center for Applied Rationality | Organization | 62.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 44 KB
# THE RATIONALITY TRAP
[Nirit Weiss-Blatt](https://substack.com/@aipanicnews)
Sep 12, 2025
* * *
You can listen to the 6,000 words via ElevenLabs
* * *
# **Prologue**
Jessica Taylor joined the [Machine Intelligence Research Institute](https://web.archive.org/web/20161020014516/https:/intelligence.org/team/) (MIRI) in August 2015.
Located in Berkeley, California, it was the epicenter of a small community that calls itself the [rationalist](https://web.archive.org/web/20250829015849/https:/en.wikipedia.org/wiki/Rationalist_community) community. Its leader is writer Eliezer Yudkowsky, founder of the LessWrong forum and of MIRI, which hired Taylor as a research fellow. MIRI was preoccupied with the idea of “friendly” artificial intelligence (AI). Taylor had an MSc in computer science from Stanford and wanted to work on software agents “that can acquire human concepts.”
Two years later, she had a psychotic break. “I believed that I was intrinsically evil,” she would later write, “\[and that I\] had destroyed significant parts of the world with my demonic powers.”
After she [shared the details](https://web.archive.org/web/20250827015637/https:/www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe) of her experience on the community’s forum in 2021, other testimonies surfaced.
In 2023, Bloomberg gave the wider public a glimpse with “ [The Real-Life Consequences](https://web.archive.org/web/20230523012825/https:/www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-f
... (truncated, 44 KB total)
Resource ID: 67b1f575de456581 | Stable ID: YjE5MGU2YW