Rationality: A-Z (The Sequences) — LessWrong
Blog
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: LessWrong
Foundational reading for the rationalist and early AI safety community; provides the epistemic and philosophical groundwork underlying much of MIRI's research agenda and early LessWrong-adjacent AI safety thinking, though not itself a technical AI safety resource.
Metadata
Summary
A comprehensive collection of Eliezer Yudkowsky's blog posts on human rationality, epistemology, cognitive biases, and value theory, originally written 2006-2009 and reorganized into themed sequences. It serves as the foundational intellectual background for LessWrong, MIRI, CFAR, and much of the effective altruist community, introducing Bayesian reasoning, motivated cognition, reductionism, and the nature of goal-directed systems.
Key Points
- Covers Bayesian epistemology, cognitive biases, and motivated reasoning as foundational tools for clear thinking about complex problems, including AI risk.
- Introduces key concepts about goal-directed systems and how minds can "lose their purpose", directly relevant to AI alignment thinking.
- Explains evolutionary psychology and cognitive architecture to help identify systematic errors in human and potentially AI reasoning.
- Explores value theory and ethics, laying groundwork for thinking about how to specify and preserve human values in intelligent systems.
- Serves as the primary intellectual foundation for the LessWrong and rationalist community from which much early AI safety work emerged.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Machine Intelligence Research Institute | Organization | 50.0 |
Cached Content Preview
Rationality: A-Z — LessWrong. Map and Territory: Predictably Wrong, Fake Beliefs, Noticing Confusion, Mysterious Answers. How to Actually Change Your Mind: Overly Convenient Excuses, Politics and Rationality, Against Rationalization, Against Doublethink, Seeing with Fresh Eyes, Death Spirals, Letting Go. The Machine in the Ghost: The Simple Math of Evolution, Fragile Purposes, A Human's Guide to Words. Mere Reality: Lawful Truth, Reductionism 101, Joy in the Merely Real, Physicalism 201, Quantum Physics and Many Worlds, Science and Rationality. Mere Goodness: Fake Preferences, Value Theory, Quantified Humanism. Becoming Stronger: Yudkowsky's Coming of Age, Challenging the Difficult, The Craft and the Community.
Rationality: A-Z (or "The Sequences") is a series of blog posts by Eliezer Yudkowsky on human rationality and irrationality in cognitive science. It is an edited and reorganized version of posts published to Less Wrong and Overcoming Bias between 2006 and 2009. This collection serves as a long-form introduction to formative ideas behind Less Wrong, the Machine Intelligence Research Institute, the Center for Applied Rationality, and substantial parts of the effective altruist community. Each book also comes with an introduction by Rob Bensinger and a supplemental essay by Yudkowsky.
The first two books, Map and Territory and How to Actually Change Your Mind, are available on Amazon in print and e-book versions.
The entire collection is available as an e-book and audiobook. A number of alternative reading orders for the essays can be found here, and a compilation of all of Eliezer's blog posts up to 2010 can be found here.
Map and Territory
What is a belief, and what makes some beliefs work better than others? These four sequences explain the Bayesian notions of rationality, belief, and evidence. A running theme: the things we call “explanations” or “theories” may not always function like maps for navigating the world. As a result, we risk mixing up our mental maps with the other objects in our toolbox.
How to Actually Change Your Mind
This truth thing seems pretty handy. Why, then, do we keep jumping to conclusions, digging our heels in, and recapitulating the same mistakes? Why are we so bad at acquiring accurate beliefs, and how can we do better? These seven sequences discuss motivated reasoning and confirmation bias, with a special focus on hard-to-spot species of self-deception and the trap of “using arguments as soldiers”.
The Machine in the Ghost
Why haven’t we evolved to be more rational? Even taking into account resource constraints, it seems like we could be getting a lot more epistemic bang for our evidential buck. To get a realistic picture of how and why our minds execute their biological functions, we need to crack open the hood and see how evoluti
... (truncated, 22 KB total)