How Effective Altruism Lost Its Way - Quillette
A Quillette critique of the EA movement relevant to understanding debates around longtermism and AI safety community dynamics; represents a skeptical outside perspective on x-risk prioritization.
Metadata
Importance: 35/100 · opinion piece · commentary
Summary
A critical analysis arguing that the Effective Altruism movement drifted from evidence-based near-term charity toward speculative longtermist and AI existential risk concerns, undermining its original empirical rigor. The piece contends that EA's focus on hypothetical far-future scenarios and its entanglement with figures like Sam Bankman-Fried damaged its credibility and practical impact. It calls for a return to measurable, near-term humanitarian interventions.
Key Points
- EA originally focused on evidence-based near-term interventions like global health, but shifted toward speculative longtermist priorities including AI x-risk.
- The movement's association with Sam Bankman-Fried and the FTX collapse exposed problems with 'earn to give' and ends-justify-means reasoning.
- Longtermism is criticized for being unfalsifiable and for diverting resources from proven, tractable humanitarian causes.
- The article argues EA's embrace of AI existential risk concerns reflects ideological drift rather than rigorous empirical reasoning.
- The author advocates returning to EA's roots in measurable impact and cost-effectiveness over speculative future-oriented philanthropy.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| EA and Longtermist Wins and Losses | -- | 53.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 59 KB
### **I. The Quest to Maximize Human Well-Being**
A decade and a half ago, the founders of two small Oxford-based nonprofits couldn’t have anticipated that they were launching one of the most significant philanthropic movements in a generation. [Giving What We Can](https://www.givingwhatwecan.org/?ref=quillette.com) was created in 2009 to help people identify the most effective charities and commit to donating a substantial portion of their income. Two years later, [80,000 Hours](https://80000hours.org/?ref=quillette.com)—a reference to the average amount of time people spend on their careers throughout their working lives—was founded to explore which careers have the maximum positive impact. In October 2011, Will MacAskill (the co-founder of both organizations who was then working toward his philosophy PhD at Oxford) [emailed](https://forum.effectivealtruism.org/posts/9a7xMXoSiQs3EYPA2/the-history-of-the-term-effective-altruism?ref=quillette.com) the 80,000 Hours team: “We need a name for ‘someone who pursues a high impact lifestyle,’” he wrote. “‘Do-gooder’ is the current term, and it sucks.”
MacAskill would later [explain](https://forum.effectivealtruism.org/posts/9a7xMXoSiQs3EYPA2/the-history-of-the-term-effective-altruism?ref=quillette.com) that his team was “just starting to realize the importance of good marketing, and \[was\] therefore willing to put more time into things like choice of name.” He and over a dozen other do-gooders set out to choose a name that would encompass all the elements of their movement to direct people toward high-impact lives. What followed was a “period of brainstorming—combining different terms like ‘effective’, ‘efficient’, ‘rational’ with ‘altruism’, ‘benevolence’, ‘charity’.” After two months of internal polling and debate, there were 15 final options, including the Alliance for Rational Compassion, Effective Utilitarian Community, and Big Visions Network. The voters went with the Center for Effective Altruism.
Over the past decade, Effective Altruism (EA) has grown from a small project led by Oxford academics (MacAskill co-founded Giving What We Can with his fellow Oxford philosopher, Toby Ord) to one of the largest philanthropic movements in the world. EA has attracted the attention of a wide and diverse array of influential people—from the philosopher Derek Parfit to Elon Musk—and the movement has directed billions of dollars toward causes such as global health and poverty, biosecurity, and animal welfare. EA has also made plenty of enemies, who have variously [described](https://www.carnegiecouncil.org/media/article/long-termism-ethical-trojan-horse?ref=quillette.com) the movement as a “Trojan horse for the vested interests of a select few,” [dismissed](https://unherd.com/2022/09/effective-altruism-is-the-new-woke/?=refinnar&ref=quillette.com) it as an “a
... (truncated, 59 KB total)
Resource ID: b3d2c92c3d2c7097 | Stable ID: M2FiMTkxZW