Longterm Wiki

"Effective Altruism Funded the 'AI Existential Risk' Ecosystem with Half a Billion Dollars" (Dec 5, 2023)

web

A critical, skeptical account of EA's role in building the AI safety field; useful for understanding external critiques and the political economy of AI safety funding, but reflects a strong anti-EA editorial stance and should be read alongside primary sources.

Metadata

Importance: 38/100 · opinion piece · commentary

Summary

This article by Nirit Weiss-Blatt argues that the AI existential risk field was not organically developed but was deliberately constructed through approximately $500 million in funding from the Effective Altruism movement, primarily via Open Philanthropy. It traces how EA-backed funding created and sustained AI safety institutions, research groups, and think tanks, and connects this funding ecosystem to the November 2023 OpenAI board drama involving EA-affiliated members.

Key Points

  • Open Philanthropy, founded by Dustin Moskovitz and Holden Karnofsky, is identified as the largest EA funder of AI existential risk research, disbursing ~$500M to the ecosystem.
  • EA funders explicitly claim credit for 'founding the field of AI Safety' and 'creating the field of AI alignment research' through deliberate field-building strategies.
  • The article argues the scale of funding artificially inflated a fringe subculture, moving AI extinction scenarios from science fiction into mainstream policy discourse.
  • The November 2023 OpenAI board coup is framed as evidence of the EA movement's outsized influence, with EA-affiliated board members Helen Toner and Tasha McCauley implicated.
  • The piece is critical in tone, positioning EA's AI safety funding as ideologically motivated 'extinction alarmism' rather than evidence-based philanthropy.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 18 KB
Effective Altruism Funded the “AI Existential Risk” Ecosystem with Half a Billion Dollars

 The “AI Existential Safety” field did not arise organically. Effective Altruism invested $500 million in its growth and expansion. 

Nirit Weiss-Blatt · Dec 05, 2023

“I was a fan of Effective Altruism. But it became cultish. Happy to donate to save the most lives in Africa, but not to pay techies to fret about AI turning us into paperclips.”

 Steven Pinker, Harvard.

 OpenAI’s Turmoil 

On November 17, 2023, Sam Altman was fired by OpenAI’s Board of Directors: Ilya Sutskever, Adam D’Angelo, and two members with clear Effective Altruism ties, Tasha McCauley and Helen Toner.1 Their vague letter left everyone with more questions than answers. It sparked speculation, interim CEOs (e.g., Emmett Shear), and an employee revolt (of more than 700 OpenAI employees). The board’s reaction was total silence.

A week later, on November 24, Steven Pinker linked to a Wall Street Journal article on how the OpenAI drama “showed the influence of effective altruism.”

The events that led to the coup saga are still unexplained. Nonetheless, the episode became a wake-up call to the power of the Effective Altruism movement, which is “supercharged by hundreds of millions of dollars” and focuses on how advanced artificial intelligence (AI) “could destroy mankind.” It became clearer that “Effective Altruism degenerated into extinction alarmism.”

 

To make this reckoning more data-driven:

1. This article breaks down how the “AI Existential Risk” movement grew to be so influential.

   TL;DR: The magnitude of the funding created this inflated ecosystem.

2. The next article, Your Ultimate Guide, describes the many players involved.

 

 Effective Altruism, AI Safety, and AI Existential Risk 

In a nutshell, according to the Effective Altruism movement, the most pressing problem in the world is preventing an apocalypse where an Artificial General Intelligence (AGI) exterminates humanity.2

With billionaires’ backing, this movement funded numerous institutes, research groups, think tanks, grants, and scholarships under the brand of AI Safety. Effective Altruists tend to brag about their “field-building”:

1. They “Founded the field of AI Safety, and incubated it from nothing” up to this point.
2. They “created the field of AI alignment research” (“aligning future AI systems with our interests”/“human values”).3

 The overlap between Effective Altruism, Existential Risk, and A

... (truncated, 18 KB total)
Resource ID: 21dcc6ead99ba26e | Stable ID: NWM3MTdhMj