Longterm Wiki

Long-termism: An Ethical Trojan Horse — Carnegie Council

web

A critical perspective from Carnegie Council's AI & Equality Initiative, relevant for understanding pushback against longtermist frameworks that underpin much of mainstream AI safety prioritization and EA-aligned funding decisions.

Metadata

Importance: 45/100 · opinion piece · commentary

Summary

Anja Kaspersen and Wendell Wallach critique longtermism as articulated by William MacAskill, arguing that while protecting future generations is intuitively appealing, the framework raises serious practical and ethical problems around trade-offs, fairness, and the concentration of decision-making power. The article questions who decides how much present generations sacrifice for speculative future threats and who bears those costs.

Key Points

  • Longtermism's core claim—that future billions outweigh present generations morally—can justify extreme resource allocation to low-probability, high-impact risks.
  • The framework raises fairness concerns: who bears sacrifices for speculative future threats, and who holds decision-making authority over these priorities?
  • The asteroid defense analogy illustrates how longtermist logic, applied rigidly, can produce absurd policy prescriptions.
  • Critics argue longtermism may serve as ideological cover for concentrating power among wealthy tech elites under the guise of altruism.
  • The article challenges AI safety advocates to reckon with how longtermist assumptions shape prioritization of existential risks over present harms.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 16 KB
Long-termism: An Ethical Trojan Horse | Carnegie Council for Ethics in International Affairs 
 Long-termism: An Ethical Trojan Horse

Sep 29, 2022

 Authors

Anja Kaspersen
Former Carnegie Council Senior Fellow, Artificial Intelligence & Equality Initiative (AIEI); IEEE

Wendell Wallach
Former Carnegie-Uehiro Fellow, Artificial Intelligence & Equality Initiative (AIEI); Yale Interdisciplinary Center for Bioethics

About the Initiative

 The Artificial Intelligence & Equality Initiative (AIEI) is an impact-oriented community of practice seeking to understand how AI impacts equality for better or worse. AIEI works to empower ethics in AI so that it is deployed in a just, responsible, and inclusive manner.
Recently the philosopher William MacAskill, with his book What We Owe the Future, has been popularizing the idea that the fate of humanity should be our top moral priority. His core proposition is that today's 8 billion humans are vastly outweighed in importance by the hundreds of billions of humans who could live in future generations if we can avoid wiping out humanity in the near term.

MacAskill's argument is known by the slogan "longtermism" (often written as "long-termism"), and it has already been sharply criticized. For example, columnist Christine Emba has written in The Washington Post: "It's compelling at first blush, but as a value system, its practical implications are worrisome." In practice, she explains, it implies seeing "preventing existential threats to humanity as the most valuable philanthropic cause"—which means we should invest far more in addressing risks that threaten humanity's very long-term existence.

 As Emba says, this can seem impossible to disagree with. Think of climate change: Most of us would now agree that for decades we have been underestimating the threat of environmental collapse, and that with hindsight we should have been more willing to sacrifice some of our living standards to speed the transition away from burning fossil fuels.

The difficulty comes when you consider trade-offs. Exactly how much should be sacrificed today to increase the chances of future generations surviving various perceived threats? How speculative can a future threat be and still justify present sacrifices? Who should sacrifice, and when? And who gets to decide who sacrifices, what they sacrifice, and when?

To take an extreme example, consider a hypothetical proposal to devote a quarter of all human resources to minimizing the risk of a large asteroid or comet striking Earth. Imag

... (truncated, 16 KB total)
Resource ID: 1ec5de6f53de7890 | Stable ID: YzMyY2EwNW