
Effective Altruism, Longtermism, and the Problem of Arbitrary Power — The Philosopher 1923 (2023)


A critical philosophical essay from The Philosopher journal challenging the political implications of EA and longtermism; useful context for understanding external critiques of the ideological environment surrounding AI safety work.

Metadata

Importance: 38/100 · opinion piece · commentary

Summary

A philosophical critique of Effective Altruism and longtermism, arguing that these movements risk concentrating arbitrary power in the hands of a small technocratic elite. The piece examines how longtermist justifications can be used to rationalize undemocratic decision-making in the name of humanity's long-term future. It raises concerns about the political and ethical implications of tech-sector philanthropy guided by these ideologies.

Key Points

  • Longtermism's utilitarian calculus over vast future populations can be used to justify bypassing democratic accountability in present decision-making.
  • EA and longtermism tend to concentrate moral and financial authority among a small, wealthy, technically-oriented elite, raising concerns about arbitrary power.
  • The piece critiques the assumption that a small group can reliably identify and act on what is best for all of humanity across deep time.
  • Philosophical analysis suggests these movements may function as ideological cover for existing power structures in the tech industry.
  • Raises questions about whether longtermist AI safety priorities reflect genuine global concerns or the parochial interests of Silicon Valley.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 22 KB


**From** **_The Philosopher,_** **vol. 111, no. 2 ("** [**Where is Public Philosophy Going?**](https://www.thephilosopher1923.org/product-page/autumn-2023-where-is-public-philosophy-going) **").**

**If you enjoy reading this, please consider** [**becoming a patron**](https://www.patreon.com/thephilosopher1923) **or** [**making a small donation**](https://www.thephilosopher1923.org/supportus) **.**

**We are unfunded and your support is greatly appreciated.**

Impact is a word that gets bandied around a lot in academia. It is the metric by which success is measured and managers appeased, but for anyone working in philosophy it can appear almost impossible for one’s work to reach beyond the small network of professional academics and passionate students. Philosophy seems trapped in the ivory tower. That is what makes William MacAskill exceptional. His work as a philosopher has had a real impact outside of academia, not once but twice.

First there was effective altruism. Inspired by Peter Singer’s work on ethics, its most visible advocates are MacAskill and Toby Ord. Effective altruists argue that not only do people have an obligation to ‘do good’ unconstrained by the borders of their states, but they must also do good in the most effective way possible. You may love stamp collecting, for example, and would feel extraordinarily happy if you gave £1 million to the Royal Philatelic Society London. Yet, with apologies to the philatelists reading this, effective altruism would say that this would be the wrong way to use your money; it would be unethical. Instead, you should take your money and give it to an organisation that has a strong track record in providing treatment for river-blindness in Sub-Saharan Africa.

This is very commonsensical and helps explain effective altruism’s transformation into a major social movement. From its origins in Oxford, it has spread across the world, and effective altruist-aligned organisations, such as MacAskill’s Centre for Effective Altruism, have annual expenditures approaching $400 million with an estimated $46 billion more in funding commitments. That is serious impact.

Then came longtermism. Whereas effective altruism extends the spatial horizons of our ethical obligations beyond borders, longtermism extends the temporal horizons of these obligations to future people. It posits that a major ethical priority, perhaps even _the_ major ethical priority, is to circumvent ‘existential risks’ to humanity. These threats range from planet-killing asteroids to nuclear war to rogue artificial intelligence going ‘full Skynet’ and annihilating humanity. Some are incipient, others may be some way off. Even if some of the risks seem a bit fantastical, it is easy to see the appeal of longtermism, give

... (truncated, 22 KB total)
Resource ID: d2d5d834b691cd54 | Stable ID: Yjk0OTk3Mz