Longterm Wiki

Minimal and Expansive Longtermism — Hilary Greaves and Christian Tarsney, Oxford University Press (2025)


A 2025 academic monograph from Oxford's Global Priorities Institute providing philosophical foundations for longtermism, directly relevant to understanding why AI existential risk reduction is prioritized in the effective altruism and AI safety communities.

Metadata

Importance: 62/100 · book · primary source

Summary

This Oxford University Press volume by Hilary Greaves and Christian Tarsney systematically examines longtermism, distinguishing between minimal versions (giving significant weight to future generations) and expansive versions (prioritizing the long-run future above all else). The work provides philosophical foundations for evaluating how much moral weight should be assigned to future people and the implications for policy and action. It is a key academic treatment of longtermism as it relates to existential risk reduction and AI safety priorities.

Key Points

  • Distinguishes 'minimal longtermism' (future matters significantly) from 'expansive longtermism' (future dominates all moral considerations).
  • Provides rigorous philosophical grounding for why reducing existential and catastrophic risks may be among the most important moral priorities.
  • Engages with population ethics, uncertainty, and the tractability of improving long-run outcomes.
  • Authored by leading academic philosophers at Oxford's Global Priorities Institute, making it a high-credibility academic reference.
  • Relevant to AI safety insofar as AI-related existential risks are a primary motivation for longtermist prioritization.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
[Applied Ethics](https://philpapers.org/browse/applied-ethics/) > [Environmental Ethics](https://philpapers.org/browse/environmental-ethics/) > [Topics in Environmental Ethics](https://philpapers.org/browse/topics-in-environmental-ethics/) > [Future Generations](https://philpapers.org/browse/future-generations/) > [Longtermism](https://philpapers.org/browse/longtermism/)

# [Longtermism](https://philpapers.org/browse/longtermism)

Edited by [Hayden Wilkinson](https://philpeople.org/profiles/hayden-wilkinson)(University of Western Australia)

About this topic


|     |     |
| --- | --- |
| _Summary_ | _Longtermism_, as its proponents define it, is the claim that, at least in some of the most important decisions facing agents today, the options that are morally best are those that are best for the long-term future. For instance, whenever present-day policymakers face a choice between a) greatly reducing carbon emissions, thereby improving quality of life for the coming hundreds or thousands of years, and b) instead enriching present-day people, it may be better to do (a). As a philosophical principle, longtermism can be interpreted as a claim about _value_ or as one about what agents _ought_ to do. Both interpretations have been the subject of recent debate. |

|     |     |
| --- | --- |
| _Key works_ | The canonical articulations and defences of longtermism are [Greaves & MacAskill's "The case for strong longtermism"](https://philpapers.org/rec/GRETCF-4) and [MacAskill's What We Owe the Future](https://philpapers.org/rec/MACWWO-2). Various other treatments of longtermism discuss whether it holds on particular moral and decision-theoretic views, such as person-affecting views ([Thomas 2019](https://philpapers.org/rec/THOTAU)), views that allow for risk aversion ([Buchak 2023](https://philpapers.org/rec/BUCHSR); [Pettigrew 2024](https://philpapers.org/rec/PETEAR-4)), views of partial aggregation ([Curran 2025](https://philpapers.org/rec/CURLAT-2)), views that endorse a harm-benefit asymmetry ([Mogensen & MacAskill 2021](https://philpapers.org/rec/MOGTPA-5)), and non-additive theories of value such as averageism and egalitarianism ([Tarsney & Thomas 2020](https://philpapers.org/rec/TARNAI)). For key objections to longtermism, see: [Mogensen 2020](https://philpapers.org/rec/MOGMDA-2), [Plant forthcoming](https://philpapers.org/rec/PLAWMW), [Thorstad 2023](https://philpapers.org/rec/THOHRL), [Mogensen 2021](https://philpapers.org/rec/MOGMC), and [Tarsney 2023](https://philpapers.org/rec/TARTEC-4). Note also the distinction between longtermism and the claim that preventing human extinction is particularly valuable (as is discussed by [Ord 2020](https://philpapers.org/rec/ORDTPE), [Bostrom 2013](https://philpapers.org/rec/BOSERP), and various entries under the Existential Risk category) -- while the latter view may imply longtermism, longtermism can hold without it, as what is 'best for the long-term future' may simply be for future generations |

... (truncated, 98 KB total)