Longterm Wiki

On the Fundamental Incoherence of Longtermism — Mind Your Metaphysics (Substack)

blog

Credibility Rating

2/5
Mixed (2)

Mixed quality. Some useful content but inconsistent editorial standards. Claims should be verified.

Rating inherited from publication venue: Substack

A philosophical critique relevant to AI safety prioritization debates, particularly for those examining the ethical and mathematical foundations of longtermist arguments that underpin much of mainstream AI existential-risk work.

Metadata

Importance: 38/100 · blog post · commentary

Summary

This blog post argues that longtermism is mathematically incoherent because its core method—multiplying tiny probabilities of future events by astronomically large numbers of potential future people—becomes logically unstable when applied universally as an ethical system. The author contends that while consequentialist reasoning has practical utility for specific decisions, extending it into a comprehensive longtermist framework generates paradoxes that undermine its own foundations.

Key Points

  • Longtermism relies on expected value calculations involving tiny probabilities and enormous numbers of future people, which the author argues produces logical instability.
  • The critique is philosophical rather than empirical: the mathematical structure of longtermist reasoning breaks down when applied universally rather than to specific decisions.
  • Consequentialism and utilitarianism work reasonably well for concrete, bounded decisions but generate paradoxes as complete ethical systems.
  • The article challenges a foundational assumption of EA-aligned AI safety prioritization: that vast future stakes justify present resource allocation toward speculative risks.
  • The piece contributes to a growing philosophical literature questioning whether longtermism's ethical calculus is coherent on its own terms.
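The expected-value move the author targets can be sketched in a few lines. The figures below are hypothetical placeholders chosen only to illustrate the structure of the calculation; they do not appear in the article.

```python
# Sketch of the expected-value calculation longtermism relies on.
# All numbers are hypothetical illustrations, not values from the post.
p_success = 1e-10        # tiny probability an intervention averts extinction
future_people = 1e16     # astronomically large count of potential future people

expected_lives_saved = p_success * future_people
print(expected_lives_saved)  # 1000000.0
```

The critique is that such products stay enormous no matter how small `p_success` becomes, so the framework's verdicts are dominated by near-impossible scenarios, which the author argues makes it logically unstable as a universal ethic.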

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 26 KB
On The Fundamental Incoherence of Longtermism 
 Mind Your Metaphysics 

On The Fundamental Incoherence of Longtermism

 Some very basic math

Scott Lipscomb · Jul 04, 2025

I haven’t written much on ethics on this substack, and there’s a reason for that: my academic work, both in seminary and in graduate school, focused mostly on systematic theology and then philosophy, especially on fundamental questions of epistemology and philosophy of mind. I just didn’t study ethics much in detail, so my insight in ethics is, well, a lot more limited.

 That said, I certainly do think about ethics quite a bit, and of course ethics (like everything else) is something we can (and should!) submit to careful philosophical reflection—especially since, in many ways, ethics is basically philosophy as applied to the actual business of living. And so, I have a number of ethics-related articles in the pipeline, and today’s is the first. I am going to start with a narrowly-focused critique of one aspect of one particular ethical system, before hopefully addressing ethical questions more broadly in future pieces.

 That said, my comment above is important context for what follows—I am certainly no expert on these matters, and am certainly happy for folks to offer correction or counter-arguments if they see fit (of course, I am always happy to hear such rejoinders, even on topics where I have more expertise, but that applies even more here, where I am wandering a bit further afield).

 A Brief Introduction to Consequentialist Ethics 

(If you are already familiar with consequentialism, utilitarianism, and their specific application in longtermism, you may want to skip down to the next section, where I engage my principal critique of longtermism directly: “The Problem with the Long View.”)

Longtermism is a new ethical perspective, an outgrowth of effective altruism, which itself is really just a very market-oriented outgrowth of utilitarianism. Utilitarianism is the primary mode of so-called consequentialist ethics, which argues that the moral rightness of any given action is to be calculated solely on the basis of what the consequences of that action will be. So, for example, if we ask whether it is acceptable to lie to someone, we must ask: what will the consequences of lying be in this particular circumstance? If we are considering lying to someone who has a peanut allergy about whether there are any peanuts in our peanut butter cheesecake, consequentialism would argue that such a lie would be wrong specifically because it would cause harm to that person. Meanwhile, if we are considering whet

... (truncated, 26 KB total)
Resource ID: f0ae2d940e2e3841 | Stable ID: OTI5YjczMm