Longterm Wiki

Centre for Effective Altruism — Longtermism

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Centre for Effective Altruism

This CEA page serves as an accessible entry point to longtermism for newcomers to effective altruism and AI safety; it situates AI existential risk within a broader ethical and philosophical framework rather than providing technical depth.

Metadata

Importance: 55/100 · wiki page · educational

Summary

An introductory overview of longtermism as an ethical framework, presenting definitions from Will MacAskill and Toby Ord, and arguing that future people's moral status, their potentially vast numbers, and our ability to shape long-run outcomes make addressing existential risks a top priority. The page highlights risks like misaligned AGI and engineered pandemics as key focus areas for longtermist action.

Key Points

  • Longtermism holds that future people matter morally and that positively influencing the long-term future is a key (or the key) moral priority of our time.
  • The scale argument: if humanity lasts one million years at current population, future people will outnumber present people ten thousand to one.
  • Strong longtermism elevates long-run impact to the top moral priority, while weaker longtermism treats it as a key priority among others.
  • Existential risks—particularly misaligned AGI and engineered pandemics—are highlighted as tractable levers for securing humanity's long-term future.
  • The current generation may be uniquely positioned to prevent catastrophes that could permanently foreclose humanity's potential.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 5 KB
# Longtermism

‘Longtermism’ refers to a set of ethical views concerned with protecting and improving the long-run future.

This view rests on the idea that future people matter morally, that there could be a very large number of future people, and that there are actions we can take now to affect how good or bad the future is.

There are multiple more precise definitions of longtermism.

In his book “[What We Owe the Future](https://www.whatweowethefuture.com/),” [Will MacAskill](http://williammacaskill.com/) offers two distinct definitions of longtermism:

> Longtermism: the view that positively influencing the longterm future is a key moral priority of our time.
>
> Strong Longtermism: the view that positively influencing the longterm future is the key moral priority of our time.

[Toby Ord](http://www.tobyord.com/) offers a longer discussion of longtermism in his book “[The Precipice](https://theprecipice.com/)”:

> “the possibility of preventable existential risks in our lifetimes shows that there are issues where our actions can have sustained positive effects over the whole longterm future, and where we are the only generation in a position to produce those effects. So the view that people in the future matter just as much as us has deep practical implications. We have a long way to go if we are to understand these and integrate them fully into our moral thinking. Considerations like these suggest an ethic we might call longtermism, which is especially concerned with the impacts of our actions upon the longterm future. It takes seriously the fact that our own generation is but one page in a much longer story, and that our most important role may be how we shape—or fail to shape—that story. Working to safeguard humanity’s potential is one avenue for such a lasting impact and there may be others too.”

Ord writes that “longtermism is animated by a moral re-orientation toward the vast future that existential risks threaten to foreclose.”

These definitions are not exhaustive, and how best to define longtermism is a live discussion. What all these definitions share is a recognition that the long-run future matters morally, and that this should bear on our priorities today. The future seems like it could be vast: if humanity stays at its current size and lasts as long as the typical mammalian species — one million years — then future people will outnumber us ten thousand to one.
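The ten-thousand-to-one figure follows from simple arithmetic. As a rough sketch (assuming a world population of about eight billion and roughly a century per generation, neither of which is stated precisely in the text):

```latex
\underbrace{\frac{10^{6}\ \text{years}}{10^{2}\ \text{years per generation}}}_{10^{4}\ \text{generations}}
\times\; 8\times 10^{9}\ \text{people per generation}
\;=\; 8\times 10^{13}\ \text{future people}
```

so future people would outnumber the present generation by roughly $10^{4}$ to one.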

It seems there are some things we can do to ensure the long-run future goes well. For instance, we can work to avoid catastrophes that could cause human extinction or otherwise lock humanity into a negative future, for example by mitigating the risks posed by artificial general intelligence that is not aligned with human values, or by synthetic pathogens that could cause destructive pandemics.

Together, these points are powerful, and they have led many researchers, advocates, entrepreneurs, and policymakers to focus their efforts on ensuring that the long-term future goes well.

... (truncated, 5 KB total)
Resource ID: 2316f361df1b570e | Stable ID: MWE2MWRlZW