Longtermism's Philosophical Credibility After FTX
Quick Assessment
| Dimension | Assessment |
|---|---|
| Philosophical Status | Contested; core claims remain defended but face renewed scrutiny |
| Reputational Damage | Severe in late 2022–2023; partial recovery ongoing |
| Funding Impact | $160M in FTX Future Fund commitments lost; Open Philanthropy paused then resumed longtermist grantmaking in early 2023 |
| Institutional Status | Mixed; Future of Humanity Institute closed April 2024; Global Priorities Institute continues with Open Philanthropy support |
| Ongoing Credibility | Defended by MacAskill and others; challenged on independent philosophical grounds in peer-reviewed literature |
Key Links
| Source | Link |
|---|---|
| Radical Philosophy critique (Crary, 2023) | radicalphilosophy.com |
| Wikipedia (Effective Altruism) | en.wikipedia.org |
| Wikipedia (Longtermism) | en.wikipedia.org |
| 80,000 Hours introduction | 80000hours.org |
| Torres, Aeon critique (Oct 2021) | aeon.co |
| MacAskill, EA Forum statement (Nov 2022) | forum.effectivealtruism.org |
| Open Philanthropy 2023 progress report | openphilanthropy.org |
| FHI Final Report (Apr 2024) | forum.effectivealtruism.org |
Overview
Longtermism is the ethical view that positively influencing the long-term future of humanity constitutes a key moral priority of our time. Developed most prominently by philosophers William MacAskill and Toby Ord within the Centre for Effective Altruism ecosystem, the philosophy holds that future generations deserve equal moral consideration to those alive today, that the potential scale of future human lives dwarfs the present population, and that actions reducing existential risks—such as AI misalignment, engineered pandemics, or nuclear conflict—carry substantial expected value as a result.1 Its institutional expression included the Future of Humanity Institute at Oxford, the Global Priorities Institute, and a network of researchers and grantmakers funded substantially through Open Philanthropy and, critically, through Sam Bankman-Fried's FTX Future Fund.2
The collapse of FTX in November 2022 and Bankman-Fried's subsequent conviction on fraud charges precipitated a significant crisis for the longtermist movement. Bankman-Fried had been one of longtermism's most prominent public funders, pledging the bulk of his fortune—which peaked at approximately $16 billion—to effective altruist and longtermist causes.3 The FTX Future Fund had committed $160 million to longtermist researchers and organizations, funding that became unrecoverable when the exchange filed for bankruptcy.4 The scandal prompted intense public debate about whether longtermism's philosophical framework had, in some sense, licensed or enabled the misconduct—or whether the ideology simply suffered guilt by association with a fraudulent donor.
The question of philosophical credibility after FTX splits along two axes. The first concerns reputational damage: the movement's public standing, institutional health, and ability to attract funding and talent. The second is more fundamental: whether critics had identified genuine theoretical weaknesses in longtermism that the scandal brought into sharper relief, or whether the philosophy's core arguments remain intact regardless of who funds them. Both debates remain unresolved, and academic scrutiny of longtermism's foundations—including a Leverhulme Trust–funded research project at the University of Bristol announced in 2024 and multiple peer-reviewed articles published in Ethics, Philosophy and Public Affairs, and Synthese—continues.5
History and Background
Origins in Effective Altruism
Longtermism emerged from the effective altruism (EA) movement, which applies utilitarian-influenced reasoning to maximize the good achievable through philanthropy and career choices. Peter Singer's work on the moral obligations of affluent individuals toward distant strangers provided the philosophical foundation. During the 2010s, EA expanded its focus to incorporate longer-horizon concerns alongside near-term interventions such as global health and poverty alleviation.6 MacAskill is credited with coining the term "longtermism" in 2017, and the movement crystallized around two major texts: Toby Ord's The Precipice: Existential Risk and the Future of Humanity (2020) and MacAskill's What We Owe the Future (2022).7
The institutional infrastructure that developed around this philosophical shift was substantial. By 2021, the EA movement had accumulated an estimated $46 billion in dedicated funding, much of it directed toward existential risk research.8 The Future of Humanity Institute and the Global Priorities Institute (GPI) at Oxford served as academic homes for longtermist research, while 80,000 Hours steered early-career professionals toward longtermist cause areas including AI safety and biosecurity.
Sam Bankman-Fried and FTX's Role
Bankman-Fried's relationship with EA began in 2012 when MacAskill advised him to pursue "earning to give"—a strategy of maximizing income in order to donate at scale—as an effective altruistic path.9 Bankman-Fried went on to found the cryptocurrency exchange FTX and became one of the movement's most prominent funders. He pledged to donate the overwhelming majority of his fortune to EA and longtermist causes and established the FTX Future Fund as the vehicle for this giving.
Bankman-Fried described himself primarily as a consequentialist utilitarian rather than a longtermist specifically. By his own account, he was "a total, act, hedonistic/one level (as opposed to high and low pleasure), classical (as opposed to negative) utilitarian."10 The FTX Future Fund, however, was explicitly organized around longtermist grant priorities: it set out to make grants based on longtermist ideas, with humanity's overriding ethical aim framed as protecting future generations.11 By 2022, approximately 40% of EA's funding was directed toward longtermist causes, much of it toward AI safety.11 This distinction matters for evaluating how directly Bankman-Fried's conduct implicates longtermism specifically versus EA's broader utilitarian framework—critics and defenders have drawn different conclusions from the same facts.
The FTX Future Fund committed $160 million to longtermist projects before the exchange's collapse.4 MacAskill served on the Future Fund's advisory board and had directed $36.5 million to organizations he co-founded.12 When FTX filed for bankruptcy in November 2022 amid revelations that customer funds had been misappropriated—allegedly to support its affiliated trading firm, Alameda Research—the Future Fund team, including its director Nick Beckstead, resigned. FTX's bankruptcy CEO John J. Ray III compared the collapse to Enron, citing failures of corporate controls.13 Bankman-Fried was subsequently convicted on multiple counts of wire fraud and securities fraud.
The Reputational Fallout
The immediate consequences for longtermism's public standing were substantial. On November 12, 2022, MacAskill published a statement on the EA Forum acknowledging the damage directly. He wrote that he did not know which emotion was stronger: his "utter rage at Sam (and others?) for causing such harm to so many people," or his "sadness and self-hatred for falling for this deception." He also tweeted "I cannot in words convey how strongly I condemn what they did," then went largely silent on the platform until June 2023.14 MacAskill argued in his statement that What We Owe the Future explicitly opposed "ends justify the means" reasoning, citing relevant passages in his own defense.14
Critics contested this framing, arguing it "admits that they were duped by an unethical huckster, but denies that there is any serious flaw in the movement itself," treating SBF's conduct as "simply an unhappy coincidence that Samuel Bankman-Fried was associated with them."15 80,000 Hours expressed regret at having placed trust in Bankman-Fried and acknowledged the organization was grappling with the lessons of the collapse.16
Critics argued the scandal revealed something structural rather than merely incidental. The longtermist movement had grown to depend on a small number of extremely wealthy donors—a concentration of philanthropic power that some argued left it vulnerable to the ethical failures of individual actors and insulated from the democratic accountability that might otherwise have constrained them.17 The movement's funding apparatus, critics noted, channeled substantial resources into EA institution-building and movement growth; Émile Torres observed that the EA movement had "$46.1 billion in committed funding" before the FTX collapse, with billions remaining dedicated to longtermist efforts even after.18
A TIME Magazine investigation reported in March 2023 that some EA leaders had received warnings about concerns regarding Bankman-Fried's conduct years before the collapse.19 EA community responses disputed the significance of those warnings, arguing they were not specific enough to warrant action; the degree of prior knowledge and its implications for institutional accountability remain contested.19
Peter Singer, whose philosophical work had inspired EA's founding generation, acknowledged that the reputational damage from the FTX collapse was substantial and expressed uncertainty about the movement's near-term recovery prospects.20
Institutional Consequences
The Future of Humanity Institute closed on April 16, 2024. The closure resulted primarily from decisions by Oxford's Faculty of Philosophy, which in 2020 imposed a freeze on FHI's fundraising and hiring. That freeze led to the loss of lead researchers and a promising cohort of junior researchers, and in late 2023 the Faculty announced that the contracts of remaining FHI staff would not be renewed. Anders Sandberg, a senior FHI researcher, described the process as "a gradual suffocation by Faculty bureaucracy."21

The institute's largest team, the Governance of AI Program, had already spun out of the university in 2021 to escape bureaucratic restrictions and became an independent organization.22 Staff also suspected that an interdepartmental transfer plan—intended to move FHI out of the Faculty of Philosophy into a more hospitable administrative home—had been blocked internally.22 Additional contributing factors included controversies surrounding FHI figures: a 1996 email with racist content by Nick Bostrom that resurfaced and prompted a university investigation, and separate allegations of misconduct involving FHI-adjacent individuals.23

Open Philanthropy had been FHI's most important funder, making grants of £1.6 million in 2017 and £13.3 million in 2018; a significant portion of the latter remained unspent at the time of closure due to the hiring freeze.22 While the FTX collapse disrupted broader EA funding flows, the proximate causes of FHI's closure were Oxford administrative decisions rather than the loss of FTX funding specifically.
Nick Beckstead, who had served as CEO of the FTX Foundation after joining in November 2021, co-signed the Future Fund team's resignation statement on November 11, 2022, expressing that the team was "shocked and immensely saddened" by the events at FTX and concerned for "thousands of customers whose finances may have been jeopardized."24 By August 2023, Beckstead had stepped down from the boards of Effective Ventures UK and Effective Ventures US, with the board noting that his ongoing recusal from all FTX-related matters had made it difficult for him to contribute effectively.25 No further public philosophical statements by Beckstead defending or critiquing longtermism have been published since his resignation.
The Global Priorities Institute at Oxford has continued to operate. Open Philanthropy recommended a grant of approximately $3.3 million to GPI for general support in the 2023–2024 period, and GPI produced active research output including working papers on population ethics and epistemic challenges to longtermism.26
Funding Disruption and Recovery
Open Philanthropy paused most new longtermist funding commitments in November 2022 following the FTX collapse. That pause was lifted in late January 2023, after the organization conducted an internal review and established new grant assessment guidance.27 In 2023, Open Philanthropy directed over $750 million in grants across its portfolios.28 Notably, Open Philanthropy renamed its "Longtermism" portfolio to "Global Catastrophic Risks (GCR)" in 2023–2024, stating that AI risk and biorisk are not only long-term concerns and could threaten many lives in the near future; the rebranding was also intended to provide better symmetry with the "Global Health and Wellbeing" portfolio.28
The FTX collapse caused a reduction in expected longtermist funding estimated in the hundreds of millions of dollars annually. Other funders partially filled the gap: the Survival and Flourishing Fund increased giving in 2023, and Longview Philanthropy had moved over $55 million since its founding in 2018, with further increases expected in 2024.29 The Long-Term Future Fund paid out approximately $5.36 million in grants between May 2023 and March 2024.29 On the commercial side, Google invested $500 million in Anthropic in October 2023, committing up to $1.5 billion total—replacing FTX's earlier $500 million investment in the company.29
The Near-Term vs. Long-Term Rift Within EA
The FTX collapse also intensified pre-existing tensions within EA between advocates focused on global health and poverty alleviation and those prioritizing longtermist cause areas. Critics from the global health side argued that longtermism's dominance of EA funding had distorted priorities and that the FTX scandal validated concerns about speculative cause prioritization. Devex reporting from late 2022 noted that global health and development organizations funded by EA were uncertain whether their funding pipelines would survive the collapse.30 Through 2022, roughly 70% of Open Philanthropy's total funding had gone toward global health and wellbeing, with approximately 30% toward longtermist areas; the subsequent rebranding to "Global Catastrophic Risks" further blurred the near-term/long-term distinction at the organizational level.27
Philosophical Critiques Independent of FTX
It is important to distinguish between reputational damage arising from association with fraud and substantive philosophical critiques—some of which predate FTX and continue to develop on independent grounds.
The Prioritization Problem
A central objection to longtermism concerns its treatment of present versus future welfare. Because longtermist calculations assign future individuals the same moral weight as present ones, and because the potential future human population vastly outnumbers those alive today—potentially by ratios of thousands to one if humanity persists for millions of years—longtermist reasoning systematically tends to direct resources toward speculative future risks rather than concrete contemporary suffering.6 Critics argue that when this logic is applied consistently, it renders nearly every immediate problem less important than existential risk mitigation, effectively licensing the de-prioritization of poverty, climate change, and other present injustices.31
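The arithmetic behind this objection can be made explicit with a stylized expected-value comparison. The figures below are illustrative assumptions, not numbers drawn from the cited literature:

```latex
% Stylized expected-value comparison (illustrative numbers only).
% N = assumed number of potential future lives; \Delta p = assumed
% reduction in extinction probability from some intervention.
\underbrace{10^{16}}_{N \text{ future lives}}
\times
\underbrace{10^{-8}}_{\Delta p \text{ risk reduction}}
= 10^{8} \text{ expected lives}
\;\gg\;
10^{6} \text{ present lives saved with certainty}
```

On these hypothetical figures, a one-in-a-hundred-million reduction in extinction risk outweighs saving a million people today for certain, which is precisely the comparison critics find troubling.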
Philosopher Alice Crary, writing in Radical Philosophy in 2023, argued that longtermists give existential threats such weight that they deprioritize actual suffering in the world we live in, and that the FTX collapse brought this structural feature of the ideology into public view.32 David Thorstad, in a 2023 article in Philosophy and Public Affairs, developed a related argument: that longtermists simultaneously hold that humanity faces high existential risk and that existential risk mitigation has astronomical value, but that existential risk pessimism actually reduces the expected value of existential risk mitigation—threatening what he calls the "astronomical value thesis" at the heart of longtermist prioritization.33
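Thorstad's tension can be sketched in a deliberately simplified model (an illustration only; his article develops several richer variants). Assume a constant per-century extinction risk \(r\) and a fixed value \(v\) per surviving century:

```latex
% Expected value of the future under constant per-century risk r:
V(r) = \sum_{t=1}^{\infty} v\,(1-r)^{t} = \frac{v\,(1-r)}{r}
% A one-off intervention lowering this century's risk from r to r - \Delta
% raises expected value by:
\text{Gain}(\Delta) = \frac{v\,\Delta}{r}
```

Because the gain scales as \(1/r\), pessimism about background risk (large \(r\)) directly shrinks the payoff of a one-off mitigation: at \(r = 0.2\) per century the expected future lasts only about four centuries, so astronomical value never materializes, whereas at \(r = 10^{-3}\) it can. This is the sense in which high estimated risk and astronomical mitigation value pull against each other.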
The "Ends Justify Means" Concern
Several critics have argued that longtermism's consequentialist architecture—its insistence that actions be evaluated by their expected long-term outcomes—creates insufficient ethical guardrails against harmful behavior in the present. Because Bankman-Fried described himself as a committed consequentialist, some observers raised the question of whether his fraudulent conduct could be rationalized within a framework that prizes outcome maximization.34 This is not to say that longtermism endorses fraud; MacAskill explicitly denied that it does, citing passages in What We Owe the Future opposing "ends justify the means" reasoning.14 Critics contend, however, that a philosophy oriented around scale-sensitive expected value calculations may systematically underweight deontological constraints—honesty, informed consent, fiduciary duty—that are not easily quantified.35
In the same Radical Philosophy essay, Crary argued that longtermism's corruption is inseparable from the way its core ideas are put into practice, suggesting the problem is not merely one of bad individual actors but of structural features of the ideology.32
Epistemic Challenges and Probability Concerns
Beyond the FTX controversy, critics have identified several internal philosophical vulnerabilities. Émile Torres, in an essay published in Aeon on October 19, 2021—more than a year before the FTX collapse—argued that longtermism's emphasis on maximizing the long-run potential of the human species could justify extreme technological acceleration and concentration of power in the present, and that it risks treating current individuals as instrumentalities for future value rather than as beings with intrinsic moral worth.36 Torres described himself as "a former longtermist" who had "come to see this worldview as quite possibly the most dangerous secular belief system in the world today."36 Torres has subsequently published in academic venues including Synthese, Inquiry, Bioethics, and Metaphilosophy, and in 2024 co-authored with Timnit Gebru a peer-reviewed article in First Monday arguing that the normative framework motivating AGI development is rooted in the Anglo-American eugenics tradition.37
The probability mathematics underlying longtermist prioritization has also attracted sustained academic criticism. Longtermist arguments often hold that even very small reductions in extinction probability carry more expected value than saving large numbers of current lives, given the astronomical scale of potential future generations.38 Christian Tarsney, in a 2023 article in Synthese, argued that the effects of present actions on the very long-run future may be nearly impossible to predict, and developed models showing mixed conclusions about whether the case for longtermism is robust under this "epistemic cluelessness."39 Thorstad's 2024 article in Ethics further critiqued the moral mathematics underlying longtermist arguments for prioritizing existential risk mitigation, and was cited in a 2025 symposium introduction in Moral Philosophy and Politics as a major recent contribution to the critical literature.40
A 2025 symposium in Moral Philosophy and Politics surveyed the major objections: epistemic cluelessness about the far future, questionable assumptions about demographic development, reliance on "fanaticist decision theory," excessive demandingness, insufficient attention to current problems, and threats to the integrity of meaningful human lives.40
Internal Incoherence
Some critics have pointed to a potential self-undermining quality in longtermism's demands. If present generations are required to make substantial sacrifices in the service of future flourishing, and if this pattern persists across time, the result may be a history of continuous present-day privation whose benefits accrue only to a final generation—a structure that critics argue undermines the philosophy's own goal of ensuring humanity's long-run welfare. This "perpetual sacrifice" objection has been raised in the population ethics literature and in critical forums.41 Defenders respond that longtermist recommendations do not require impoverishing present generations, pointing to interventions such as AI safety research and pandemic preparedness that are argued to produce near-term co-benefits.
Defenders' Responses
Proponents of longtermism have consistently argued that the FTX scandal reflects the moral failures of an individual, not the validity of a philosophical framework. MacAskill's and Toby Ord's arguments rest on three premises—that future people matter morally, that future populations could be enormous in scale, and that current actions can reliably influence existential outcomes—none of which is logically affected by what any particular donor did with cryptocurrency exchange funds.42
MacAskill's immediate post-FTX response invoked the content of What We Owe the Future directly, arguing the book explicitly opposed "ends justify the means" reasoning and that the fraud was a violation of, not an expression of, longtermist ethics.14 He acknowledged the need for serious reflection on how the EA community had extended trust to Bankman-Fried and what institutional changes were warranted.
Defenders also note that longtermism encompasses a range of positions, from MacAskill's more modest claim that long-term impact is a key moral priority, to stronger versions holding that it is the overwhelming priority. Hilary Greaves and Christian Tarsney's 2025 Oxford University Press volume distinguishes between "minimal" and "expansive" longtermism, representing a pro-longtermism philosophical development in direct dialogue with the critical literature.43 The more cautious formulations explicitly recommend robust actions under uncertainty, including option-preservation, avoidance of irreversible harms, and epistemic humility—recommendations not obviously in tension with conventional ethical constraints.44
On the question of present neglect, defenders argue that longtermism is compatible with significant near-term intervention and that critics construct a false opposition. Pandemic preparedness, for instance, serves both near-term public health and long-run existential risk reduction; AI safety research is relevant both to current harms from deployed systems and to hypothetical future catastrophe.45
The Global Priorities Institute has continued to produce research engaging with the critical literature, including working papers that take seriously the epistemic challenges raised by Tarsney and Thorstad.26 Open Philanthropy, the largest funder of longtermist-adjacent work, resumed grantmaking after its brief pause and directed over $750 million across its portfolios in 2023.28
Criticisms and Concerns
Association with Undemocratic Power Concentration
One of the most persistent criticisms concerns longtermism's relationship to concentrated private wealth. The movement's philanthropic infrastructure has historically depended on a small number of extremely wealthy technology entrepreneurs who exercise significant influence over research agendas and institutional priorities without meaningful democratic oversight.17 Critics characterize this as privately held power operating in the public sphere with minimal civic accountability, and argue that the FTX scandal illustrated the risks inherent in such concentration. This critique, developed by Crary in Radical Philosophy and by Torres in multiple venues, treats the dependency on billionaire philanthropists not as a contingent feature but as a structural one arising from longtermism's compatibility with—and appeal to—those who have already accumulated exceptional resources.32
Alleged "Ethics Washing"
Critics have alleged that longtermist and EA identity functioned as what they term "ethics washing"—presenting a philanthropic persona that provided social legitimacy and political access while underlying financial practices were problematic. Bankman-Fried's congressional testimony emphasized his philanthropic commitments even as his exchange allegedly misused customer funds.46 Proponents dispute this characterization, arguing that the vast majority of EA-identified donors and organizations were themselves victims of fraud and that characterizing the entire movement's identity as performative overgeneralizes from a single case. Whether the alleged pattern represents a systemic feature of longtermist culture or an individual's exploitation of genuine community norms remains contested among analysts of the collapse.
The Absence of Tractable Interventions
A practical critique that has gained traction concerns the difficulty of identifying concrete, tractable interventions that reliably reduce existential risk by measurable amounts. Critics note that identifying clearly promising longtermist interventions that meet rigorous evidence-based standards has proven challenging, raising questions about whether the framework generates reliable action-guidance.47 Defenders contest this directly, pointing to AI safety research, pandemic preparedness infrastructure, and biosecurity policy work as interventions with both near-term value and longtermist rationale. The disagreement partly reflects differing standards of evidence: critics apply standards developed in global health cost-effectiveness analysis, while defenders argue those standards are not appropriate for novel risk categories where experimental evidence is unavailable.
Key Uncertainties
Several significant questions remain open:
- Philosophical independence from scandal: Whether the core philosophical arguments of longtermism can be evaluated cleanly apart from their institutional context and funding history is disputed. Critics like Crary argue the ideas and their implementation are inseparable; defenders maintain the opposite.
- Probability and cluelessness: The degree to which existential risk estimates are tractable versus speculative remains unresolved. Tarsney's 2023 Synthese article and Thorstad's 2024 Ethics article both raise formal challenges to the probability reasoning underlying longtermist prioritization; defenders have yet to fully respond in the peer-reviewed literature.39,40
- Institutional recovery: Whether EA and longtermist institutions will recover funding and public credibility comparable to their pre-2022 levels is unclear. The closure of the Future of Humanity Institute and the loss of FTX funding represent significant institutional changes; the rebranding of Open Philanthropy's portfolio and the continued operation of GPI represent countervailing signals.
- Philosophical development: How longtermism's academic defenders will respond to the growing body of philosophical criticism—including the Bristol Foundations of Longtermism project and the 2025 Moral Philosophy and Politics symposium—remains to be seen. The Greaves–Tarsney distinction between minimal and expansive longtermism may represent one such response.43
Sources
Footnotes
- Longtermism: A Philosophy to Last a Lifetime or Two — Oxford Political Review ↩
- OK, WTF Is Longtermism? — VICE (November 2022) ↩
- Effective Altruism and Longtermism: The Elite Tech Ideologies Damaged by FTX — The Week ↩ ↩2
- Who Else Belongs to My Moral Circle? The Foundations of Longtermism — University of Bristol Arts Matter Blog (November 2024) ↩
- Long-termism: An Ethical Trojan Horse — Carnegie Council ↩ ↩2
- Centre for Effective Altruism — Longtermism ↩
- Longtermism: A Philosophy to Last a Lifetime or Two — Oxford Political Review ↩
- Sam Bankman-Fried, Effective Altruism, and Alameda — TIME (March 2023) ↩
- Contemporary Utilitarians: Sam Bankman-Fried — Utilitarianism.com ↩
- Sam Bankman-Fried and the Effective Altruism Delusion — New Statesman (November 7, 2023) ↩ ↩2
- What We Owe the Past: William MacAskill, Effective Altruism and the Wrong Life — Logos Journal (October 2023) ↩
- What the FTX Collapse Teaches Us About Ethics — Principia Advisory (March 2023) ↩
- A Personal Statement on FTX — William MacAskill, EA Forum (November 12, 2022) ↩ ↩2 ↩3 ↩4
- Effective Altruism, Longtermism, and the Problem of Arbitrary Power — The Philosopher 1923 (2023) ↩
- Wrong Lessons from the FTX Catastrophe — EA Forum ↩
- The Toxic Ideology of Longtermism — Alice Crary, Radical Philosophy (2023) ↩ ↩2
- Why Effective Altruism and Longtermism Are Toxic Ideologies — Current Affairs (interview with Torres, May 2023) ↩
- Exclusive: Effective Altruist Leaders Were Warned About Sam Bankman-Fried Years Before FTX Collapsed — TIME (March 15, 2023) ↩ ↩2
- The Toxic Ideology of Longtermism — Alice Crary, Radical Philosophy (2023) ↩
- The End of the Future of Humanity Institute — Daily Nous (April 18, 2024) ↩
- Future of Humanity Institute 2005–2024: Final Report — EA Forum (April 17, 2024) ↩ ↩2 ↩3
- The Future of Humanity Institute Closes — Bioethics Observatory (June 2024) ↩
- Citation rc-5b40 ↩
- Nick Beckstead Is Leaving the Effective Ventures Boards — Eli Rose, EA Forum (September 6, 2023) ↩
- Global Priorities Institute — General Support Grant — Open Philanthropy (2023–2024) ↩ ↩2
- We're No Longer 'Pausing Most New Longtermist Funding Commitments' — Holden Karnofsky, Open Philanthropy EA Forum post (January 30, 2023) ↩ ↩2
- Our Progress in 2023 and Plans for 2024 — Open Philanthropy (March 27, 2024) ↩ ↩2 ↩3
- Observations on the Funding Landscape of EA and AI Safety — EA Forum (October 2, 2023) ↩ ↩2 ↩3
- What Will FTX's Collapse Mean for Global Health and Development? — Devex ↩
- Why Longtermism Is the World's Most Dangerous Secular Credo — Émile Torres, Aeon (October 19, 2021) ↩
- The Toxic Ideology of Longtermism — Alice Crary, Radical Philosophy (2023) ↩ ↩2 ↩3
- High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation — David Thorstad, Philosophy and Public Affairs 51(4): 373–412 (Fall 2023) ↩
- FTX, EA Principles, and the Longtermist EA Community — EA Forum ↩
- Back to Virtue: Effective Altruism After FTX — MercatorNet ↩
- Why Longtermism Is the World's Most Dangerous Secular Credo — Émile Torres, Aeon (October 19, 2021) ↩ ↩2
- The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence — Émile P. Torres and Timnit Gebru, First Monday (April 2024) ↩
- The Epistemic Challenge to Longtermism — Christian Tarsney, Synthese (2023) ↩ ↩2
- Philosophy for the Long Run: Introduction to the Symposium on Longtermism — Moral Philosophy and Politics, De Gruyter (2025); see also: Mistakes in the Moral Mathematics of Existential Risk — David Thorstad, Ethics 135(1): 122–150 (2024) ↩ ↩2 ↩3
- On the Fundamental Incoherence of Longtermism — Mind Your Metaphysics (Substack) ↩
- Centre for Effective Altruism — Longtermism ↩
-
Minimal and Expansive Longtermism — Hilary Greaves and Christian Tarsney, Oxford University Press (2025) — Minimal and Expansive Longtermism — Hilary Greaves and Christian Tarsney, Oxford University Press (2025) ↩ ↩2
-
What Will FTX's Collapse Mean for Global Health and Development? — Devex — What Will FTX's Collapse Mean for Global Health and Development? — Devex ↩
-
What the FTX Collapse Teaches Us About Ethics — Principia Advisory (March 2023) — What the FTX Collapse Teaches Us About Ethics — Principia Advisory (March 2023) ↩
-
Why Haven't We Seen a Promising Longtermist Intervention Yet? — EA Forum — Why Haven't We Seen a Promising Longtermist Intervention Yet? — EA Forum ↩
References
1. What the FTX Collapse Teaches Us About Ethics — Principia Advisory (March 2023) — principia-advisory.com
This article examines the ethical failures behind the FTX cryptocurrency exchange collapse and the role of effective altruism-adjacent thinking in rationalizing misconduct. It draws lessons about how consequentialist reasoning, poorly structured incentives, and lack of organizational accountability can lead to catastrophic ethical failures, with implications for AI safety culture and governance.
“As one of the most prominent effective altruists, he donated swathes of money to social and political causes.”
The source does not mention Bankman-Fried's congressional testimony, nor does it discuss the response of proponents of effective altruism to the allegations against him.
“The new CEO John J. Ray, who had been brought in to clean up FTX’s implosion, said that he had never seen “such a complete failure of corporate controls.””
The source mentions Ray's comment on corporate controls, but does not mention a comparison to Enron. The source does not mention Bankman-Fried's conviction on wire fraud and securities fraud.
An accessible introduction and critical examination of longtermism as a philosophical framework, exploring its core claims that future people matter morally and that shaping the long-term future is among the most important tasks humanity faces. The article discusses implications for policy and ethics, including connections to existential risk reduction and effective altruism.
“Longtermist thinking is responsible for the founding of various institutes such as the Future of Humanity Institute (FHI) and the Global Priorities Institute (GPI), as well as the Effective Altruism philosophical movement, with the EA movement now possessing $46 billion in dedicated funding (Torres, 2021).”
The claim mentions that the philosophy was developed most prominently by William MacAskill and Toby Ord within the Centre for Effective Altruism ecosystem, but the source only mentions that the philosophy was coined by William MacAskill and Toby Ord. The claim mentions that the institutional expression included the Future of Humanity Institute at Oxford and the Global Priorities Institute, but the source only mentions that longtermist thinking is responsible for the founding of the Future of Humanity Institute and the Global Priorities Institute. The claim mentions that the network of researchers and grantmakers were funded substantially through Open Philanthropy and Sam Bankman-Fried's FTX Future Fund, but the source does not mention Open Philanthropy or Sam Bankman-Fried's FTX Future Fund.
“Longtermist thinking is responsible for the founding of various institutes such as the Future of Humanity Institute (FHI) and the Global Priorities Institute (GPI), as well as the Effective Altruism philosophical movement, with the EA movement now possessing $46 billion in dedicated funding (Torres, 2021).”
The claim mentions that the EA movement had accumulated an estimated $46 billion in dedicated funding by 2021, but the source does not specify the year. The source also does not mention that much of the funding was directed toward existential risk research. The source does not mention 80,000 Hours.
This Devex article examines the fallout from FTX's bankruptcy on the effective altruism (EA) philanthropic ecosystem, particularly nonprofits in global health, development, and animal welfare that relied on FTX Future Fund grants. Sam Bankman-Fried's 'earning to give' strategy had made him one of EA's most prominent donors, and the collapse left many organizations scrambling for alternative funding while also damaging EA's broader credibility and reputation.
“But pandemic preparedness, an area Bankman-Fried backed, is a cause that falls under both longtermism and global health.”
The source does not mention AI safety research or its relevance to current harms from deployed systems and to hypothetical future catastrophe.
“Between the direct impact on nonprofits, and the indirect impact on a growing movement within philanthropy that has steered significant funding to people in low-resource settings, FTX’s collapse could jeopardize causes that have appealed to effective altruists, from animal welfare to global health.”
The claim that global health and development organizations funded by EA were uncertain whether their funding pipelines would survive the collapse is not explicitly stated in the article. The claim that roughly 70% of Open Philanthropy's total funding had gone toward global health and wellbeing, with approximately 30% toward longtermist areas is not mentioned in the article. The claim that the subsequent rebranding to 'Global Catastrophic Risks' further blurred the near-term/long-term distinction at the organizational level is not mentioned in the article.
Anja Kaspersen and Wendell Wallach critique longtermism as articulated by William MacAskill, arguing that while protecting future generations is intuitively appealing, the framework raises serious practical and ethical problems around trade-offs, fairness, and the concentration of decision-making power. The article questions who decides how much present generations sacrifice for speculative future threats and who bears those costs.
“Recently the philosopher William MacAskill, with his book What We Owe The Future , has been popularizing the idea that the fate of humanity should be our top moral priority.”
The source does not mention EA expanding its focus to incorporate longer-horizon concerns alongside near-term interventions such as global health and poverty alleviation. The source does not mention MacAskill coining the term "longtermism" in 2017. The source only mentions MacAskill's book, not Ord's.
Open Philanthropy announced the end of its November 2022 pause on longtermist funding, which had been triggered by FTX's collapse and declining Meta stock. To establish a new funding bar, the organization ranked nearly all grants made over 18 months and estimated sustainable annual spending levels over 20-50 years, enabling resumed grantmaking in AI safety, biosecurity, and EA community growth at a higher bar than previously applied.
“Since then, we’ve done some work to assess where our new funding bar should be, and we have created enough internal guidance that the pause no longer applies.”
The claim's date is incorrect: according to the source, the pause was lifted in late January 2023, and the source does not mention the organization establishing new grant assessment guidance at that time. The source does not mention Open Philanthropy directing over $750 million in grants across its portfolios in 2023. The source also does not mention Open Philanthropy renaming its "Longtermism" portfolio to "Global Catastrophic Risks (GCR)" in 2023–2024, or the reasons for the rebranding.
An introductory overview of longtermism as an ethical framework, presenting definitions from Will MacAskill and Toby Ord, and arguing that future people's moral status, their potentially vast numbers, and our ability to shape long-run outcomes make addressing existential risks a top priority. The page highlights risks like misaligned AGI and engineered pandemics as key focus areas for longtermist action.
“This view rests on the idea that future people matter morally, that there could be a very large number of future people, and that there are actions we can take now to affect how good or bad the future is.”
The claim attributes the arguments to MacAskill and Ord, but the source only mentions MacAskill and Ord separately, not jointly. The claim mentions 'current actions can reliably influence existential outcomes', but the source mentions 'actions we can take now to affect how good or bad the future is', which is not exactly the same as influencing existential outcomes.
“In his forthcoming book “What We Owe the Future,” Will MacAskill offers two distinct definitions of longtermism: Longtermism: the view that positively influencing the longterm future is a key moral priority of our time. Strong Longtermism: the view that positively influencing the longterm future is the key moral priority of our time.”
The source does not mention EA expanding its focus to incorporate longer-horizon concerns in the 2010s. The source does not explicitly credit MacAskill with coining the term 'longtermism' in 2017, although it does mention his book 'What We Owe the Future' and his definitions of longtermism.
7. The Epistemic Challenge to Longtermism — Christian Tarsney, Synthese (2023) — Springer (peer-reviewed)
Christian Tarsney examines the epistemic challenge to longtermism—the objection that long-term effects of present actions are too unpredictable to guide decision-making, even given the far future's astronomical importance. Using two simple models comparing longtermist and neartermist interventions, Tarsney finds that longtermism's case depends on either accepting minuscule probabilities of enormous payoffs (Pascalian fanaticism) or relying on non-obvious empirical assumptions about predictability. The analysis reveals that while expected value maximization may support longtermism, this conclusion is fragile and contingent on controversial premises about uncertainty and rational choice.
“Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations.”
The wiki claim mentions Thorstad's 2024 article in *Ethics* and a 2025 symposium introduction in *Moral Philosophy and Politics*, but this source only discusses Tarsney's work.
This EA Forum post argues that EA's core principles are indeterminate at the community level, making it unclear whether SBF genuinely violated them, since EA's safeguards against maximization misuse rely on external human judgment rather than internal constraints. The author proposes recentering the longtermist EA community around shared moral-epistemic virtues rather than explicit principles to better guard against similar failures.
“Back to EA: consequentialism is a majority view within EA, and its implications with respect to respecting common sense moral norms appear at least controversial, especially from a longtermist point of view.”
On November 10, 2022, the FTX Future Fund team (Nick Beckstead, Leopold Aschenbrenner, Avital Balwit, Ketan Ramakrishnan, and Will MacAskill) announced their resignation following FTX's collapse, citing fundamental questions about the legitimacy of FTX's business operations. They expressed deep regret that many committed grants to EA and AI safety projects would likely go unfulfilled and condemned any deception by FTX leadership.
“We were shocked and immensely saddened to learn of the recent events at FTX. Our hearts go out to the thousands of FTX customers whose finances may have been jeopardized or destroyed.”
10. The Future of Humanity Institute Closes — Bioethics Observatory (June 2024) — bioethicsobservatory.org
The Bioethics Observatory reports on the April 2024 closure of Oxford's Future of Humanity Institute (FHI), founded by Nick Bostrom in 2005. The piece covers the institute's focus on existential risk, AI, longtermism, and effective altruism, its controversial Silicon Valley backers, and the combination of bureaucratic disputes and personal scandals that led to its closure.
Émile Torres presents a critical philosophical and political argument against longtermism, contending that its prioritization of vast speculative future populations over present human welfare constitutes a dangerous secular ideology. The essay argues that longtermism's framing can justify ignoring or even causing present-day harms in service of maximizing long-run expected utility, and that its growing institutional funding amplifies these dangers.
William MacAskill publicly condemns the actions behind FTX's collapse, expressing outrage and shame over Sam Bankman-Fried's alleged misuse of customer funds. He argues that if fraud occurred, it represents a betrayal of EA's core principles of integrity and honesty, and warns against 'ends justify the means' reasoning that may have enabled such behavior.
“But if there was deception and misuse of funds, I am outraged, and I don’t know which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception.”
Professor Richard Pettigrew introduces a Leverhulme Trust-funded research project (2024–2027) critically examining the philosophical foundations of longtermism—the view that future people's welfare should dominate moral decision-making today. The project scrutinizes the core argument that the vast number of potential future people means their interests outweigh those of present people, questioning whether this reasoning survives philosophical scrutiny.
“‘The Foundations of Longtermism’ is a research project funded for three years by the Leverhulme Trust.”
The project at the University of Bristol was announced before 2024, as the article is dated in 2024 and discusses the project as already having received funding. The claim mentions multiple peer-reviewed articles published in *Ethics*, *Philosophy and Public Affairs*, and *Synthese*, but the source does not mention these publications.
14. Why Haven't We Seen a Promising Longtermist Intervention Yet? — Yarrow Bouchard, EA Forum (2025)
The author critically examines why longtermism, despite eight years of discussion since the term was coined in 2017, has not produced genuinely novel, promising, and actionable interventions. The post argues that core longtermist ideas predate the movement by decades or centuries, and that most proposed interventions either lack novelty, are already pursued for near-term reasons, or lack clear actionability.
This is the introduction to a 2025 academic symposium on longtermism published in Moral Philosophy and Politics, authored by Stefan Riedener. It frames longtermism as the view that positively influencing the long-term future is a key moral priority, outlines the core argument for the view, and sets the stage for critical philosophical examination of its foundations and implications.
“Many people have explored objections to longtermism: that we’re too clueless about the far future (Mogensen 2021; Tarsney 2023), that our actions don’t relevantly affect it (Schwitzgebel 2024), that standard arguments rely on questionable assumptions about risks or demographic development (Thorstad 2024, 2023), or on a dubiously fanaticist decision theory (Kosonen 2022), that longtermism is too demanding (Mogensen 2020), doesn’t have many revisionary real-life implications (Plant 2023), is too insouciant about current problems and the current political system (Crary 2023), and thus potentially dangerous (Singer 2021; Torres 2021), or that it threatens the integrity and meaningfulness of our lives (Riedener forthcoming).”
A critical left-wing analysis of William MacAskill's longtermism and Effective Altruism, using the FTX/Sam Bankman-Fried collapse as a lens to examine the philosophical and political contradictions of EA's techno-philanthropist ideology. The authors argue that longtermism's neglect of history and present injustice in favor of speculative futures reflects a fundamentally flawed and politically convenient moral framework.
Reports on the April 2024 closure of Oxford's Future of Humanity Institute (FHI), founded by Nick Bostrom in 2005, due to escalating administrative conflicts with Oxford's Faculty of Philosophy. The piece covers the institutional history, research contributions spanning existential risk, AI alignment, longtermism, and effective altruism, and the bureaucratic deterioration that led to hiring/fundraising freezes starting in 2020 and eventual shutdown.
“While FHI had achieved significant academic and policy impact, the final years were affected by a gradual suffocation by Faculty bureaucracy.”
The source does not mention the Governance of AI Program spinning out of the university in 2021. The source does not mention controversies surrounding FHI figures, specifically a 1996 email with racist content by Nick Bostrom or separate allegations of misconduct involving FHI-adjacent individuals. The source does not mention Open Philanthropy's grants of £1.6 million in 2017 and £13.3 million in 2018, or that a significant portion of the latter remained unspent due to the hiring freeze.
A New Statesman long-read examining the collapse of Sam Bankman-Fried's crypto empire and its implications for the effective altruism movement. The article, updated after his November 2023 fraud and money laundering conviction, questions whether EA's utilitarian philosophy enabled or encouraged the ethical failures at FTX. It explores whether the 'earn to give' ideology and ends-justify-means reasoning contributed to SBF's conduct.
“And yet, data analysis by the Economist found that by 2022, 40 per cent of effective altruism’s funding was directed towards longtermist causes.”
The source does not contain the direct quote attributed to Bankman-Fried about his utilitarianism. The source does not mention that the FTX Future Fund was explicitly organized around longtermist grant priorities. A minor discrepancy: the source states that by 2022, 40 per cent of effective altruism's funding was directed towards longtermist causes, not specifically EA's funding.
Anders Sandberg's retrospective on FHI's 19-year history documents the institute's research contributions, organizational lessons, and ultimate closure. The report reflects on what worked and what didn't in pioneering existential risk research, offering practical guidance for future organizations tackling neglected long-term global challenges.
“Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.”
The claim that Anders Sandberg described the process as "a gradual suffocation by Faculty bureaucracy" is not directly supported by the provided source. While the source mentions Sandberg's final report and refers to the increasing administrative headwinds within the Faculty of Philosophy, it doesn't explicitly quote him using that phrase. The claim that the Governance of AI Program spun out of the university in 2021 is not explicitly mentioned in the source. The source does not mention 'allegations of misconduct involving FHI-adjacent individuals'. The claim states Open Philanthropy made grants of £1.6 million in 2017 and £13.3 million in 2018. The source supports this, but the source does not mention that a significant portion of the latter remained unspent at the time of closure due to the hiring freeze, only that a large part of the grant remained unspent due to limited faculty administrative capacity for hiring and the subsequent hiring freezes it imposed.
This article examines how the collapse of FTX and Sam Bankman-Fried's fraud scandal damaged the credibility of Effective Altruism (EA) and longtermism as influential ideologies in the tech and AI safety communities. It explores the philosophical and practical entanglements between EA, longtermism, and the FTX catastrophe, and considers the reputational and institutional fallout for these movements.
“The collapse of FTX, the Bahamas-based cryptocurrency exchange founded by Sam Bankman-Fried, has caused more than just misery for investors and the implosion of the crypto industry. Perhaps most unexpectedly, the demise of the superstar financier and his company has led to the discrediting of two philosophical movements closely associated with Bankman-Fried – “effective altruism” and “long-termism”.”
The source does not mention that Bankman-Fried pledged the bulk of his fortune to effective altruist and longtermist causes. The source does not mention the FTX Future Fund or the amount of money it committed to longtermist researchers and organizations. The source does not mention that the funding became unrecoverable when the exchange filed for bankruptcy.
This post analyzes significant shifts in EA and AI safety funding as of October 2023, documenting the proliferation of new grantmaking bodies, funding gaps in EA infrastructure and AI safety, and the lasting impact of the FTX collapse on longtermist funding. It provides a snapshot of where money is flowing and where gaps exist, while acknowledging the analysis is preliminary and anecdotal.
“The Survival and Flourishing Fund has increased their giving in 2023. Longview Philanthropy expects to increase their giving in the years to come. Longview have moved >$55 million since their founding in 2018; their 2023 advising will be >$10 million; and they expect 2024 money moved to be greater than 2023.”
The claim states that the Long-Term Future Fund paid out approximately $5.36 million in grants between May 2023 and March 2024, but this information is not present in the source. The claim states that Google invested $500 million in Anthropic in October 2023, committing up to $1.5 billion total, but the source states that Amazon will invest up to $4 billion in Anthropic.
A TIME investigation revealing that prominent figures in the Effective Altruism (EA) community received warnings about Sam Bankman-Fried's ethical conduct and risky behavior years before the collapse of FTX, yet failed to act. The piece examines how EA's close relationship with SBF and FTX funding created conflicts of interest and governance failures. It raises broader questions about EA's oversight mechanisms and the dangers of prioritizing financial ends over ethical means.
“Sam Bankman-Fried and Will MacAskill weren’t just philosophical allies. They were old friends. The two met in 2013, when Bankman-Fried was still an undergrad at MIT. MacAskill convinced the young utilitarian math geek that he could maximize his impact by taking a high-paying finance job and giving his money away. Effective Altruists call this “earning to give.””
The article states that MacAskill convinced Bankman-Fried in 2013, not 2012, to pursue "earning to give."
“It’s not entirely clear how EA leaders reacted to the warnings. Sources familiar with the discussions told TIME that the concerns were downplayed, rationalized as typical startup squabbles, or dismissed as “he said-she said,” as two people put it.”
A critical essay arguing that Effective Altruism and longtermism are ideologically harmful frameworks that distort moral priorities, concentrate power among elites, and obscure present-day injustices in favor of speculative future concerns. The piece contends these movements provide philosophical cover for the wealthy to avoid structural change while feeling virtuous. It represents a left-leaning critique of EA's utilitarian calculus and longtermism's focus on existential risk.
“The EA movement, at least before the catastrophic collapse of FTX last November, had $46.1 billion in committed funding, an enormous amount of money that they could spend on their research projects.”
A profile of Sam Bankman-Fried as a contemporary utilitarian, examining his application of utilitarian ethics to earning money and donating it effectively through effective altruism. The piece explores how his philosophical commitments shaped his business decisions and philanthropic strategy, including his focus on existential risk reduction.
“SBF describes himself as “...a total, act, hedonistic/one level (as opposed to high and low pleasure), classical (as opposed to negative) utilitarian”.”
The source does not explicitly state that the FTX Future Fund was 'explicitly organized around longtermist grant priorities'. The source does not provide the exact statistic that 'approximately 40% of EA's funding was directed toward longtermist causes, much of it toward AI safety.'
Open Philanthropy's annual review covering their 2023 grantmaking activities and strategic priorities for 2024, with significant focus on AI safety funding. The report details how the organization is allocating resources across global catastrophic risks, biosecurity, and AI alignment, reflecting their evolving views on AI timelines and risk.
This 80,000 Hours article introduces and defends longtermism — the view that positively influencing the long-term future is among the most important moral priorities. It explains why the vast number of potential future people gives strong ethical weight to existential risk reduction and civilizational flourishing, and how this framing shapes career and cause prioritization.
This article critically examines the Effective Altruism movement in the wake of the FTX/Sam Bankman-Fried scandal, arguing that the utilitarian calculus underlying EA—including 'longtermism' and AI existential risk focus—is philosophically flawed. It advocates for a return to virtue ethics as a more reliable moral foundation for charitable and philanthropic action.
Alice Crary critiques longtermism as a morally and politically dangerous ideology, arguing it prioritizes speculative future beings over present harms, reinforces techno-utopian power structures, and provides ideological cover for inaction on urgent injustices. The piece engages with Effective Altruism and figures like Nick Bostrom and William MacAskill, contending that longtermism's ethical framework is fundamentally flawed and serves elite interests.
“Benjamin Todd, co-founder of the EA-affiliate 80,000 hours, estimated in summer 2021 that total pledges to EA had reached forty-six billion dollars.”
The source does not explicitly state that the movement's funding apparatus channeled resources into 'EA institution-building and movement growth,' but it does mention funding for EA institutions. The source states that Benjamin Todd estimated total pledges to EA had reached $46 billion, not that Émile Torres observed this. The source does not explicitly state that billions remained dedicated to longtermist efforts even after the FTX collapse, but it does say that the longtermist enterprise remains well-funded and well-positioned to repair its reputation.
“Longtermism deflects from EA’s wonted attention to current human and animal suffering. It defends in its place a concern for the wellbeing of the potentially trillions of humans who will live in the long-term future, and, taking the sheer number of prospective people to drown out current moral problems, exhorts us to regard threats to humanity’s continuation as a moral priority, if not the moral priority.”
The claim mentions David Thorstad and his argument in *Philosophy and Public Affairs*, but this source only discusses Alice Crary's views on longtermism. The claim states that Alice Crary wrote in *Radical Philosophy* in 2023, arguing that longtermists give existential threats such weight that they deprioritize actual suffering in the world we live in, and that the FTX collapse brought this structural feature of the ideology into public view. While the source supports the first part of this statement, it does not explicitly state that the FTX collapse brought this structural feature of the ideology into public view. It mentions the FTX collapse in relation to a change in the public mood and the scrutiny of longtermism's ties to FTX, but not directly as revealing the prioritization of existential threats over current suffering.
“That includes Peter Singer, whose contributions to utilitarian ethics were EA’s original inspiration. Singer is skeptical about whether humanity is indeed at a uniquely portentous moment in history, and he de-emphasises existential risk in a manner that indicates impatience with longtermists’ commitment to the posture they call non-neutrality.”
Nick Beckstead resigned from the boards of Effective Ventures UK and US on August 23, 2023, following over nine months of recusal from board matters related to the FTX collapse. His inability to contribute meaningfully due to the recusal made resignation the appropriate course. Beckstead had been a founding board member instrumental in establishing Effective Ventures and its constituent projects over 14 years.
“On 23rd August, Nick Beckstead stepped down from the boards of Effective Ventures UK and Effective Ventures US.”
Wikipedia's overview of longtermism, the ethical view that positively influencing the long-term future is a top moral priority. It covers the philosophical foundations, key proponents, criticism, and relationship to existential risk reduction and effective altruism.
A philosophical critique of Effective Altruism and longtermism, arguing that these movements risk concentrating arbitrary power in the hands of a small technocratic elite. The piece examines how longtermist justifications can be used to rationalize undemocratic decision-making in the name of humanity's long-term future. It raises concerns about the political and ethical implications of tech-sector philanthropy guided by these ideologies.
“This defence admits that they were duped by an unethical huckster, but denies that there is any serious flaw in the movement itself. It is simply an unhappy coincidence that Samuel Bankman-Fried was associated with them.”
This EA Forum post argues that the community is overcorrecting in response to the FTX collapse, warning against using SBF's fraud to discredit longtermism, ambitious EA strategies, or earning-to-give. The author distinguishes between SBF's criminal conduct and the philosophical positions he espoused, attributing the disaster to failures of transparency and ethics rather than to EA's core strategies.
Torres and Gebru critique the ideological cluster they term 'TESCREAL' (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism), arguing these movements share eugenic roots and use AGI as a vehicle for utopian promises that risk marginalizing present-day populations. The paper contends that this ideological bundle disproportionately shapes AI safety and development discourse, embedding historically problematic assumptions about human optimization and population control into mainstream AI governance conversations.
“In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century.”
Open Philanthropy awarded a general support grant to the Global Priorities Institute (GPI) at Oxford University for 2023–2024. GPI conducts academic research on global priorities, including existential risk, longtermism, and the philosophical and economic foundations of effective altruism. This grant reflects Open Philanthropy's ongoing support for foundational research informing rational resource allocation toward the most important global challenges.
David Thorstad challenges the influential 'astronomical value' argument that existential risk mitigation deserves overwhelming priority due to the enormous potential future population. He argues that the expected value calculations underlying this claim are undermined by deep uncertainty and risk-aversion considerations, and that the case for prioritizing x-risk work over near-term interventions is weaker than commonly assumed.
This blog post argues that longtermism is mathematically incoherent because its core method—multiplying tiny probabilities of future events by astronomically large numbers of potential future people—becomes logically unstable when applied universally as an ethical system. The author contends that while consequentialist reasoning has practical utility for specific decisions, extending it into a comprehensive longtermist framework generates paradoxes that undermine its own foundations.
A Vice explainer published in November 2022 examining longtermism as an ideological framework popular among tech elites, using the FTX collapse as a lens to critique its philosophical assumptions and practical implications. The article breaks down longtermist ideas about prioritizing future generations and existential risk reduction while questioning whether such reasoning can justify harmful short-term actions.
“One immediate consequence is that FTX’s Future Fund—which provided funds to longtermist causes—will no longer be able to disperse the $160 million committed to a number of researchers and organizations over the next few years.”
This chapter in an Oxford University Press volume, by Hilary Greaves and Christian Tarsney, systematically examines longtermism, distinguishing between minimal versions (giving significant weight to future generations) and expansive versions (prioritizing the long-run future above all else). The work provides philosophical foundations for evaluating how much moral weight should be assigned to future people and the implications for policy and action. It is a key academic treatment of longtermism as it relates to existential risk reduction and AI safety priorities.
“This chapter highlights the gap between the minimal form of longtermism established by standard arguments and this more expansive view, and considers (without reaching any firm conclusions) which form of longtermism is more plausible.”
Wikipedia's comprehensive overview of Effective Altruism (EA), a philosophical and social movement that uses evidence and reasoning to determine the most effective ways to benefit others. The article covers EA's history, core principles, major cause areas (including global poverty, animal welfare, and existential risk), and prominent organizations and figures. It also addresses criticisms and controversies surrounding the movement.