
Longtermist Funders

Overview of major funders supporting AI safety, existential risk reduction, and longtermist causes. These organizations and individuals collectively provide hundreds of millions of dollars annually to research, policy, and field-building efforts aimed at ensuring beneficial AI development.

Overview

Longtermist funders provide critical financial support for organizations working on AI safety, existential risk reduction, and related cause areas. The funding landscape is characterized by a relatively small number of major philanthropists and foundations that provide the majority of resources, with additional support from regranting programs and smaller donors.

The field has experienced significant growth in funding over the past decade, though it remains small relative to overall AI development spending. A major shift occurred in 2022-2023, when the FTX collapse eliminated a significant planned funding source; other funders have since partially filled the gap.

Comprehensive Funder Comparison

By Annual Giving and Focus Area

| Funder | Annual Giving | AI Safety | Global Health | Science | Education | Other |
|---|---|---|---|---|---|---|
| Gates Foundation | ≈$7B | Minimal | $4B | $1B | $500M | $1B |
| Wellcome Trust | ≈$1.5B | Minimal | $500M | $800M | — | $200M |
| Chan Zuckerberg Initiative | ≈$1B | $0 | $200M | $800M | $30M | — |
| Howard Hughes Medical Institute | ≈$1B | $0 | Minimal | $1B | — | — |
| Coefficient Giving | ≈$700M | $65M | $300M | $50M | — | $285M |
| MacArthur Foundation | ≈$260M | Minimal | — | — | $50M | $200M |
| Hewlett Foundation | ≈$473M | $8M | — | — | $100M | $365M |
| Survival and Flourishing Fund | ≈$35M | $30M | — | — | — | $5M |
| Schmidt Futures | ≈$200M | $5M | — | $100M | $50M | $45M |
| Long-Term Future Fund | ≈$5-10M | $5-10M | — | — | — | — |
| Manifund | ≈$2-5M | $1-3M | — | — | — | $1-2M |

(Dashes indicate no significant reported giving in that category.)

Key Individual Philanthropists

| Person | Net Worth | Annual Giving | AI Safety | Lifetime Total | Primary Vehicle |
|---|---|---|---|---|---|
| Bill Gates | ≈$130B | ≈$5B | Minimal | $50B+ | Gates Foundation |
| Elon Musk (Funder) | ≈$400B | ≈$250M | Minimal | ≈$8B | Musk Foundation |
| Mark Zuckerberg | ≈$200B | ≈$1B | $0 | ≈$8B | CZI |
| Dustin Moskovitz | ≈$17B | ≈$700M | $65M | $4B+ | Coefficient Giving |
| MacKenzie Scott | ≈$35B | ≈$3-4B | Unknown | $17B+ | Direct giving |
| Jaan Tallinn | ≈$500M | ≈$50M | $40M+ | $100M+ | SFF, direct |
| Vitalik Buterin (Funder) | ≈$500M | ≈$50M | $15M+ | $800M+ | FLI ($665M), MIRI, Balvi |
| Eric Schmidt | ≈$25B | ≈$200M | $5M | $1B+ | Schmidt Futures |

AI Safety Funding Concentration

The AI safety funding landscape is highly concentrated among a few donors:

| Funder | AI Safety (Annual) | % of Total AI Safety Funding |
|---|---|---|
| Coefficient Giving | $65M | ≈55% |
| Survival and Flourishing Fund | $30M | ≈25% |
| Jaan Tallinn (direct) | $10M | ≈8% |
| Vitalik Buterin | $5-15M | ≈5-10% |
| Long-Term Future Fund | $5-10M | ≈5% |
| Other sources | $5-10M | ≈5% |
| Total estimated | ≈$120-150M/year | 100% |

Untapped Philanthropic Potential

Several major philanthropists have significant resources but minimal AI safety engagement:

| Person | Net Worth | Current AI Safety | Potential (1% of net worth) |
|---|---|---|---|
| Elon Musk | $400B | ≈$0 | $4B/year |
| Mark Zuckerberg | $200B | $0 | $2B/year |
| Bill Gates | $130B | Minimal | $1.3B/year |
| Larry Ellison | $230B | $0 | $2.3B/year |
| Jeff Bezos | $200B | $0 | $2B/year |

If these five individuals allocated just 1% of their net worth annually to AI safety, it would represent $11.6B/year — roughly 80x current total funding.
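As a quick sanity check on that figure, here is a minimal Python sketch. The net-worth values come from the table above; the ≈$150M baseline for current AI safety funding is taken from the estimate later on this page.

```python
# Hypothetical giving at 1% of net worth per year, using the
# net-worth figures from the table above (values in $B).
net_worths = {
    "Elon Musk": 400,
    "Mark Zuckerberg": 200,
    "Bill Gates": 130,
    "Larry Ellison": 230,
    "Jeff Bezos": 200,
}

total_at_1pct = sum(nw * 0.01 for nw in net_worths.values())  # 11.6 ($B/year)

# Current AI safety funding, upper end of the ≈$120-150M/year estimate ($B).
current_ai_safety = 0.15

print(f"Total at 1%: ${total_at_1pct:.1f}B/year")
# ≈77x, i.e. the "roughly 80x" quoted above.
print(f"≈{total_at_1pct / current_ai_safety:.0f}x current AI safety funding")
```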

AI Safety Funders (Detailed)

| Organization | Type | Annual Giving (Est.) | Primary Focus | Key Grantees |
|---|---|---|---|---|
| Coefficient Giving | Foundation | $65M (AI safety) | Technical alignment, governance, evals | MIRI, Redwood, METR, GovAI |
| Survival and Flourishing Fund | Donor Lottery | $30M | AI safety, x-risk | MIRI, ARC Evals, SERI, CAIS |
| Long-Term Future Fund | Regranting | $5-10M | AI safety, x-risk research | Individual researchers, small orgs |
| Manifund | Regranting Platform | $2-5M | EA causes broadly | Community projects |

Non-AI-Safety Major Funders

| Organization | Type | Annual Giving | Focus Areas | AI Safety |
|---|---|---|---|---|
| Gates Foundation | Foundation | $7B | Global health, poverty, education | Minimal |
| Wellcome Trust | Foundation | $1.5B | Health research, science | Minimal |
| Chan Zuckerberg Initiative | LLC | $1B | AI-biology, disease cures | $0 |
| Hewlett Foundation | Foundation | $473M | Environment, democracy, education | $8M (cybersecurity) |
| MacArthur Foundation | Foundation | $260M | Climate, justice, nuclear risk | Minimal |
| Schmidt Futures | LLC | $200M | Science, AI applications, talent | $5M |

AI Safety Funding Landscape

(Diagram of the AI safety funding landscape; not reproduced here.)

Broader Philanthropy Landscape (For Context)

(Diagram of the broader philanthropy landscape; not reproduced here.)

The Scale Gap

| Category | Annual Funding | Notes |
|---|---|---|
| AI Safety (total) | ≈$120-150M | Highly concentrated |
| Gates Foundation alone | ≈$7,000M | 50x AI safety total |
| AI capabilities (industry) | ≈$50,000M+ | 400x AI safety total |
| Global philanthropy | ≈$500,000M | 4,000x AI safety total |

Pending Major Funding Sources

Anthropic-Derived Capital

Anthropic (Funder) represents potentially the largest future source of longtermist philanthropic capital. At Anthropic's current $350B valuation:

| Source | Estimated Value | EA Likelihood | Notes |
|---|---|---|---|
| Founder pledges (7 founders, 80%) | $39-59B | 2/7 strongly EA-aligned | Only Dario & Daniela have documented EA connections |
| Jaan Tallinn stake | $2-6B (conservative) | Very high | Series A lead investor |
| Dustin Moskovitz stake | $3-9B | Certain | $500M+ already in nonprofit |
| Employee pledges + matching | $20-40B | High (in DAFs) | Historical 3:1 matching reduced to 1:1 for new hires |
| Total risk-adjusted | $25-70B | — | Wide range reflects cause allocation uncertainty |

Key uncertainties:

  • Only 2/7 founders have documented strong EA connections—71% of founder equity may go to non-EA causes
  • Matching program reduced from 3:1 at 50% to 1:1 at 25% for new employees
  • IPO timeline: 2026-2027 expected; capital deployment likely 2027-2035

For comparison, this $25-70B range represents 170-470x current annual AI safety funding of ≈$150M. Even if only 10% ultimately reaches EA causes, it would still be transformative.
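A back-of-envelope Python sketch of this arithmetic follows. The combined founder equity share is not disclosed; the 14-21% range below is an assumption inferred from the table's $39-59B figure. The other inputs come straight from the text.

```python
# Back-of-envelope sketch of the Anthropic pledge arithmetic above.
# FOUNDER_EQUITY is an assumption inferred from the table's $39-59B
# founder-pledge figure; it is not a disclosed number.
VALUATION_B = 350              # Anthropic valuation, $B
PLEDGE_FRACTION = 0.80         # founders' pledged share of their equity
FOUNDER_EQUITY = (0.14, 0.21)  # assumed combined founder stake (low, high)

pledge_low = VALUATION_B * FOUNDER_EQUITY[0] * PLEDGE_FRACTION   # ≈ $39B
pledge_high = VALUATION_B * FOUNDER_EQUITY[1] * PLEDGE_FRACTION  # ≈ $59B
print(f"Founder pledges: ${pledge_low:.0f}B-${pledge_high:.0f}B")

# Multiples quoted in the text for the full $25-70B risk-adjusted range,
# against ≈$150M/year of current AI safety funding.
CURRENT_AI_SAFETY_B = 0.15
for total in (25, 70):
    print(f"${total}B ≈ {total / CURRENT_AI_SAFETY_B:.0f}x current annual funding")
```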

See Anthropic (Funder) for comprehensive analysis.

OpenAI Foundation

The OpenAI Foundation holds 26% of OpenAI, worth approximately $130B at current valuations (implying an overall OpenAI valuation of roughly $500B). Unlike Anthropic's pledge-based model, the Foundation has direct legal control over these assets. Cause allocation is uncertain: the Foundation's stated mission focuses on "safe AGI," but specific philanthropic priorities are undisclosed.

Recent Trends

2024-2026 Developments:

  • Coefficient Giving launched $40M AI Safety Request for Proposals (January 2025)
  • SFF allocated $34.33M, with 86% going to AI-related projects
  • Coefficient Giving (formerly Open Philanthropy) rebranded in November 2025
  • LTFF continued steady grantmaking at ≈$5M annually
  • Anthropic founders announced 80% donation pledges (January 2026)

Post-FTX Landscape:

  • Future Fund's collapse eliminated ≈$160M in committed grants
  • Some organizations faced funding crises; others found alternative support
  • Field-wide diversification of funding sources

Related Pages

Analysis

  • Elon Musk (Funder)

Concepts

  • Center for AI Safety
  • Anthropic (Funder)
  • OpenAI Foundation
  • Dustin Moskovitz (AI Safety Funder)
  • Jaan Tallinn
  • Manifund

Organizations

  • Vitalik Buterin (Funder)