Longtermist Funders (Overview)

Overview

Longtermist funders provide critical financial support for organizations working on AI safety, existential risk reduction, and related cause areas. A relatively small number of major philanthropists and foundations supply the majority of resources, supplemented by regranting programs and smaller donors.

Funding has grown substantially over the past decade, though it remains small relative to overall AI development spending. Major shifts occurred in 2022-2023, when the FTX collapse eliminated a significant planned funding source; other funders have since partially filled the gap.

Comprehensive Funder Comparison

By Annual Giving and Focus Area

| Funder | Annual Giving | AI Safety | Global Health | Science | Education | Other |
|---|---|---|---|---|---|---|
| Gates Foundation | ≈$7B | Minimal | $4B | $1B | $500M | $1B |
| Wellcome Trust | ≈$1.5B | Minimal | $500M | $800M | $200M | — |
| Chan Zuckerberg Initiative | ≈$1B | $0 | $200M | $800M | $30M | — |
| Howard Hughes Medical Institute | ≈$1B | $0 | Minimal | $1B | — | — |
| Coefficient Giving | ≈$700M | $65M | $300M | $50M | — | $285M |
| MacArthur Foundation | ≈$260M | Minimal | $50M | — | — | $200M |
| Hewlett Foundation | ≈$473M | $8M | — | — | $100M | $365M |
| Survival and Flourishing Fund | ≈$35M | $30M | — | — | — | $5M |
| Schmidt Futures | ≈$200M | $5M | — | $100M | $50M | $45M |
| Long-Term Future Fund | ≈$5-10M | $5-10M | — | — | — | — |
| Manifund | ≈$2-5M | $1-3M | — | — | — | $1-2M |

Key Individual Philanthropists

| Person | Net Worth | Annual Giving | AI Safety | Lifetime Total | Primary Vehicle |
|---|---|---|---|---|---|
| Bill Gates | ≈$130B | ≈$5B | Minimal | $50B+ | Gates Foundation |
| Elon Musk (Funder) | ≈$400B | ≈$250M | Minimal | ≈$8B | Musk Foundation |
| Mark Zuckerberg | ≈$200B | ≈$1B | $0 | ≈$8B | CZI |
| Dustin Moskovitz | ≈$17B | ≈$700M | $65M | $4B+ | Coefficient Giving |
| MacKenzie Scott | ≈$35B | ≈$3-4B | Unknown | $17B+ | Direct giving |
| Jaan Tallinn | ≈$500M | ≈$50M | $40M+ | $100M+ | SFF, direct |
| Vitalik Buterin (Funder) | ≈$500M | ≈$50M | $15M+ | $800M+ | FLI ($665M), MIRI, Balvi |
| Eric Schmidt | ≈$25B | ≈$200M | $5M | $1B+ | Schmidt Futures |

AI Safety Funding Concentration

The AI safety funding landscape is highly concentrated among a few donors:

| Funder | AI Safety (Annual) | % of Total AI Safety Funding |
|---|---|---|
| Coefficient Giving | $65M | ≈55% |
| Survival and Flourishing Fund | $30M | ≈25% |
| Jaan Tallinn (direct) | $10M | ≈8% |
| Vitalik Buterin | $5-15M | ≈5-10% |
| Long-Term Future Fund | $5-10M | ≈5% |
| Other sources | $5-10M | ≈5% |
| Total estimated | ≈$120-150M/year | 100% |
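
The percentage column is consistent with dividing each funder's annual giving by the low end of the ≈$120-150M total. A minimal sketch (hypothetical Python, using only figures from the table above):

```python
# Shares in the concentration table, computed against the low end
# of the ~$120-150M/year total; results match the table's ~55% / ~25% / ~8%.
total_low = 120  # $M/year

annual = {
    "Coefficient Giving": 65,
    "Survival and Flourishing Fund": 30,
    "Jaan Tallinn (direct)": 10,
}

for funder, amount in annual.items():
    print(f"{funder}: {amount / total_low:.0%}")
# -> 54%, 25%, 8%
```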

Untapped Philanthropic Potential

Several major philanthropists have significant resources but minimal AI safety engagement:

| Person | Net Worth | Current AI Safety | Potential (1% of net worth) |
|---|---|---|---|
| Elon Musk | $400B | ≈$0 | $4B/year |
| Mark Zuckerberg | $200B | $0 | $2B/year |
| Bill Gates | $130B | Minimal | $1.3B/year |
| Larry Ellison | $230B | $0 | $2.3B/year |
| Jeff Bezos | $200B | $0 | $2B/year |

If these five individuals allocated just 1% of their net worth annually to AI safety, it would represent $11.6B/year — roughly 80x current total funding.
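
The $11.6B figure is direct arithmetic on the table's net-worth column. A minimal sketch reproducing it (hypothetical Python; the ≈$150M baseline is the estimate cited elsewhere on this page):

```python
# Back-of-the-envelope check of the 1%-of-net-worth scenario,
# using the net-worth figures from the table above (all in $B).
net_worth = {
    "Elon Musk": 400,
    "Mark Zuckerberg": 200,
    "Bill Gates": 130,
    "Larry Ellison": 230,
    "Jeff Bezos": 200,
}

annual_at_1pct = {person: nw * 0.01 for person, nw in net_worth.items()}
total = sum(annual_at_1pct.values())  # 11.6 ($B/year)

current_ai_safety = 0.15  # ~$150M/year, upper end of the estimate
print(f"Total at 1%: ${total:.1f}B/year")
print(f"Multiple of current funding: {total / current_ai_safety:.0f}x")
# -> ~77x against the $150M baseline, i.e. roughly 80x
```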

AI Safety Funders (Detailed)

| Organization | Type | Annual Giving (Est.) | Primary Focus | Key Grantees |
|---|---|---|---|---|
| Coefficient Giving | Foundation | $65M AI safety | Technical alignment, governance, evals | MIRI, Redwood, METR, GovAI |
| Survival and Flourishing Fund | Donor Lottery | $30M | AI safety, x-risk | MIRI, ARC Evals, SERI, CAIS |
| Long-Term Future Fund | Regranting | $5-10M | AI safety, x-risk research | Individual researchers, small orgs |
| The Foundation Layer Fund | Donor-Advised | $70M+ (cumulative, 100+ grants) | Alignment, nonproliferation, defensive tech, power distribution, talent | Broad AI safety ecosystem |
| AI Safety Tactical Opportunities Fund (AISTOF) | Pooled Fund | $30M+ (cumulative, 150+ grants) | Emerging opportunities across governance, alignment, evals | Rapid-response grantmaking |
| Manifund | Regranting Platform | $2-5M | EA causes broadly | Community projects |

Non-AI-Safety Major Funders

| Organization | Type | Annual Giving | Focus Areas | AI Safety |
|---|---|---|---|---|
| Gates Foundation | Foundation | $7B | Global health, poverty, education | Minimal |
| Wellcome Trust | Foundation | $1.5B | Health research, science | Minimal |
| Chan Zuckerberg Initiative | LLC | $1B | AI-biology, disease cures | $0 |
| Hewlett Foundation | Foundation | $473M | Environment, democracy, education | $8M (cybersecurity) |
| MacArthur Foundation | Foundation | $260M | Climate, justice, nuclear risk | Minimal |
| Schmidt Futures | LLC | $200M | Science, AI applications, talent | $5M |

AI Safety Funding Landscape

```mermaid
flowchart TD
  subgraph AISafetyDonors["AI Safety Donors (≈120M/year)"]
      DM[Dustin Moskovitz<br/>65M/year]
      JT[Jaan Tallinn<br/>50M/year]
      VB[Vitalik Buterin<br/>15M/year]
  end

  subgraph AISafetyVehicles["AI Safety Vehicles"]
      CG[Coefficient Giving<br/>65M AI safety]
      SFF[SFF<br/>30M]
      LTFF[LTFF<br/>5-10M]
      FLF[Foundation Layer Fund<br/>70M+ cumulative]
      MF[Manifund<br/>2-5M]
  end

  subgraph Recipients["AI Safety Recipients"]
      RESEARCH[Research Orgs<br/>MIRI, Redwood, METR]
      POLICY[Policy & Governance<br/>GovAI, CAIS, RAND]
      FIELD[Field Building<br/>80K, Atlas, SERI]
      EVALS[Evaluations<br/>METR, Epoch]
  end

  DM --> CG
  JT --> SFF
  VB -->|Direct| RESEARCH
  CG --> RESEARCH
  CG --> POLICY
  CG --> EVALS
  SFF --> RESEARCH
  SFF --> POLICY
  LTFF --> RESEARCH
  LTFF --> FIELD
  FLF --> RESEARCH
  FLF --> POLICY
  FLF --> FIELD
  MF --> FIELD

  style AISafetyDonors fill:#e6f3ff
  style AISafetyVehicles fill:#ccffcc
  style Recipients fill:#ffffcc
```

Broader Philanthropy Landscape (For Context)

```mermaid
flowchart TD
  subgraph MegaDonors["Mega-Donors (Minimal AI Safety)"]
      GATES[Bill Gates<br/>130B NW, 5B/year]
      MUSK[Elon Musk<br/>400B NW, 250M/year]
      ZUCK[Mark Zuckerberg<br/>200B NW, 1B/year]
      SCOTT[MacKenzie Scott<br/>35B NW, 4B/year]
  end

  subgraph MegaFoundations["Major Foundations"]
      GATESF[Gates Foundation<br/>7B/year]
      CZI[CZI<br/>1B/year]
      WELLCOME[Wellcome Trust<br/>1.5B/year]
      HEWLETT[Hewlett<br/>473M/year]
  end

  subgraph NonAIFocus["Primary Focus Areas"]
      HEALTH[Global Health<br/>5B+/year]
      SCIENCE[Science<br/>3B+/year]
      CLIMATE[Climate<br/>500M+/year]
      EDU[Education<br/>1B+/year]
  end

  GATES --> GATESF
  ZUCK --> CZI
  GATESF --> HEALTH
  GATESF --> SCIENCE
  CZI --> SCIENCE
  WELLCOME --> HEALTH
  WELLCOME --> SCIENCE
  HEWLETT --> CLIMATE
  HEWLETT --> EDU

  style MegaDonors fill:#ffe6e6
  style MegaFoundations fill:#fff0cc
  style NonAIFocus fill:#e6ffe6
```

The Scale Gap

| Category | Annual Funding | Notes |
|---|---|---|
| AI Safety (total) | ≈$120-150M | Highly concentrated |
| Gates Foundation alone | ≈$7,000M | 50x AI safety total |
| AI capabilities (industry) | ≈$50,000M+ | 400x AI safety total |
| Global philanthropy | ≈$500,000M | 4,000x AI safety total |
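
The multiples in the Notes column follow from dividing each category's annual funding by the AI safety baseline. A minimal sketch (hypothetical Python, using the table's midpoint baseline):

```python
# Each category's annual funding divided by the ~$120-150M AI safety baseline.
baseline = 135  # $M/year, midpoint of the $120-150M estimate

categories = {
    "Gates Foundation alone": 7_000,       # $M/year
    "AI capabilities (industry)": 50_000,
    "Global philanthropy": 500_000,
}

for name, annual in categories.items():
    print(f"{name}: ~{annual / baseline:.0f}x AI safety total")
# -> ~52x, ~370x, ~3704x (rounded in the table to 50x, 400x, 4,000x)
```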

Pending Major Funding Sources

Anthropic-Derived Capital

Anthropic (Funder) represents potentially the largest future source of longtermist philanthropic capital. At Anthropic's current $350B valuation:

| Source | Estimated Value | EA Likelihood | Notes |
|---|---|---|---|
| Founder pledges (7 founders, 80%) | $39-59B | 2/7 strongly EA-aligned | Only Dario & Daniela have documented EA connections |
| Jaan Tallinn stake | $2-6B (conservative) | Very high | Series A lead investor |
| Dustin Moskovitz stake | $3-9B | Certain | $500M+ already in nonprofit |
| Employee pledges + matching | $20-40B | High (in DAFs) | Historical 3:1 matching reduced to 1:1 for new hires |
| Total risk-adjusted | $25-70B | — | Wide range reflects cause allocation uncertainty |

Key uncertainties:

  • Only 2/7 founders have documented strong EA connections—71% of founder equity may go to non-EA causes
  • Matching program reduced from 3:1 at 50% to 1:1 at 25% for new employees
  • IPO timeline: 2026-2027 expected; capital deployment likely 2027-2035

For comparison, this $25-70B range represents 170-470x current annual AI safety funding of ≈$150M. Even if only 10% ultimately reaches EA causes, it would still be transformative.
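
The 170-470x comparison and the 10% scenario are both direct arithmetic on the risk-adjusted range; a minimal check (hypothetical Python):

```python
# Risk-adjusted Anthropic-derived capital vs. current annual AI safety funding.
low, high = 25_000, 70_000      # $M, the $25-70B risk-adjusted range
annual_ai_safety = 150          # $M/year, the ~$150M figure cited above

print(f"{low / annual_ai_safety:.0f}x to {high / annual_ai_safety:.0f}x")
# -> 167x to 467x, rounded in the text to 170-470x

# The 10% flow-through scenario mentioned above:
print(f"${low * 0.10 / 1000:.1f}B to ${high * 0.10 / 1000:.1f}B")
# -> $2.5B to $7.0B, still ~17-47x current annual funding
```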

See Anthropic (Funder) for comprehensive analysis.

OpenAI Foundation

The OpenAI Foundation holds 26% of OpenAI, worth approximately $130B at current valuations. Unlike Anthropic's pledge-based model, the Foundation has direct legal control over these assets. Cause allocation is uncertain—the Foundation's stated mission focuses on "safe AGI" but specific philanthropic priorities are undisclosed.

2024-2026 Developments:

  • Coefficient Giving launched $40M AI Safety Request for Proposals (January 2025)
  • SFF allocated $34.33M, with 86% going to AI-related projects
  • Coefficient Giving (formerly Open Philanthropy) rebranded in November 2025
  • LTFF continued steady grantmaking at ≈$5M annually
  • Anthropic founders announced 80% donation pledges (January 2026)
  • The Foundation Layer launched (early 2026) — a comprehensive philanthropic guide by Tyler John (Effective Institutions Project) synthesizing five years of AI safety advisory into a donor guidebook, covering alignment, nonproliferation, defensive tech, power distribution, and talent

Post-FTX Landscape:

  • Future Fund's collapse eliminated ≈$160M in committed grants
  • Some organizations faced funding crises; others found alternative support
  • Field-wide diversification of funding sources

References

The Foundation Layer (Tyler John, Effective Institutions Project): a comprehensive guide designed to persuade major philanthropists to fund AI safety work. It outlines AGI timelines, three categories of existential risk (loss of control, malicious use, power concentration), and proposes a five-pillar philanthropic strategy covering alignment science, nonproliferation, defensive technology, power distribution, and talent mobilization.

Related Wiki Pages

Analysis

Anthropic (Funder) · Elon Musk (Funder)

Organizations

MacArthur Foundation · Coefficient Giving · Schmidt Futures · William and Flora Hewlett Foundation · Survival and Flourishing Fund · Vitalik Buterin (Funder)

Other

Jaan Tallinn · Dustin Moskovitz · Mark Zuckerberg

Concepts

EA Funding Absorption Capacity