Updated 2026-03-20
Coefficient Giving

Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, with 68% going to evaluations/benchmarking, and launched a $40M Technical AI Safety RFP in 2025 covering 21 research areas with 2-week EOI response times.

Type: Funder
Founded: 2017
Location: San Francisco, CA
Funding: ≈$50M/year to AI safety (2024)
Related People: Nick Beckstead, Luke Muehlhauser

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Scale | Dominant | $4B+ total grants; ≈$46M AI safety in 2023 |
| Structure | 13 cause-specific funds | Multi-donor pooled funds since Nov 2025 rebrand |
| AI Safety Focus | Leading funder | $336M+ to AI safety since 2014; ≈60% of external AI safety funding |
| Application Model | Rolling RFPs + regranting | 300-word EOI, 2-week response; supports platforms like Manifund |
| Transparency | High | Public grants database, annual progress reports |
| Key Funders | Good Ventures (primary) | Dustin Moskovitz & Cari Tuna; expanding to multi-donor model |

Organization Details

| Attribute | Details |
|---|---|
| Full Name | Coefficient Giving (formerly Open Philanthropy) |
| Type | Philanthropic advising and funding organization |
| Legal Structure | LLC (independent since 2017) |
| Founded | 2014 (as GiveWell outgrowth); 2017 (independent); 2025 (rebranded) |
| Total Grants | $4+ billion (as of June 2025) |
| AI Safety Grants | $336+ million (≈12% of total) |
| 2024 AI Safety Spend | ≈$50 million committed |
| Leadership | Alexander Berger (CEO), Holden Karnofsky (Board) |
| Location | San Francisco, California |
| Website | coefficientgiving.org |
| Grants Database | coefficientgiving.org/funds |

Overview

Coefficient Giving is a major philanthropic organization that has directed over $4 billion in grants since 2014 across global health, AI safety, pandemic preparedness, farm animal welfare, and other cause areas. In November 2025, the organization rebranded from Open Philanthropy to Coefficient Giving, signaling an expansion from serving primarily one anchor donor (Good Ventures, the foundation of Dustin Moskovitz and Cari Tuna) to operating 13 cause-specific funds open to multiple philanthropists. The name "Coefficient" reflects the organization's goal of multiplying impact through research, grantmaking, and partnerships—with "co" nodding to collaboration and "efficient" reflecting their unusual focus on cost-effectiveness.

Coefficient Giving is widely considered the largest funder of AI safety work globally. Since 2014, approximately $336 million (12% of total grants) has gone to AI safety research and governance, with roughly $46 million deployed in 2023 alone—making it the dominant external funder in a field where most safety research happens inside frontier AI labs. The organization's Navigating Transformative AI Fund supports technical AI safety research, AI governance and policy work, and capacity building, with a $40 million Technical AI Safety RFP launched in 2025 covering 21 research areas.

The organization distinguishes itself through its strategic cause selection methodology—identifying problems that are large, tractable, and neglected relative to their size. This approach, combined with a willingness to fund speculative research and support multiple funding mechanisms (direct grants, regranting programs, pooled funds), has made Coefficient Giving central to the effective altruism funding ecosystem. However, critics have noted concerns about funding concentration, the slow pace of spending relative to the scale of AI risks, and heavy focus on evaluations over alignment research in recent technical AI safety grants.

History and Evolution

Origins (2011-2017)

Coefficient Giving traces its origins to 2011 when GiveWell, the charity evaluator founded by Holden Karnofsky and Elie Hassenfeld, began advising Good Ventures on how to deploy Dustin Moskovitz's philanthropic capital effectively. Good Ventures was established by Moskovitz (Facebook co-founder, net worth ≈$12 billion) and Cari Tuna in 2011. By 2014, this advising relationship formalized into "Open Philanthropy" as a distinct project within GiveWell, focused on identifying high-impact giving opportunities across a broader range of cause areas than GiveWell's traditional global health focus.

In 2017, Open Philanthropy spun off from GiveWell as an independent LLC, enabling it to pursue its own strategic priorities while GiveWell continued focusing on evidence-backed global health interventions. The separation reflected diverging methodologies: GiveWell prioritizes robust evidence of effectiveness, while Open Philanthropy embraced "hits-based giving"—funding speculative, high-variance projects where a few major successes could justify many failures.

Growth and AI Safety Focus (2015-2024)

Coefficient Giving began supporting AI safety work in 2015, when the field was nascent and institutional support was minimal. Early grants helped establish foundational organizations including the Machine Intelligence Research Institute (MIRI), the Center for Human-Compatible AI at UC Berkeley, and the Future of Humanity Institute at Oxford. By 2023, AI safety had become Coefficient Giving's largest longtermist cause area, reflecting growing concern about advanced AI risks among the leadership team.

| Year | AI Safety Milestone |
|---|---|
| 2015 | First AI safety grants; field had ≈10 full-time researchers |
| 2017 | Independent organization; Holden Karnofsky publishes AI concerns |
| 2019 | AI safety spending exceeds $20M annually |
| 2022 | $150M Regranting Challenge launched (not AI-specific) |
| 2023 | ≈$46M AI safety spending; largest funder in the field |
| 2024 | ≈$50M committed; 68% to evaluations/benchmarking |
| 2025 | Rebrand to Coefficient Giving; $40M Technical AI Safety RFP |

The November 2025 Rebrand

On November 18, 2025, Open Philanthropy announced its rebranding to Coefficient Giving. The change reflected several strategic shifts:

Multi-Donor Expansion: The organization moved from primarily serving Good Ventures to operating pooled funds open to any philanthropist. In 2024, Coefficient directed over $100 million from donors besides Good Ventures; by 2025, non-Good Ventures funding had more than doubled.

Brand Clarity: The old "Open Philanthropy" name created confusion—journalists mistook it for OpenAI, and potential grantees confused it with Open Society Foundations. "Coefficient Giving" was chosen to provide a distinctive identity and resolve that confusion.

Structural Reorganization: The organization restructured from program areas to 13 distinct funds, each with dedicated leadership and transparent goals, allowing donors to support specific causes at scale.

Organizational Structure

```mermaid
flowchart TD
  GV[Good Ventures<br/>Moskovitz & Tuna] --> CG[Coefficient Giving<br/>$4B+ directed]
  OD[Other Donors<br/>$100M+ in 2024] --> CG

  CG --> NTAI[Navigating<br/>Transformative AI]
  CG --> BPP[Biosecurity &<br/>Pandemic Prep]
  CG --> GCR[Global Catastrophic<br/>Risks Opportunities]
  CG --> GH[Global Health<br/>& Wellbeing]
  CG --> OTHER[10 Other Funds]

  NTAI --> TAIS[Technical AI<br/>Safety Research]
  NTAI --> GOV[AI Governance<br/>& Policy]
  NTAI --> CAP[Capacity Building<br/>& Field Growth]

  TAIS --> CAIS[Center for AI Safety]
  TAIS --> REDWOOD[Redwood Research]
  TAIS --> MIRI[MIRI]

  style GV fill:#e6f3ff
  style CG fill:#ccffcc
  style NTAI fill:#ffffcc
  style TAIS fill:#ffcccc
```

The 13 Funds Model

Since the November 2025 rebrand, Coefficient Giving operates through 13 cause-specific funds, each pooling money from multiple donors:

| Fund | Focus | Key Activities |
|---|---|---|
| Navigating Transformative AI | AI safety & governance | Technical research, policy, capacity building |
| Biosecurity & Pandemic Preparedness | Catastrophic bio risks | Research, policy, infrastructure |
| Global Catastrophic Risks Opportunities | Cross-cutting x-risk work | Ecosystem support, foundational work |
| Science and Global Health R&D | Neglected disease research | TB, malaria, high-risk transformational science |
| Global Health Policy | Policy for health impact | Lead exposure, air pollution |
| Global Aid Policy | Development effectiveness | Evidence-based aid policy |
| Farm Animal Welfare | Factory farming reform | Welfare reforms, alternative proteins |
| Effective Giving and Careers | EA movement building | Giving What We Can, 80,000 Hours |
| Abundance & Growth | Economic prosperity | $120M launched 2025 for scientific progress |
| Criminal Justice Reform | US criminal justice | Bail reform, prosecutorial accountability |
| Land Use Reform | Housing and development | YIMBY policy, zoning reform |
| Immigration Policy | Immigration reform | Policy research and advocacy |
| Other Global Health | Remaining health causes | Malaria, deworming, direct cash transfers |

The Navigating Transformative AI Fund is Coefficient's primary vehicle for AI-related grantmaking, supporting:

Technical AI Safety Research: Work aimed at making advanced AI systems more trustworthy, robust, controllable, and aligned. This includes interpretability research, robustness to adversarial inputs, scalable oversight methods, and understanding emergent capabilities.

AI Governance and Policy: Frameworks for safe, secure, and responsibly managed AI development, including export controls, compute governance, international coordination, and corporate governance mechanisms.

Capacity Building: Growing and strengthening the field of researchers and practitioners working on AI challenges, including training programs, career development, and institutional infrastructure.

Short-Timeline Projects: New projects expected to be particularly impactful if timelines to transformative AI are short, reflecting Coefficient's view that advanced AI could emerge within the next 5-15 years.

Regrantor Model

| Component | Description |
|---|---|
| Selection | Coefficient Giving identifies trusted individuals with relevant expertise |
| Budget | Each regrantor receives $200K-$2M to distribute |
| Autonomy | Regrantors make independent decisions within guidelines |
| Reporting | Regrantors document grants; Coefficient Giving maintains oversight |
| Renewal | Strong performers may receive additional budgets |

Regrantor Criteria

| Criterion | Description |
|---|---|
| Domain Expertise | Deep knowledge in cause area |
| Community Connections | Know who does good work |
| Judgment | Track record of good decisions |
| Capacity | Time to evaluate and make grants |
| Values Alignment | Share EA/longtermist priorities |

AI Safety Grantmaking

2024 Technical AI Safety Funding Breakdown

An analysis of Coefficient Giving's Technical AI Safety funding revealed the following distribution of the $28M recorded in their database:

| Research Area | Percentage | Amount (~) | Assessment |
|---|---|---|---|
| Evaluations/Benchmarking | 68% | $19M | Primary focus; critics note AI Safety Institutes already well-resourced |
| Interpretability | ≈10% | ≈$3M | Mechanistic interpretability, circuit analysis |
| Robustness | ≈5% | ≈$1.5M | Adversarial robustness, red-teaming |
| Value Alignment | ≈5% | ≈$1.5M | RLHF alternatives, preference learning |
| Field Building | ≈5% | ≈$1.5M | Training programs, community |
| Forecasting | ≈3% | ≈$1M | Timelines, capabilities |
| Other | ≈4% | ≈$1M | Governance research, miscellaneous |

Note: The $28M figure underestimates total 2024 spending as some approved grants had not been posted to the database at time of analysis. Coefficient acknowledged spending "roughly $50 million" on technical AI safety in 2024.
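
The dollar figures in the table above are just the percentage shares applied to the ≈$28M database total. A minimal sketch of that arithmetic (using the approximate figures quoted here, not official data):

```python
# Back-of-envelope check of the 2024 technical AI safety breakdown:
# percentage shares applied to the ~$28M recorded in the grants database.
TOTAL_M = 28  # approximate $M recorded at time of analysis

shares = {
    "Evaluations/Benchmarking": 0.68,
    "Interpretability": 0.10,
    "Robustness": 0.05,
    "Value Alignment": 0.05,
    "Field Building": 0.05,
    "Forecasting": 0.03,
    "Other": 0.04,
}

# Convert each share into an approximate dollar amount in $M.
amounts = {area: share * TOTAL_M for area, share in shares.items()}
for area, amt in amounts.items():
    print(f"{area:26s} ~${amt:.1f}M")
```

Running this reproduces the table's rounded amounts (e.g. 68% of $28M ≈ $19M) and confirms the shares sum to 100%.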

Historical Major AI Safety Grants

Totals from Coefficient Giving's public grants database (includes grants made when the organization was named Open Philanthropy):

| Grantee | Total (All Years) | Period | Notable Impact |
|---|---|---|---|
| Redwood Research | $26.5M | 2021-2025 | Interpretability methods, control research (plus $3M for Constellation coworking space) |
| Future of Humanity Institute | $20.4M | 2016-2021 | Strategic analysis (closed 2024) |
| EA Funds (LTFF + Infrastructure) | $19.3M | 2022-2023 | Regranting via Long-Term Future Fund and EA Infrastructure Fund |
| Center for Human-Compatible AI | $17.8M | 2016-2021 | Stuart Russell's CHAI lab ($23M including BERI-CHAI collaborations) |
| MIRI | $14.8M | 2016-2020 | Agent foundations, embedded agency |
| Epoch AI | $13.3M | 2022-2025 | Compute trends, capability timelines |
| Center for AI Safety | $12.5M | 2022-2023 | Compute cluster, training programs |
| Anthropic | $0 directly | N/A | VC-funded; no grants in Coefficient Giving's database. GiveWell/Open Philanthropy co-founder Holden Karnofsky is married to Anthropic co-founder Daniela Amodei and joined Anthropic in 2025. |

2025 Technical AI Safety RFP

In early 2025, Coefficient launched a $40 million Request for Proposals across 21 research areas, with funding available for substantially more based on application quality. Key features:

Priority Research Areas (starred items are especially prioritized):

| Category | Research Areas |
|---|---|
| Alignment Foundations | Alternatives to adversarial training*, alignment faking*, scalable oversight* |
| Interpretability | Mechanistic interpretability*, representation engineering, probing |
| Evaluation | Dangerous capability evaluations*, propensity evaluations*, automated red-teaming |
| Robustness | Adversarial robustness, distribution shift, specification gaming |
| Governance-Adjacent | AI governance research, responsible scaling policies |

Grant Characteristics:

| Aspect | Details |
|---|---|
| Size Range | API credits ($1-10K) to seed funding for new orgs ($1M+) |
| Application | 300-word expression of interest (EOI) |
| Response Time | Within 2 weeks of EOI submission |
| Decision Timeline | 4-8 weeks for full proposals |
| Eligibility | Academic researchers, nonprofits, independent researchers, new orgs |

Regranting Ecosystem

Coefficient Giving supports multiple regranting platforms and mechanisms to achieve faster, more distributed funding decisions. This represents a deliberate strategy to complement slower direct grantmaking with nimble, expert-driven allocation.

Funding Flow Through Regranting

```mermaid
flowchart LR
  CG[Coefficient Giving] --> LTFF[Long-Term<br/>Future Fund]
  CG --> MF[Manifund<br/>Regrants]
  CG --> SFF[Survival &<br/>Flourishing Fund]

  LTFF --> I1[Individual<br/>Researchers]
  LTFF --> O1[Small Orgs]

  MF --> I2[Early-Stage<br/>Projects]
  MF --> I3[Emerging<br/>Talent]

  SFF --> O2[Established<br/>Orgs]

  style CG fill:#e6f3ff
  style LTFF fill:#ffffcc
  style MF fill:#ccffcc
  style SFF fill:#ffcccc
```

Long-Term Future Fund (LTFF)

The Long-Term Future Fund is a committee-based grantmaking fund that receives significant support from Coefficient. About half of LTFF funding historically comes from Coefficient Giving donations.

| Aspect | Details |
|---|---|
| Annual Volume | ≈$6.7M (2023) |
| AI Safety Portion | ≈$4.3M (≈65% of grants) |
| Grant Count | ≈200 grants per year |
| Median Grant | ≈$15-30K |
| Decision Model | Committee of fund managers |
| Transparency | High (public grant reports) |

LTFF grants tend toward smaller, faster decisions than direct Coefficient grants, serving researchers and projects that may not yet warrant Coefficient's full evaluation process.

Manifund AI Safety Regranting

Manifund operates a distinct regranting model where individual experts receive budgets to make independent funding decisions. For 2025, Manifund raised $2.25 million and announced their first 10 regrantors.

Named 2025 Regrantors:

| Regrantor | Budget | Background | Focus |
|---|---|---|---|
| Evan Hubinger | $450K | Anthropic AGI safety researcher, former LTFF manager | Technical AI safety |
| Ryan Kidd | ≈$100K+ | Co-director of SERI MATS | Emerging talent |
| Marius Hobbhahn | ≈$100K+ | CEO of Apollo Research | Evaluations, scheming |
| Lisa Thiergart | ≈$100K+ | Director at SL5 Task Force, former MIRI | Governance |
| Gavin Leech | ≈$100K+ | Co-founder of Arb Research | Research reviews |
| Dan Hendrycks | ≈$100K+ | Director of CAIS | Safety research |
| Adam Gleave | ≈$100K+ | CEO of FAR AI | Adversarial robustness |

Manifund Regranting Characteristics:

| Feature | Details |
|---|---|
| Speed | Grant to bank account in under 1 week |
| Typical Grant Size | $5K-$50K |
| Decision Authority | Solo regrantor decisions |
| Oversight | Manifund reviews but doesn't approve |
| Risk Tolerance | High (encourages speculative grants) |

Notable Manifund Grants:

| Project | Amount | Regrantors | Impact |
|---|---|---|---|
| Timaeus (DevInterp) | $143,200 | Evan Hubinger, Rachel Weinberg, Marcus Abramovitch, Ryan Kidd | First funding; accelerated research by months |
| ChinaTalk | $37,000 | Joel Becker, Evan Hubinger | Coverage of China/AI, including DeepSeek |
| Shallow Review 2024 | $9,000 | Neel Nanda, Ryan Kidd | Induced a further $5K from Coefficient Giving |

Survival and Flourishing Fund (SFF)

The Survival and Flourishing Fund uses a unique "S-process" algorithm for grant allocation, primarily funded by Jaan Tallinn (Skype co-founder). While Coefficient and SFF are independent, they share many grantees and strategic priorities.

| Aspect | Coefficient | SFF |
|---|---|---|
| 2024 Volume | ≈$650M total | ≈$24M |
| AI Safety % | ≈12% | ≈86% ($20M) |
| Decision Model | Staff + regrantors | S-process algorithm |
| Speed | Rolling | Twice-yearly rounds |
| Grantee Overlap | High | High |
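
The S-process itself is substantially more involved (it aggregates curves from multiple evaluators and funders), but its core idea, allocating a budget along declining marginal-value curves, can be illustrated with a toy sketch. All organizations, curves, and budget figures below are hypothetical:

```python
# Toy sketch of marginal-value budget allocation in the spirit of SFF's
# S-process. Hypothetical declining marginal-value curves: each function
# returns the value of the (k+1)-th $100K given to that org.
curves = {
    "OrgA": lambda k: 10 / (1 + k),        # high initial value, declines fast
    "OrgB": lambda k: 6 / (1 + 0.5 * k),   # moderate value, moderate decline
    "OrgC": lambda k: 4 / (1 + 0.1 * k),   # lower but nearly flat value
}

budget_units = 20  # 20 x $100K = $2M to allocate
allocation = {org: 0 for org in curves}

for _ in range(budget_units):
    # Give the next $100K to whichever org has the highest marginal value
    # at its current funding level.
    best = max(curves, key=lambda org: curves[org](allocation[org]))
    allocation[best] += 1

print({org: f"${units * 100}K" for org, units in allocation.items()})
```

The notable property, which the real S-process shares, is that no org absorbs the whole budget: once an org's marginal value falls below a competitor's, further dollars flow elsewhere.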

How to Apply for Funding

Direct Application to Coefficient

The most straightforward path for substantial funding requests:

| Step | Details | Timeline |
|---|---|---|
| 1. Check RFPs | Review active Requests for Proposals | Ongoing |
| 2. Submit EOI | 300-word expression of interest describing project | N/A |
| 3. Initial Response | Coefficient responds with interest level | 2 weeks |
| 4. Full Proposal | If invited, submit detailed proposal with budget | 2-4 weeks to prepare |
| 5. Due Diligence | Coefficient evaluates organization and proposal | 4-8 weeks |
| 6. Decision | Grant approval or rejection | Total: 2-4 months |

Tips for Applicants (from Coefficient's guidance):

The bar is intentionally low for submitting expressions of interest. Key failure modes to avoid include not demonstrating understanding of prior work (read papers linked in relevant RFP sections) and not demonstrating that your team has prior experience with ML projects. Even uncertain proposals are worth submitting as the RFP is partly an experiment to understand funding demand.

Via Regranting Platforms

Faster and more accessible for smaller grants:

| Platform | Best For | How to Apply |
|---|---|---|
| Manifund | $5-50K projects, emerging researchers | Create project on manifund.org, contact regrantors directly |
| LTFF | $10-100K, established track record | Apply via EA Funds |
| SFF | $100K+, established organizations | Apply during S-process rounds |

Finding Regrantors

Many regrantors are reachable through:

  • Direct outreach: Email or social media (many are publicly active on Twitter/X, LessWrong)
  • EA communities: EA Forum, Alignment Forum, local EA groups
  • Professional networks: AI safety conferences (NeurIPS safety track, ICML), SERI MATS alumni
  • Manifund platform: Create project and regrantors may proactively reach out

Comparison with Other AI Safety Funders

| Aspect | Coefficient Giving | LTFF | SFF | Manifund |
|---|---|---|---|---|
| 2024 AI Safety Volume | ≈$50M | ≈$4.3M | ≈$20M | ≈$2M |
| Total Assets | Good Ventures ($12B+) | Pool of donors | Jaan Tallinn | Donors |
| Decision Model | Staff + regrantors | Committee | S-process algorithm | Individual regrantors |
| Typical Grant Size | $100K-$5M | $15-100K | $100K-$2M | $5-50K |
| Speed (EOI to decision) | 2-4 months | 1-3 months | 6 months (rounds) | Under 2 weeks |
| Transparency | Medium (public database) | High (detailed reports) | High (S-process public) | Very high (live on platform) |
| Risk Tolerance | Medium | Medium-High | Medium | High |
| Best For | Major grants, established orgs | Growing researchers | Established orgs | Early-stage, speculative |

Funding Gap Analysis

According to an overview of AI safety funding, total external philanthropic AI safety funding (≈$100M annually) is dwarfed by:

| Comparison | Amount | Ratio to Safety Funding |
|---|---|---|
| Generative AI Investment (2023) | ≈$24B | 240:1 |
| Frontier Lab Safety Budgets | ≈$500M+ combined | 5:1 |
| US Government AI R&D | ≈$3B annually | 30:1 |

This funding gap is a persistent concern in the AI safety community, though Coefficient and other funders argue that talent constraints, not funding, are often the binding limitation.
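
The ratios in the table follow directly from dividing each comparison figure by the ≈$100M external safety total; a quick sketch of that arithmetic, using the approximate figures quoted above:

```python
# Ratio arithmetic behind the funding-gap table (approximate figures, USD).
SAFETY = 100e6  # ~$100M/yr external philanthropic AI safety funding

comparisons = {
    "Generative AI investment (2023)": 24e9,
    "Frontier lab safety budgets (combined)": 500e6,
    "US government AI R&D (annual)": 3e9,
}

# Express each comparison as a multiple of annual external safety funding.
ratios = {name: amount / SAFETY for name, amount in comparisons.items()}
for name, r in ratios.items():
    print(f"{name}: {r:.0f}:1")
```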

Critical Assessment

Strengths

Scale and Stability: With Good Ventures' multi-billion dollar backing, Coefficient can make commitments that smaller funders cannot. This enables multi-year organizational support, compute grants, and substantial research programs.

Strategic Sophistication: The organization's cause selection methodology and research depth (public writeups, shallow investigations, deep dives) provides unusually transparent reasoning for grant decisions.

Ecosystem Building: By funding LTFF, Manifund, and other regranting mechanisms, Coefficient amplifies its reach while maintaining quality through trusted intermediaries.

Hits-Based Giving: Willingness to fund speculative research acknowledges that transformative progress often comes from unexpected directions, though this increases variance in outcomes.

Limitations and Criticisms

Funding Concentration: With Coefficient representing ~60% of external AI safety funding, the field is heavily dependent on one organization's worldview and priorities. Critics note this could lead to "possible solutions being overlooked or assumptions no longer being questioned."

Evaluation Focus: The heavy focus on evaluations/benchmarking (68% of 2024 technical grants) has drawn criticism. As one researcher noted, "This looks much worse than I thought it would, both in terms of funding underdeployment, and in terms of overfocusing on evals." Critics argue AI Safety Institutes are already well-resourced for evaluation work.

Alignment Neglect: Some researchers express disappointment that "there's so little emphasis in this RFP about alignment, i.e. research on how to build an AI system that is doing what its developer intended it to do."

Slow Spending: Coefficient has acknowledged that "in retrospect, our rate of spending was too slow, and we should have been more aggressively expanding support for technical AI safety work earlier." Key reasons cited include difficulty making qualified senior hires and disappointment with returns to past spending.

Grantmaking Depth vs. Volume Tension: Former technical AI safety lead Ajeya Cotra pursued a more "understanding-oriented" approach to grantmaking, forming deep inside views about how specific research directions connect to preventing AI takeover rather than relying on heuristics about researcher quality. This approach produced the narrowly targeted $25M agent benchmarks RFP but involved significant upfront research investment that reduced grant volume. Cotra characterized the broader organizational tension: "developing your views is a tonne of work... and in the meantime, you're not getting money out the door."

Grants Database Limitations: The public grants database "offers an increasingly inaccurate picture" of Coefficient's work, as it generally excludes funding advised from non-Good Ventures donors. Coefficient is considering deprecating it.

Strategic Questions

| Question | Context |
|---|---|
| Funding deployment rate | Is $50M/year appropriate given AI development pace? |
| Evaluation vs alignment balance | Should more funding go to core alignment research? |
| Lab relationships | How to maintain independence while funding lab-adjacent work? |
| Multi-donor model | Will expanding beyond Good Ventures change priorities? |
| Talent vs funding constraint | Is the field truly talent-constrained, or is this justifying underspending? |
| Inference compute transition | Should Coefficient prepare to shift from funding human researchers to purchasing AI inference compute as AI becomes more capable? |
| Crunch time readiness | Does the organization have decision-making processes capable of rapidly deploying billions during a potential intelligence explosion? |

Future Strategy: From Salaries to Inference Compute

Ajeya Cotra has argued that Coefficient Giving's spending profile may need to fundamentally shift as AI capabilities advance. Currently, over 80% of grant money goes to human salaries. But in a future where AI systems surpass most human researchers, the optimal allocation might shift heavily toward purchasing API credits and GPU time. Cotra suggests the organization should track what fraction of spending goes toward AI inference for grantees (already occurring via ChatGPT subscriptions and API credits), assess whether this fraction is growing appropriately relative to AI capabilities, and build readiness to deploy large sums rapidly during "crunch time."

Key challenges to this transition include organizational inertia (Coefficient's multi-layer grant approval process is designed for human-scale decisions, not billion-dollar compute purchases), model access uncertainty (frontier models may not be commercially available during the intelligence explosion), and the jaggedness problem (AI may automate some research domains well before others, making it hard to judge the right moment to shift). Cotra suggests Coefficient may want to hedge by investing in AI-exposed assets (like GPU manufacturers) to offset rising compute prices during crunch time.



