
Survival and Flourishing Fund (SFF)

SFF has distributed $141M since 2019 (primarily from Jaan Tallinn's ~$900M fortune), with the 2025 round totaling $34.33M (86% to AI safety). It uses a distinctive "S-process" mechanism in which 6-12 recommenders express their funding preferences as utility functions and an algorithm allocates grants, favoring projects with enthusiastic champions over consensus picks; the median grant is roughly $100K.

| Attribute | Details |
| --- | --- |
| Type | Funder |
| Founded | 2019 |
| Location | San Francisco, CA |
| Funding | ~$30-50M/year |
| Related people | Jaan Tallinn |
| Related organizations | QURI (Quantified Uncertainty Research Institute) |

Quick Assessment

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Scale | Major | $34.33M distributed in 2025; $100M+ since 2019 |
| AI focus | Dominant | 86% of 2025 grants to AI-related work (up from ≈50% in 2019) |
| Mechanism | Unique | S-process algorithmic allocation favoring champion-backed projects |
| Transparency | High | Publishes full grant lists with amounts; process documented |
| Speed | Varies | S-process: 3-6 months; Speculation Grants: 1-2 weeks |
| Grant size | Medium-large | Median ≈$100K; average ≈$274K for AI safety |
| Risk tolerance | Higher | Funds early-stage and speculative research |
| Primary funder | Jaan Tallinn | Skype/Kazaa co-founder, ≈$900M net worth |

Organization Details

| Attribute | Details |
| --- | --- |
| Full name | Survival and Flourishing Fund |
| Type | Virtual fund / donor-advised fund |
| Founded | 2019 (evolved from BERI's grantmaking) |
| Primary funder | Jaan Tallinn (also funds Lightspeed Grants) |
| Additional funders | Jed McCaleb, David Marble (Casey and Family Foundation), Survival and Flourishing Corp |
| Fiscal sponsor | Silicon Valley Community Foundation |
| Operator | Survival and Flourishing Corp (manages the S-process) |
| Website | survivalandflourishing.fund |
| Contact | sff-contact@googlegroups.com |
| Mechanism | S-process (multi-recommender simulation allocation) |
| Funding programs | S-process grant rounds (1-2/year), Speculation Grants (rolling), Matching Pledges (2025+) |
| Total historical giving | $100M+ since 2019 |
| S-process developers | Andrew Critch, Jaan Tallinn, Oliver Habryka, Kevin Arlin, Jason Moggridge |

Overview

The Survival and Flourishing Fund (SFF) is the second-largest funder of AI safety research after Coefficient Giving, having distributed over $100 million since beginning grantmaking in 2019. Financed primarily by Jaan Tallinn, the Skype and Kazaa co-founder with an estimated net worth of approximately $900 million, SFF uses a distinctive algorithmic mechanism called the "S-process" to allocate grants based on recommendations from multiple advisors.

SFF originated from the Berkeley Existential Risk Initiative (BERI) in 2019 as a way to continue BERI's grantmaking activities while allowing BERI to focus on its core mission of operational support. Initially funded with approximately $2 million from BERI (itself funded by Tallinn), SFF has grown dramatically: from $2 million distributed in 2019 to $34.33 million in 2025.

SFF's focus has increasingly centered on AI safety as the field has grown. In 2025, approximately 86% of grants went to AI-related projects, up from roughly 50% in 2019. This reflects both Tallinn's longstanding concern about AI existential risk and the growing urgency perceived in the field. The fund supports a diverse portfolio ranging from technical research organizations (MIRI, METR, FAR AI) to policy groups (Center for AI Policy, GovAI) and field-building initiatives (SERI MATS, 80,000 Hours).

The S-process mechanism distinguishes SFF from traditional foundations. Rather than having a single decision-maker or voting committee, SFF uses multiple "recommenders" (typically 6-12 per round) who express their funding preferences as mathematical utility functions. An algorithm then computes final allocations that respect funders' meta-preferences about which recommenders to trust on which topics. Critically, the system is designed to favor funding projects that at least one recommender is excited about, rather than projects that achieve consensus approval.

2025 Grant Round

SFF's 2025 grant round distributed $34.33 million across dozens of organizations, significantly exceeding the initial $10-20 million estimate. The round featured three specialized tracks: the Main Track (6 recommenders, $6-12M), the Freedom Track (3 recommenders, $2-4M), and the Fairness Track (3 recommenders, $2-4M). In total, twelve recommenders participated in evaluating applications for funder Jaan Tallinn.

2025 Breakdown by Cause Area

| Cause Area | Amount | Share | Key Recipients |
| --- | --- | --- | --- |
| AI safety & governance | ≈$29.5M | 86% | MIRI, METR, CAIS, GovAI, Apollo, FAR AI, university programs |
| Biosecurity | ≈$2.5M | 7% | SecureBio, Johns Hopkins CHS, NTI |
| Other x-risk | ≈$1.5M | 4% | Nuclear risk, forecasting, civilizational resilience |
| Meta/community | ≈$0.8M | 3% | EA community building, longevity, fertility research |

Notable 2025 AI Safety Grantees

| Organization | Focus Area | Notes |
| --- | --- | --- |
| MIRI | Technical alignment research | Longstanding SFF grantee; founded by Eliezer Yudkowsky |
| METR (formerly ARC Evals) | Frontier model evaluations | Leading dangerous-capability evaluations; budget growing rapidly |
| Center for AI Safety | Research and advocacy | Total SFF funding: $6.4M+ historically |
| Apollo Research | Deception detection in AI | Leading European evals group; recent o1 research |
| GovAI | AI governance research | Oxford-based policy research |
| FAR AI | Alignment research | Technical safety research |
| SecureBio | AI + biosecurity intersection | $250K in 2025; some recommenders felt it deserved more |
| Palisade Research | Security research | AI safety security focus |

2025 Matching Pledge Program

New to 2025, SFF introduced a Matching Pledge Program designed to diversify the funding landscape and increase grantee independence. Matching Pledges are commitments by funders to match outside donations at specified rates (0.5x, 1x, 2x, or 3x) up to pledged amounts. Organizations that opted into the program received algorithmic boosts to their evaluations, factoring in the expected leverage from external donors.

The goals of the Matching Pledge Program include:

  • Diversifying funding sources beyond SFF
  • Encouraging other donors to give more
  • Increasing fundraising robustness and independence of grantees
  • Reducing single-funder dependency risk
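The arithmetic of a matching pledge can be sketched as follows. This is an illustrative model with hypothetical numbers, not SFF's actual accounting; the function name and parameters are assumptions:

```python
def matched_total(outside_donation: float, match_rate: float, pledge_cap: float) -> float:
    """Total raised when a funder matches outside donations.

    match_rate: the pledge's rate, e.g. 0.5, 1.0, 2.0, or 3.0 (the rates SFF lists).
    pledge_cap: the maximum the matching funder has pledged to contribute.
    """
    match = min(outside_donation * match_rate, pledge_cap)
    return outside_donation + match

# Hypothetical example: a 2x match capped at $100K.
print(matched_total(40_000, 2.0, 100_000))  # 120000.0 ($40K outside + $80K match)
print(matched_total(80_000, 2.0, 100_000))  # 180000.0 (match hits the $100K cap)
```

The cap is what makes a pledge a bounded commitment: leverage for the grantee is highest while outside donations remain below `pledge_cap / match_rate`.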

Non-AI Existential Risk (~14% / ≈$5M)

| Category | Approximate Amount | Example Organizations |
| --- | --- | --- |
| Biosecurity | ≈$2,500,000 | SecureBio, Johns Hopkins Center for Health Security, NTI Bio |
| Nuclear risk | ≈$500,000 | Various organizations working on nuclear security |
| Civilizational resilience | ≈$1,000,000 | ALLFED, global catastrophic risk research |
| Meta/other | ≈$1,000,000 | Forecasting, fertility research, longevity, memetics research |

The S-Process Mechanism

The S-process ("S" stands for "Simulation") is SFF's distinctive grant allocation mechanism, co-developed by Andrew Critch, Jaan Tallinn, Oliver Habryka, Kevin Arlin, and Jason Moggridge. Unlike traditional grantmaking where a committee votes or a single program officer decides, the S-process uses mathematical preference functions and an optimization algorithm to allocate funding.

```mermaid
flowchart TD
  subgraph Inputs["Inputs"]
      APPS[Applications<br/>Organizations submit funding requests]
      RECS[Recommenders<br/>4-12 experts per round]
      DONORS[Funder Meta-Preferences<br/>Trust weights per recommender]
  end

  subgraph Process["S-Process Simulation"]
      EVAL[Marginal Value Functions<br/>Each recommender specifies utility<br/>curves for each application]
      DISC[Discussion Rounds<br/>Multiple meetings to refine<br/>evaluations and share information]
      ALGO[Allocation Algorithm<br/>Cycles through recommenders,<br/>each allocates next $1K to<br/>highest-value application]
  end

  subgraph Output["Output"]
      GRANTS[Final Grant Amounts<br/>Published with full transparency]
      MATCH[Matching Pledges<br/>Optional leverage for grantees]
  end

  APPS --> EVAL
  RECS --> EVAL
  DONORS --> ALGO
  EVAL --> DISC
  DISC --> ALGO
  ALGO --> GRANTS
  ALGO --> MATCH

  style Inputs fill:#e6f3ff
  style Process fill:#ccffcc
  style Output fill:#ffffcc
```

How It Works: Step by Step

The S-process operates through a structured series of meetings and algorithmic simulations:

1. Application Submission: Organizations submit applications via the SFF Funding Rolling Application, describing their work, funding needs, and theory of change. Applications are accepted on a rolling basis throughout the year.

2. Recommender Selection: For each grant round, funders agree on a set of 4-12 "Recommenders" with relevant expertise. The 2025 round featured 12 recommenders across three tracks (Main, Freedom, Fairness), with 6 recommenders in the Main Track and 3 each in the specialized tracks.

3. Initial Evaluation: Recommenders review applications and specify marginal value functions for funding each organization. These functions express how much value the recommender places on each additional dollar granted to each applicant.

4. Discussion Meetings: Over a series of 4+ hour-long meetings, recommenders discuss applications, share information, and adjust their evaluations. According to recommender Zvi Mowshowitz, this typically involves "several additional discussions with other recommenders individually, many hours spent reading applications, doing research and thinking about what recommendations to make."

5. Funder Meta-Preferences: Funders (primarily Jaan Tallinn) specify their own value functions for deferring to each recommender. This creates a weighted influence system where funders can express differential trust in recommenders for different cause areas.

6. Algorithm Computes Allocations: The S-process algorithm runs a simulation that cycles through recommenders. In each cycle, each recommender allocates their next $1,000 to whichever application has the highest marginal value according to their function, given what's already been allocated. This continues until budgets are exhausted.

7. Final Adjustments: Funders review algorithmic recommendations and may make adjustments. They retain final authority over all grants and can make grants the algorithm didn't endorse based on information learned during the process.

8. Publication: Final grant amounts are published on the SFF website with full transparency about recipients and amounts.
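The cycling allocation in step 6 can be sketched in code. This is a simplified illustration under stated assumptions, not SFF's actual implementation: the marginal-value functions, budgets, and organization names below are all hypothetical, and the real S-process additionally incorporates funder meta-preferences.

```python
# Simplified sketch of the S-process cycling allocation (illustrative only).
# Each recommender supplies a marginal-value function: the value of the NEXT
# dollar to an application, given how much that application has already received.
from typing import Callable, Dict

STEP = 1_000  # each recommender directs $1K per turn

def s_process(marginal_value: Dict[str, Dict[str, Callable[[float], float]]],
              budgets: Dict[str, float]) -> Dict[str, float]:
    """marginal_value[rec][app](already_allocated) -> value per dollar.
    budgets[rec] -> total dollars that recommender directs."""
    allocated = {app: 0.0 for rec in marginal_value for app in marginal_value[rec]}
    remaining = dict(budgets)
    while any(b >= STEP for b in remaining.values()):
        for rec, funcs in marginal_value.items():
            if remaining[rec] < STEP:
                continue
            # Fund the application with the highest current marginal value.
            best = max(funcs, key=lambda app: funcs[app](allocated[app]))
            if funcs[best](allocated[best]) <= 0:
                remaining[rec] = 0  # nothing left this recommender values
                continue
            allocated[best] += STEP
            remaining[rec] -= STEP
    return allocated

# Hypothetical example: recommender A champions org X; B mildly prefers Y.
# Marginal value declines as an org receives more (diminishing returns).
mv = {
    "A": {"X": lambda x: 10 - x / 20_000, "Y": lambda y: 1 - y / 20_000},
    "B": {"X": lambda x: 1 - x / 20_000, "Y": lambda y: 2 - y / 20_000},
}
grants = s_process(mv, {"A": 60_000, "B": 20_000})
print(grants)  # {'X': 60000.0, 'Y': 20000.0}
```

Note how this captures the champion principle: X receives substantial funding because A alone values it highly, even though B rates X poorly; no averaging across recommenders occurs.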

Key Design Principle: Champion-Based Funding

The S-process is explicitly designed to favor funding things that at least one recommender is excited about, rather than things that every recommender is excited about. As SFF explains:

"The grant recommendations do not especially represent the 'average' opinion of the group in any sense."

This means organizations benefit most from having one or two strong champions among the recommenders, rather than achieving lukewarm consensus support. The cycling allocation mechanism ensures every recommender's top priorities get funded, with marginal decisions depending on finding enthusiastic backers.

Advantages of the S-Process

| Advantage | Description | Evidence |
| --- | --- | --- |
| Champion discovery | Surfaces projects with passionate advocates | Cycling algorithm prioritizes each recommender's top picks |
| Expertise matching | Different recommenders evaluate areas where they have expertise | 2025 round used specialized Freedom and Fairness tracks |
| Preference aggregation | Mathematically combines diverse views without averaging | Utility-function approach preserves intensity of preferences |
| Scalability | Can process hundreds of applications efficiently | Handles $34M+ rounds with dozens of grantees |
| Transparency | Process is documented; results are published | Full grant lists available on SFF website |
| Reduced single-point failure | No single gatekeeper makes all decisions | Multiple recommenders required for funding |
| Funder autonomy | Donors retain final decision authority | Can override algorithmic recommendations |

Criticisms and Limitations

Zvi Mowshowitz, who served as an SFF recommender, has written extensively about the process's limitations:

| Limitation | Description | Mitigating Factors |
| --- | --- | --- |
| Time constraints | Recommenders have limited time (typically 30-60 min per applicant) despite the scope | Multiple recommenders provide redundancy |
| Complexity | Process is harder to understand than traditional grants | Detailed documentation available |
| Newcomer disadvantage | Organizations unknown to recommenders may be overlooked | Speculation Grants provide an entry path |
| Large-ask incentives | Process rewards asking for large amounts | Algorithm accounts for diminishing marginal value |
| Legibility bias | Favors organizations with credible, recognizable stories | Recommender diversity helps |
| EA ecosystem capture | EA relationships heavily influence decisions despite no official EA affiliation | Specialized tracks (Freedom, Fairness) broaden perspective |
| Limited feedback | Rejected applicants may not understand why | Trade-off with recommender time |
| Gaming potential | Recommenders could strategically misrepresent preferences | Process design and repeated interaction limit this |

Jaan Tallinn: Primary Funder

Jaan Tallinn (born February 14, 1972) is an Estonian programmer, entrepreneur, and one of the most significant individual funders of AI safety research globally. His estimated net worth of approximately $900 million derives primarily from his founding role in two transformative tech companies: Kazaa (peer-to-peer file sharing) and Skype (sold to eBay in 2005, later to Microsoft for $8.5 billion in 2011).

Background and Career

| Period | Role | Significance |
| --- | --- | --- |
| 1989 | Co-founder, Bluemoon (Estonia) | Created Kosmonaut, the first Estonian game sold abroad |
| 1996 | B.S. Theoretical Physics, University of Tartu | Academic foundation |
| ≈2001-2003 | Developer, FastTrack/Kazaa | Built P2P technology later repurposed for Skype |
| 2003-2005 | Founding engineer, Skype | Core developer; sold to eBay 2005 |
| 2012 | Co-founder, CSER | Cambridge Centre for the Study of Existential Risk (with Huw Price, Martin Rees) |
| 2014 | Co-founder, FLI | Future of Life Institute (with Max Tegmark, Anthony Aguirre) |
| 2019 | Primary funder, SFF | Survival and Flourishing Fund |
| 2022 | Primary funder, Lightspeed Grants | Rapid-turnaround longtermist grantmaking |
| Present | Board member, CAIS | Center for AI Safety |
| Present | Member, UN AI Advisory Body | International AI governance |
| Present | Board, Bulletin of the Atomic Scientists | Nuclear/existential risk communication |

Tallinn became concerned about AI existential risk after reading works by Nick Bostrom and Eliezer Yudkowsky. He has said he has "yet to meet anyone working at AI labs who thinks the risk of training the next-generation model 'blowing up the planet' is less than 1%." He signed both the Future of Life Institute's 2023 open letter calling for a pause on training AI systems more powerful than GPT-4 and the Center for AI Safety's 2023 statement on mitigating extinction risk from AI.

Tallinn's AI Safety Investments and Philanthropy

Beyond grantmaking through SFF, Tallinn has made significant direct investments in AI safety:

| Investment/Grant | Type | Notes |
| --- | --- | --- |
| Anthropic | Series A lead investor | Board observer; AI safety-focused company |
| DeepMind | Series A investor | Early investor alongside Elon Musk, Peter Thiel (acquired by Google 2014) |
| MIRI | Grants | $1M+ since 2015 to Machine Intelligence Research Institute |
| CSER | Founding grant | ≈$200,000 initial donation in 2012 |
| Frontier Model Forum AI Safety Fund | Philanthropic partner | Alongside foundations like Schmidt Sciences, Packard |
| 100+ startups | VC investments | $130M+ invested, profits directed to AI safety nonprofits |

2024 Philanthropy Overview

According to Tallinn's 2024 philanthropy overview, he allocated approximately $20 million through his personal foundation in 2024, focusing on long-term alignment research and field-building initiatives. This made him one of the largest individual AI safety donors that year. Key 2024 initiatives included funding the AI Futures Project / AI 2027 initiative.

In the broader context of AI safety funding, Tallinn's contributions through SFF and direct giving represent approximately 15-20% of total philanthropic AI safety funding, second only to Coefficient Giving. Analysis of the AI safety funding landscape estimates global AI safety research funding reached $110-130 million in 2024, with Tallinn contributing approximately $20 million through his personal foundation plus additional amounts through SFF.

Historical Grant Patterns

Grant Round Totals by Year

SFF's grantmaking has grown dramatically since its founding:

| Round | Amount | Notes |
| --- | --- | --- |
| 2019-Q4 | $2.01M | First round; at high end of $1-2M estimate |
| 2020-H1 | $1.82M | Above $0.8-1.5M estimate |
| 2020-H2 | $3.63M | Above $2.5-3M estimate |
| 2021-H1 | $9.76M | At high end of $9-10M estimate |
| 2021-H2 | $9.61M | Middle of $8-12M estimate |
| 2022-H1 | $8.06M | Middle of $5-10M estimate |
| 2022-H2 | $10.0M | Above $8M estimate |
| 2023-H1 | $21.0M | Above $10M estimate |
| 2023-H2 | $21.29M | Includes $9.62M from Lightspeed Grants |
| 2024 | $19.86M | Above $5-15M estimate; includes $0.85M Speculation Grants |
| 2025 | $34.33M | Above $10-20M estimate; three-track structure |
| Total | ≈$141M | Since 2019 |

Note: 2023-H2 total includes Lightspeed Grants amounts that Jaan Tallinn requested be incorporated into the SFF announcement.

Evolution of Focus

| Period | Primary Focus | AI Share | Context |
| --- | --- | --- | --- |
| 2019 | X-risk broadly | ≈50% | Initial funding post-BERI split |
| 2020-2021 | Growing AI focus | ≈65% | GPT-3 release increases urgency |
| 2022-2023 | Strong AI emphasis | ≈75% | Post-FTX collapse; SFF becomes more critical |
| 2024-2025 | Dominant AI focus | ≈86% | ChatGPT/GPT-4 catalyze rapid field growth |

Notable Cumulative Grantees

Organizations that have received significant SFF funding across multiple rounds:

| Organization | Cumulative Total (Est.) | Focus | Status |
| --- | --- | --- | --- |
| MIRI | $15M+ | Technical alignment research | Ongoing; budget exceeds typical SFF allocation |
| Center for AI Safety | $6.4M+ | Research, advocacy, field-building | Ongoing; Tallinn is board member |
| METR (ARC Evals) | $5M+ | Frontier model evaluations | Budget growing beyond traditional x-risk funding |
| 80,000 Hours | $3M+ | Career guidance for impact | Ongoing |
| SERI MATS | $3M+ | AI safety mentorship program | Ongoing |
| GovAI | $2M+ | AI governance research | Oxford-based |
| QURI | $650K+ | Epistemic tools (Squiggle, Metaforecast) | Ongoing |
| Redwood Research | $2M+ | Alignment research | Technical interpretability |
| FAR AI | $1.5M+ | Alignment research | Technical safety |
| Conjecture | $1M+ | Alignment research | UK-based |
| Future Society | $627K | AI governance | Also received FLI funding |

SFF Timeline

| Year | Event | Significance |
| --- | --- | --- |
| 2019 | SFF founded from BERI | Evolved from Berkeley Existential Risk Initiative's grantmaking |
| 2019-Q4 | First grant round ($2.01M) | Established S-process mechanism |
| 2020 | GPT-3 release | Increased urgency around AI safety funding |
| 2021 | Major scale-up (≈$19M total) | Two rounds totaling nearly $20M; SFF becomes major funder |
| 2022 | Lightspeed Grants founded | Tallinn creates complementary rapid-turnaround fund |
| 2022 Nov | FTX/Future Fund collapse | SFF becomes more critical as Future Fund disappears |
| 2023 | Record funding (≈$42M) | Largest year; includes Lightspeed Grants integration |
| 2023 | Tallinn signs AI pause letter | FLI open letter calling for pause on GPT-4+ training |
| 2023 | Tallinn signs CAIS statement | "Mitigating extinction risk from AI should be a global priority" |
| 2024 | $19.86M distributed | Continued major funding |
| 2025 | $34.33M distributed | Largest single round; three-track structure and Matching Pledge Program launched |
| 2025 | Speculation Grants expand | ≈35 grantors with ≈$16M total budget |

Funding Ecosystem

```mermaid
flowchart TD
  subgraph Sources["Funding Sources"]
      JT[Jaan Tallinn<br/>Primary Funder<br/>≈$900M net worth]
      JM[Jed McCaleb]
      DM[David Marble<br/>Casey Family Foundation]
      SFC[Survival and Flourishing Corp]
  end

  subgraph Mechanisms["Funding Mechanisms"]
      SFF[SFF S-Process<br/>$14M in 2025<br/>Champion-based allocation]
      SPEC[Speculation Grants<br/>≈$16M budget<br/>1-2 week decisions]
      LIGHT[Lightspeed Grants<br/>≈$1M/round<br/>Days-weeks]
      MATCH[Matching Pledges<br/>2025+ program<br/>Leverage external donors]
  end

  subgraph Recipients["Major Recipient Categories"]
      AI[AI Safety Orgs<br/>86% of funding<br/>MIRI, METR, CAIS, etc.]
      BIO[Biosecurity<br/>7% of funding<br/>SecureBio, JHU CHS]
      OTHER[Other X-Risk<br/>7% of funding<br/>Nuclear, civilizational]
  end

  JT --> SFF
  JT --> SPEC
  JT --> LIGHT
  JM --> SFF
  DM --> SFF
  SFC --> SFF

  SFF --> AI
  SFF --> BIO
  SFF --> OTHER
  SPEC --> AI
  LIGHT --> AI
  MATCH --> AI

  style Sources fill:#e6f3ff
  style Mechanisms fill:#ccffcc
  style Recipients fill:#ffffcc
```

Speculation Grants Program

In addition to the main S-process rounds, SFF operates a Speculation Grants program for expedited funding. This addresses a key limitation of the S-process: its 3-6 month timeline can be too slow for time-sensitive opportunities.

How Speculation Grants Work

| Attribute | Details |
| --- | --- |
| Timeline | Decisions in 1-2 weeks (vs. 3-6 months for S-process) |
| Grantors | ≈35 "Speculation Grantors" with individual budgets |
| Total budget | ≈$16M across all grantors (up from $4M initially) |
| Per-grantor budget | Typically ≈$400K each |
| Funding source | All Speculation Grants currently funded by Jaan Tallinn |
| Application | Same form as the S-process; a single submission requests both simultaneously |

Key Features

Eligibility Gateway: Receiving a Speculation Grant of $10K+ guarantees eligibility for the next S-process round. This provides an entry path for organizations unknown to recommenders.

Speed vs. Information Trade-off: As the program notes, "to get money faster, you have to provide more information, not less." Applicants must submit full applications even for expedited funding.

S-Process Integration: If an organization receives a Speculation Grant and later receives an S-process recommendation, they only receive additional funds to the extent the S-process amount exceeds the Speculation Grant (avoiding double-counting).
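This integration rule reduces to a one-line computation. The numbers and function name below are hypothetical, used only to illustrate the no-double-counting logic:

```python
def additional_s_process_funds(speculation_grant: float, s_process_amount: float) -> float:
    """An org already holding a Speculation Grant receives only the amount by
    which its S-process recommendation exceeds that grant (no double-counting)."""
    return max(0.0, s_process_amount - speculation_grant)

# Hypothetical: a $150K Speculation Grant followed by a $400K S-process recommendation
# yields $250K of additional funds; a smaller recommendation yields nothing extra.
print(additional_s_process_funds(150_000, 400_000))  # 250000
print(additional_s_process_funds(150_000, 100_000))  # 0.0
```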

2024 Example

In the 2024 round, $0.85M had already been distributed through Speculation Grants; this amount was integrated into the total $19.86M round announcement.

Comparison with Other Funders

SFF operates alongside but independently from other major longtermist funders, each with distinct approaches and comparative advantages:

Funding Landscape Overview

| Funder | 2024 AI Safety | Grant Style | Speed | Grant Size | Risk Tolerance |
| --- | --- | --- | --- | --- | --- |
| Coefficient Giving | ≈$63.6M | Staff-driven | Months | Large ($1M+) | Moderate |
| SFF | ≈$20M (via Tallinn) | Recommender-aggregated | Weeks-months | Medium ($100K-$1M) | Higher |
| LTFF | ≈$4.3M | Committee | Weeks | Small-medium ($10K-$500K) | Higher |
| Lightspeed Grants | ≈$5M | Individual grantors | Days-weeks | Small ($5K-$100K) | Higher |

Source: EA Forum analysis of AI safety funding

Comparative Characteristics

| Dimension | SFF | Coefficient Giving | LTFF |
| --- | --- | --- | --- |
| Decision process | Multi-recommender algorithm | Staff research | Committee deliberation |
| Champion requirement | One enthusiastic backer | Staff conviction | Multiple committee members |
| Feedback to applicants | Limited | Moderate | Some public reasoning |
| Funding concentration | Diversified | Can concentrate heavily | Diversified |
| Independence from Coefficient | Full | N/A | Partial (40% Coefficient-funded in 2022) |
| Primary funder wealth | ≈$900M (Tallinn) | ≈$15B (Good Ventures) | Varied donors |

Strategic Position

SFF's Niche: SFF is often willing to fund organizations that other funders consider higher-risk or more speculative, making it an important source of support for early-stage research groups. The S-process's champion-based design means an organization can receive funding if even one recommender is strongly enthusiastic, whereas consensus-based approaches might reject the same application.

Post-FTX Importance: After the collapse of FTX and the Future Fund in late 2022, SFF became even more critical to the longtermist funding ecosystem. The Future Fund had been positioned as a major new funder with similar cause priorities; its disappearance increased reliance on SFF and Coefficient Giving.

LTFF Relationship: LTFF has received funding from both SFF and Coefficient Giving, making it partially downstream of these larger funders. About 40% of LTFF's 2022 funding came from Coefficient (then Open Philanthropy). LTFF typically makes smaller grants ($10K-$500K) compared to SFF's median ≈$100K and often funds individuals or very early-stage projects.

Lightspeed Grants: Also primarily funded by Jaan Tallinn, Lightspeed Grants focuses on even faster turnaround than SFF's Speculation Grants. The 2023-H2 round included $9.62M from Lightspeed Grants incorporated into the SFF announcement.

Application Process

How to Apply

Applications are submitted through the SFF Funding Rolling Application. A single submission requests consideration for both Speculation Grants (expedited) and the next S-process round.

| Application Element | Details |
| --- | --- |
| Submission form | SFF Funding Rolling Application (online) |
| Rolling acceptance | Applications accepted year-round |
| Dual consideration | Same application for Speculation Grants and S-process |
| Questions | Contact sff-contact@googlegroups.com |

Timeline

| Stage | Typical Timeline | Notes |
| --- | --- | --- |
| Speculation Grant decision | 1-2 weeks after submission | For time-sensitive requests; a $10K+ award guarantees S-process eligibility |
| S-process round | Announced 2-4 months before deadline | 1-2 rounds per year |
| S-process evaluation | 2-3 months | Recommender meetings, discussions, algorithm |
| Final recommendations | 1-2 months after evaluation | Published on SFF website |
| Fund distribution | Shortly after announcement | Via fiscal sponsor or direct to org |

Eligibility Requirements

| Criterion | Requirement | Notes |
| --- | --- | --- |
| Mission alignment | Work on existential risk, especially AI | Biosecurity, nuclear risk, civilizational resilience also funded |
| Legal status | 501(c)(3) or equivalent | International equivalents accepted |
| Speculation Grant | $10K+ award guarantees S-process eligibility | Provides entry path for new organizations |
| Funding need | Identified use of funds | Concrete budget and milestones |

Tips for Applicants

Based on public information about successful grants and recommender commentary:

What Works:

  1. Find a Champion: The S-process rewards having at least one recommender who is enthusiastic about your work. Being known to recommenders helps significantly.
  2. Clear Theory of Change: Explain specifically how your work reduces existential risk, with logical chain from activities to impact.
  3. Concrete Outputs: Describe specific deliverables and milestones rather than vague research directions.
  4. Team Credibility: Highlight relevant experience, past work, and track record. Reference legible signals where possible.
  5. Appropriate Ask Size: The process rewards asking for larger amounts, but ask for what you can actually absorb and deploy effectively.
  6. Provide More Information: For faster funding (Speculation Grants), provide more detail, not less.

Potential Challenges:

  • New organizations: without existing relationships with recommenders, they may need to go through Speculation Grants first
  • Non-AI focus: With 86% of funding going to AI, non-AI projects face steeper competition
  • Consensus-dependent projects: The champion-based model may disadvantage projects that are "good but not great" to everyone
  • Limited feedback: Rejected applicants may not receive detailed explanations

2025 Tracks

The 2025 round featured three specialized tracks, and all eligible applications were evaluated in all tracks:

| Track | Recommenders | Budget | Focus |
| --- | --- | --- | --- |
| Main Track | 6 | $6-12M | General x-risk, especially AI |
| Freedom Track | 3 | $2-4M | Projects supporting human freedom in the AI era |
| Fairness Track | 3 | $2-4M | Projects supporting fairness in the AI era |

SFF explains the specialized tracks: "Fairness and freedom are values SFF considers crucial to humanity's survival and flourishing in the era of AI technology, especially now that leading experts in AI have acknowledged that AI presents an extinction-level threat to humanity."

Key Debates and Considerations

The Champion-Based Model

The S-process's design to fund projects with at least one enthusiastic recommender, rather than consensus picks, is both a strength and a debate point:

Arguments For:

  • Surfaces innovative projects that might be filtered out by consensus processes
  • Allows recommenders with specialized knowledge to back projects others don't understand
  • Prevents "design by committee" homogenization of the funding portfolio
  • Rewards organizations that build strong relationships with knowledgeable advocates

Arguments Against:

  • May fund projects that are genuinely bad ideas one person happens to like
  • Creates incentives to cultivate individual recommenders rather than build broad support
  • Could lead to funding based on personal relationships rather than merit
  • Makes the recommender selection process highly consequential

EA Ecosystem Influence

Zvi Mowshowitz has noted that despite no official relationship between SFF and Effective Altruism, "at least the SFF process and its funds were largely captured by the EA ecosystem. EA reputations, relationships and framings had a large influence on the decisions made." This raises questions about:

  • Whether SFF provides genuine diversification from EA-aligned funders
  • How organizations outside EA networks can access SFF funding
  • Whether the 2025 Freedom and Fairness tracks genuinely broaden perspectives

Single-Funder Dependency

SFF is heavily dependent on Jaan Tallinn as its primary funder. While other funders (Jed McCaleb, David Marble) participate, Tallinn's ≈$900M net worth and commitment to AI safety are central to SFF's scale. This creates:

  • Sustainability risk: SFF's future depends significantly on Tallinn's continued wealth and priorities
  • Governance concentration: One person's views heavily shape funding direction
  • Mitigation efforts: The 2025 Matching Pledge Program explicitly aims to diversify funding sources

AI Concentration Trade-offs

The shift from ~50% AI focus in 2019 to ~86% in 2025 reflects both genuine urgency and potential trade-offs:

  • For: AI risk may genuinely be the most pressing x-risk; funding follows perceived importance
  • Against: Biosecurity, nuclear risk, and other x-risks may be relatively underfunded; portfolio diversification has value under uncertainty

Strengths and Limitations

Organizational Strengths

| Strength | Description | Evidence |
| --- | --- | --- |
| Scale | Second-largest AI safety funder after Coefficient Giving | $100M+ total; $34M in 2025 alone |
| Innovative mechanism | S-process leverages diverse expertise systematically | Mathematical preference aggregation; champion-based design |
| Speed options | Speculation Grants provide a rapid funding path | 1-2 week decisions; ≈$16M budget |
| Risk tolerance | Willing to fund speculative research | Funds early-stage orgs others won't |
| Transparency | Publishes complete grant lists | Full recipient and amount disclosure |
| Consistency | Reliable annual grantmaking | 1-2 rounds per year since 2019 |
| Funder commitment | Tallinn is deeply engaged | Board roles, direct investments, ongoing giving |

Organizational Limitations

| Limitation | Description | Mitigating Factors |
| --- | --- | --- |
| Single-funder risk | Heavily dependent on Jaan Tallinn | Matching Pledge Program; additional funders participating |
| Process complexity | S-process is harder to understand than traditional grants | Detailed documentation available |
| Recommender dependency | Unknown organizations face barriers | Speculation Grants provide an entry path |
| Limited feedback | Rejected applicants may not understand why | Trade-off with recommender time |
| AI concentration | 86% AI focus leaves other x-risks underfunded | Reflects genuine prioritization; other funders cover other areas |
| EA ecosystem influence | Despite independence, EA relationships matter | Specialized tracks aim to broaden perspectives |


References

1Jaan Tallinn - WikipediaWikipedia·Reference

Wikipedia biography of Jaan Tallinn, Estonian software engineer and co-founder of Skype, who became a prominent AI safety philanthropist and activist. He co-founded the Centre for the Study of Existential Risk (CSER) and the Future of Life Institute (FLI), and has been a major funder of AI safety research. His transition from tech entrepreneur to existential risk advocate makes him a significant figure in the AI safety community.

★★★☆☆
2. Survival and Flourishing Fund (survivalandflourishing.fund)

SFF is a philanthropic organization that coordinates grant recommendations for existential risk reduction and AI safety work, having distributed over $152 million since 2019. It uses a distinctive 'S-Process' for collaborative grant evaluation among multiple donors and advisors. SFF is a significant funding source for many leading AI safety organizations and researchers.

3. MIRI (Machine Intelligence Research Institute)

MIRI is a nonprofit research organization focused on ensuring that advanced AI systems are safe and beneficial. It conducts technical research on the mathematical foundations of AI alignment, aiming to solve core theoretical problems before transformative AI is developed. MIRI is one of the pioneering organizations in the AI safety field.

★★★☆☆

4. METR

METR is an organization conducting research and evaluations to assess the capabilities and risks of frontier AI systems, focusing on autonomous task completion, AI self-improvement risks, and evaluation integrity. It has developed the 'Time Horizon' metric measuring how long AI agents can autonomously complete software tasks, showing exponential growth over recent years. METR works with major AI labs including OpenAI, Anthropic, and Amazon to evaluate catastrophic risk potential.

★★★★☆

5. Center for AI Safety (CAIS)

The Center for AI Safety (CAIS) is a research organization focused on mitigating catastrophic and existential risks from advanced AI systems. It conducts technical research, publishes surveys and statements, and supports field-building efforts across academia and industry. CAIS is notable for its broad coalition-building, including its widely cited statement on AI extinction risk signed by leading researchers.

★★★★☆

6. Apollo Research

Apollo Research is an AI safety organization focused on evaluating frontier AI systems for dangerous capabilities, particularly 'scheming' behaviors where advanced AI covertly pursues misaligned objectives. It conducts LLM agent evaluations for strategic deception, evaluation awareness, and scheming, while also advising governments on AI governance frameworks.

★★★★☆

7. Centre for the Governance of AI (GovAI)

The Centre for the Governance of AI (GovAI) is a leading research organization dedicated to helping decision-makers navigate the transition to a world with advanced AI. It produces rigorous research on AI governance, policy, and societal impacts, while fostering a global talent pipeline for responsible AI oversight. GovAI bridges technical AI safety concerns with practical policy recommendations.

★★★★☆

8. FAR.AI

FAR.AI (Frontier Alignment Research) is an AI safety research non-profit focused on technical breakthroughs in AI alignment and fostering global collaboration. The organization conducts research aimed at ensuring advanced AI systems are safe and aligned with human values. It serves as an institutional hub for safety-focused technical research at the frontier of AI capabilities.

★★★★☆
9. SecureBio (securebio.org)

SecureBio is an organization focused on reducing biological risks, particularly those arising from advances in biotechnology and AI-enabled capabilities. They conduct research and advocacy at the intersection of biosecurity and emerging technologies, including the risks posed by large language models and AI systems that could lower barriers to bioweapon development.

10. SFF Grant Recommendations

The Survival and Flourishing Fund (SFF) publishes its grant recommendations for organizations working on existential risk reduction, AI safety, and related cause areas. The page documents funding decisions made through SFF's regrantor model, where individual advisors recommend grants to reduce risks to humanity's long-term future. It serves as a transparency record of which organizations and projects receive philanthropic support in the AI safety ecosystem.

11. CSER Overview - Cambridge (Centre for the Study of Existential Risk)

CSER is a multidisciplinary research centre at the University of Cambridge dedicated to studying and mitigating existential and global catastrophic risks, including those from advanced AI, biotechnology, and other emerging technologies. It brings together researchers from natural sciences, social sciences, and humanities to develop strategies for reducing civilisation-scale risks. CSER produces academic research, policy recommendations, and public engagement on long-term risks to humanity.

★★★★☆
12. Future of Life Institute

The Future of Life Institute (FLI) is a nonprofit organization focused on steering transformative technologies, particularly AI, away from catastrophic risks and toward beneficial outcomes. They operate across policy advocacy, research funding, education, and outreach to promote responsible AI development. FLI has been influential in key AI safety milestones including the open letter on AI risks and the Asilomar AI Principles.

★★★☆☆

13. United Nations AI Advisory Body

The United Nations AI Advisory Body is an intergovernmental initiative focused on international governance frameworks for artificial intelligence. It brings together experts and member states to develop recommendations on safe, inclusive, and accountable AI development globally. The body addresses cross-border AI risks and aims to strengthen multilateral cooperation on AI policy.

★★★★☆

14. Pause Giant AI Experiments open letter (Future of Life Institute)

A widely signed open letter published by the Future of Life Institute in March 2023, calling on all AI labs to pause for at least 6 months the training of AI systems more powerful than GPT-4. It argues that AI development has entered a dangerous uncontrolled race and calls for shared safety protocols, independent auditing, and accelerated AI governance frameworks before proceeding with more powerful systems.

★★★☆☆

15. Statement on AI Risk (Center for AI Safety)

A concise open letter coordinated by the Center for AI Safety stating that mitigating extinction-level risk from AI should be a global priority alongside pandemics and nuclear war. The statement has been signed by hundreds of leading AI researchers, executives, and public figures including Geoffrey Hinton, Yoshua Bengio, Sam Altman, and Demis Hassabis, lending significant institutional credibility to existential AI risk concerns.

★★★★☆

16. Anthropic

Anthropic is an AI safety company focused on building reliable, interpretable, and steerable AI systems. The company conducts frontier AI research and develops Claude, its family of AI assistants, with a stated mission of responsible development and maintenance of advanced AI for long-term human benefit.

★★★★☆

17. Google DeepMind

Google DeepMind is a leading AI research laboratory combining the former DeepMind and Google Brain teams, focused on developing advanced AI systems and conducting research across capabilities, safety, and applications. The organization is one of the most influential labs in AI development, working on frontier models including Gemini and publishing widely cited safety and capabilities research.

★★★★☆

18. 80,000 Hours

80,000 Hours is a nonprofit that provides research and advice on how to use your career to have the most positive impact on the world's most pressing problems, with a significant focus on AI safety and existential risk. It offers career guides, job boards, and in-depth research on high-priority cause areas and career paths. Its methodology emphasizes earning to give, direct work in high-impact fields, and building career capital.

★★★☆☆
19. Redwood Research: AI Control (redwoodresearch.org)

Redwood Research is a nonprofit AI safety organization that pioneered the 'AI control' research agenda, focusing on preventing intentional subversion by misaligned AI systems. Their key contributions include the ICML paper on AI Control protocols, the Alignment Faking demonstration (with Anthropic), and consulting work with governments and AI labs on misalignment risk mitigation.

20. Conjecture

Conjecture is an AI safety research company focused on cognitive emulation (CoEm) as an approach to building aligned AI systems. Its blog covers technical AI safety research, interpretability, and alignment strategies, with a particular emphasis on making AI systems that reason more like humans in interpretable ways.

21. SFF Speculation Grants

The Survival and Flourishing Fund's Speculation Grants program provides funding for early-stage, exploratory work related to existential risk reduction and long-term flourishing. It targets speculative or unconventional projects that may not yet meet the bar for larger grants but show potential for high impact. The program reflects a portfolio approach to philanthropic funding in the AI safety and existential risk space.

22. Coefficient Giving

Coefficient Giving is a philanthropic platform focused on directing funding toward high-impact AI safety and existential risk reduction efforts. It aims to help donors identify and support the most effective organizations working on preventing catastrophic AI outcomes. The platform provides guidance and resources for individuals seeking to contribute financially to the AI safety ecosystem.

★★★★☆
23. Long-Term Future Fund (Centre for Effective Altruism)

The Long-Term Future Fund is an Effective Altruism-affiliated grantmaking fund focused on improving humanity's prospects over the long run, particularly by supporting work on reducing existential and catastrophic risks. It funds research, advocacy, and capacity-building projects related to AI safety, biosecurity, and other global priorities. The fund is managed by a committee of EA community members and operates on a rolling grants basis.

★★★☆☆
24. SFF-2025 S-Process Recommendations (survivalandflourishing.fund)

The Survival and Flourishing Fund (SFF) publishes its 2025 grant recommendations resulting from its S-Process, a structured deliberation method used to allocate funding across AI safety, existential risk reduction, and related cause areas. The recommendations reflect the collective judgment of a group of independent donors about which organizations and projects merit philanthropic support. This serves as a public record of EA-aligned grantmaking priorities in the AI safety ecosystem.

25. SFF-2024 S-Process Recommendations (survivalandflourishing.fund)

The Survival and Flourishing Fund (SFF) publishes its 2024 grant recommendations resulting from the S-Process, a structured collaborative funding process where independent evaluators propose and debate grants for existential risk reduction and AI safety organizations. This page lists funded organizations, grant amounts, and evaluator reasoning for supporting work on AI alignment, biosecurity, and related cause areas.

Structured Data

10 facts · 475 records

Revenue: $34.3 million (as of 2025)
Total Funding Raised: $152 million (as of 2025)

Key People

- Jaan Tallinn (Funder & Initiative Committee Member)

All Facts (10)

Financial

| Property | Value | As Of |
|---|---|---|
| Program Budget | $16 million | 2025 |
| Revenue | $34.3 million | 2025 |
| Revenue | $19.9 million | 2024 |
| Revenue | $31.7 million | 2023 |
| Revenue | $22.4 million | 2022 |
| Revenue | $19.6 million | 2021 |
| Revenue | $5.7 million | 2020 |
| Revenue | $1.9 million | 2019 |
| Market Share | 86% | 2025 |
| Total Funding Raised | $152 million | 2025 |

Divisions (2)

| Name | Type | Status | Start Date |
|---|---|---|---|
| Initiative Committee | team | active | 2024 |
| Survival and Flourishing Fund (Main) | fund | active | |

Funding Programs (4)

| Name | Type | Description | Currency | Status |
|---|---|---|---|---|
| Matching Pledges Program | grant-round | Funders commit to matching outside donations to S-Process recipients at specified ratios (e.g., 2-to-1) | USD | open |
| S-Process Grants | grant-round | SFF's primary grantmaking mechanism, using a simulation-based allocation process where recommenders independently rank applicants | USD | open |
| Speculation Grants | grant-round | Faster-turnaround grants from SFF using novel donor coordination with ~35 grantors | USD | open |
| Initiative Committee Grants | grant-round | Proactive grants made by the Initiative Committee (Jaan Tallinn, SFF Advisors, and anonymous voters) outside the S-Process | USD | open |
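The matching-pledge arithmetic is simple to sketch: at a 2-to-1 ratio, each outside dollar donated to an S-Process recipient triggers two matching dollars, presumably up to whatever cap a pledger sets. The helper below is a hypothetical illustration of that arithmetic; the function name `matched_total` and the cap behavior are assumptions, not SFF's published terms.

```python
# Toy illustration of a matching pledge (ratios and caps are illustrative
# assumptions, not SFF's published pledge terms).

def matched_total(outside_donation, ratio=2.0, match_cap=None):
    """Return (match_amount, total_to_recipient) for an outside donation.

    A ratio of 2.0 models a 2-to-1 match: $1 outside triggers $2 matching.
    An optional match_cap limits the pledger's exposure.
    """
    match = outside_donation * ratio
    if match_cap is not None:
        match = min(match, match_cap)
    return match, outside_donation + match

# A $50k outside donation at 2-to-1 yields $100k matching, $150k total.
match, total = matched_total(50_000, ratio=2.0)
```

A cap simply truncates the match: under this sketch, a $100k donation against a $150k cap would draw the full $150k rather than $200k.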

Grants (468 records; excerpt)

| Name | Date | Amount |
|---|---|---|
| General Support | 2019-07 | $60,000 |
| General Support | 2019-07 | $260,000 |
| General Support | 2019-07 | $280,000 |
| General Support | 2019-07 | $40,000 |
| General Support | 2019-07 | $130,000 |
| General Support | 2019-07 | $110,000 |
| General Support | 2019-10 | $150,000 |
| General Support of the Charter Cities Institute | 2019-10 | $60,000 |
| General Support | 2019-10 | $100,000 |
| General Support | 2019-10 | $30,000 |
| General Support of 80,000 Hours | 2019-10 | $40,000 |
| General Support of Centre for the Study of Existential Risk, University of Cambridge | 2019-10 | $50,000 |

Related Wiki Pages

Top Related Pages

Approaches

Dangerous Capability Evaluations

Analysis

- Anthropic IPO
- Anthropic (Funder)
- Donations List Website

Organizations

- FAR AI
- OpenAI Foundation
- Long-Term Future Fund (LTFF)
- Redwood Research
- Coefficient Giving
- METR

Other

- Max Tegmark
- Vipul Naik

Concepts

- EA Shareholder Diversification from Anthropic
- Funders Overview