Comprehensive profile of FLI documenting $25M+ in grants distributed (2015: $7M to 37 projects; 2021: $25M program), major public campaigns (the Asilomar Principles, 5,700+ signatories; the 2023 Pause Letter, 33,000+ signatories), and the $665.8M Buterin donation (2021). The organization operates primarily as an advocacy and grantmaking institute; its sister organization, the Future of Life Foundation (FLF), handles incubation of new beneficial-AI organizations. Both share leadership under Anthony Aguirre.
Revenue: $21.3 million (as of 2024)
Headcount: 29 (as of 2025)
Facts
Financial
Headcount: 29
Annual Expenses: $17.4 million
Revenue: $21.3 million
Grant Received: $665.8 million
Political
Lobbying Spend: $360,000
General
Website: https://futureoflife.org/
Organization
Founded: March 2014
Other
Campaign: Pro-Human AI Declaration (March 2026): 5 pillars, 150+ signatory organizations (from the AFL-CIO to the Congress of Christian Leaders); individual signatories include Yoshua Bengio, Daron Acemoglu, Steve Bannon, and Ralph Nader.
Publication: AI Safety Index, published biannually (Summer 2025, Winter 2025). Evaluates 7 leading AI companies on 33 indicators across 6 domains. Winter 2025 finding: no company has adequate guardrails for catastrophic misuse.
Subsidiary: Future of Life Action and Research, Inc. (FLARE), the organization's 501(c)(4) advocacy arm.
Approach
- AI Alignment: Comprehensive review of AI alignment approaches finding current methods (RLHF, Constitutional AI) show 75%+ effectiveness on measurable safety metrics for existing systems but face critical scalabi... (Quality: 91/100)
Analysis
- AI Capability Threshold Model: Comprehensive framework mapping AI capabilities across 5 dimensions to specific risk thresholds, finding authentication collapse/mass persuasion risks at 70-85% likelihood by 2027, bioweapons devel... (Quality: 72/100)
- AI Safety Research Allocation Model: Analysis finds AI safety research suffers 30-50% efficiency losses from industry dominance (60-70% of ~$700M annually), with critical areas like multi-agent dynamics and corrigibility receiving 3-5... (Quality: 65/100)
Policy
- Safe and Secure Innovation for Frontier Artificial Intelligence Models Act: California's SB 1047 required safety testing, shutdown capabilities, and third-party audits for AI models exceeding 10^26 FLOP or $100M training cost; it passed the legislature (Assembly 48-16, Sen... (Quality: 66/100)
- EU AI Act: Comprehensive overview of the EU AI Act's risk-based regulatory framework, particularly its two-tier approach to foundation models that distinguishes between standard and systemic risk AI systems. ... (Quality: 55/100)
Organizations
- Future of Life Foundation (FLF): 501(c)(3) nonprofit sister organization to the Future of Life Institute (FLI), focused on incubating new organizations that steer transformative technology toward benefiting life. Founded in 2022 w...
- Metaculus: A reputation-based forecasting platform with 1M+ predictions showing AGI probability at 25% by 2027 and 50% by 2031 (down from 50 years away in 2020). Analysis finds good short-term ca... (Quality: 50/100)
- OpenAI Foundation: The OpenAI Foundation holds 26% equity (~$130B) in OpenAI Group PBC with governance control, but detailed analysis of board member incentives reveals strong bias toward capital preservation over ph... (Quality: 87/100)
- Redwood Research: A nonprofit AI safety and security research organization founded in 2021, known for pioneering AI Control research, developing causal scrubbing interpretability methods, and conducting landmark ali... (Quality: 78/100)
Other
- Elon Musk (Person): Comprehensive profile of Elon Musk's role in AI, documenting his early safety warnings (2014-2017), OpenAI founding and contentious departure, xAI launch and funding history, Neuralink BCI developm... (Quality: 38/100)
- Anthony Aguirre (Person): Physicist and AI safety advocate serving as Executive Director of the Future of Life Institute and President of the Future of Life Foundation. Faggin Presidential Professor for Physics of Informati...
- Mark Brakel (Person): Global Director of Policy at the Future of Life Institute (FLI). Leads FLI's advocacy and policy efforts globally, including the organization's engagement with the EU AI Act and international AI go...
- Emilia Javorsky (Person): Director of the Futures Program at the Future of Life Institute (FLI). A scientist and inventor with extensive experience in biomedical research and technology development. Leads FLI's work on guid...
Concepts
- Situational Awareness (Capability): Comprehensive analysis of situational awareness in AI systems, documenting that Claude 3 Opus fakes alignment 12% baseline (78% post-RL), 5 of 6 frontier models demonstrate scheming capabilities, a... (Quality: 67/100)
- Funders Overview: Overview of major funders supporting AI safety, existential risk reduction, and longtermist causes. These organizations and individuals collectively provide hundreds of millions of dollars annually... (Quality: 3/100)
Key Debates
- Technical AI Safety Research (Crux): Technical AI safety research encompasses six major agendas (mechanistic interpretability, scalable oversight, AI control, evaluations, agent foundations, and robustness) with 500+ researchers and $... (Quality: 66/100)
- AI Safety Solution Cruxes (Crux): A comprehensive structured mapping of AI safety solution uncertainties across technical, alignment, governance, and agentic domains, using probability-weighted crux frameworks with specific estimat... (Quality: 65/100)
Risks
- AI-Induced Irreversibility: Comprehensive analysis of irreversibility in AI development, distinguishing between decisive catastrophic events and accumulative risks through gradual lock-in. Quantifies current trends (60-70% al... (Quality: 64/100)
- Autonomous Weapons: Comprehensive overview of lethal autonomous weapons systems documenting their battlefield deployment (Libya 2020, Ukraine 2022-present) with AI-enabled drones achieving 70-80% hit rates versus 10-2... (Quality: 56/100)
Historical
- The MIRI Era: Comprehensive chronological account of AI safety's institutional emergence (2000-2015), from MIRI's founding through Bostrom's Superintelligence to mainstream recognition. Covers key organizations,... (Quality: 31/100)