Longterm Wiki

Future of Life Institute

Safety Organization
Founded Mar 2014 · futureoflife.org

Comprehensive profile of FLI documenting $25M+ in grants distributed (2015: $7M to 37 projects; 2021: $25M program), major public campaigns (Asilomar Principles with 5,700+ signatories; 2023 Pause Letter with 33,000+ signatories), and the $665.8M Buterin donation (2021). The organization operates primarily as an advocacy and grantmaking institute. Its sister organization, the Future of Life Foundation (FLF), handles incubation of new beneficial-AI organizations; both share leadership under Anthony Aguirre.

Revenue
$21.3 million
as of 2024
Headcount
29
as of 2025

Key Metrics

Revenue (ARR)

$21M (2024)
Revenue (ARR) chart: annual run rate from $550M in 2021 to $21M in 2024.

Headcount

29 (2025)
Headcount chart: employees grew from 11 in 2023 to 29 in 2025.

Facts

Financial
Headcount: 29
Annual Expenses: $17.4 million
Revenue: $21.3 million
Grants Received: $665.8 million
Political
Lobbying Spend: $360,000
General
Website: https://futureoflife.org/
Organization
Founded Date: Mar 2014
Other
Campaign: Pro-Human AI Declaration (March 2026). 5 pillars; 150+ signatory organizations (from the AFL-CIO to the Congress of Christian Leaders); individual signatories include Yoshua Bengio, Daron Acemoglu, Steve Bannon, and Ralph Nader.
Publication: AI Safety Index, published twice yearly (Summer 2025, Winter 2025). Evaluates 7 leading AI companies on 33 indicators across 6 domains. Winter 2025 finding: no company has adequate guardrails against catastrophic misuse.
Subsidiary: Future of Life Action and Research, Inc. (FLARE), the organization's 501(c)(4) advocacy arm
Grants Given: $18.1 million
Legal Identifier: 47-1052538 (EIN)

Other Data

Publications
10 entries
Title | Type | Authors | URL | Published | Venue
Pro-Human AI Declaration | policy-brief | Future of Life Institute | humanstatement.org | 2026-01 |
AI Safety Index Winter 2025 | report | Future of Life Institute | futureoflife.org | 2025-12 | FLI
AI Safety Index: Winter 2025 | report | Future of Life Institute | futureoflife.org | 2025-12 |
Statement on Superintelligence | policy-brief | Future of Life Institute | superintelligence-statement.org | 2025-10 |
AI Safety Index: Summer 2025 | report | Future of Life Institute | futureoflife.org | 2025-06 |
Pause Giant AI Experiments: An Open Letter | policy-brief | Future of Life Institute | futureoflife.org | 2023-03 |
Lethal Autonomous Weapons Pledge | policy-brief | Future of Life Institute | futureoflife.org | 2018-06 |
Asilomar AI Principles | policy-brief | Future of Life Institute | futureoflife.org | 2017-01 |
Autonomous Weapons: AI and Robotics Researchers Open Letter | policy-brief | Future of Life Institute | futureoflife.org | 2016-02 |
Research Priorities for Robust and Beneficial AI: An Open Letter | policy-brief | Future of Life Institute | futureoflife.org | 2015-10 |

Divisions: 7

Prediction Markets: 4 active

Related Wiki Pages

Top Related Pages

Approaches

AI Alignment

Analysis

AI Capability Threshold Model · AI Safety Research Allocation Model

Policy

Safe and Secure Innovation for Frontier Artificial Intelligence Models Act · EU AI Act

Organizations

Future of Life Foundation (FLF) · Metaculus · OpenAI Foundation · Redwood Research

Other

Elon Musk · Anthony Aguirre · Mark Brakel · Emilia Javorsky

Concepts

Situational Awareness · Funders Overview

Key Debates

Technical AI Safety Research · AI Safety Solution Cruxes

Risks

AI-Induced Irreversibility · Autonomous Weapons

Historical

The MIRI Era