Future of Life Institute (FLI)
fli (E528)
Path: /knowledge-base/organizations/fli/
Page Metadata
{
"id": "fli",
"numericId": null,
"path": "/knowledge-base/organizations/fli/",
"filePath": "knowledge-base/organizations/fli.mdx",
"title": "Future of Life Institute (FLI)",
"quality": 46,
"importance": 42,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2026-01-29",
"llmSummary": "Comprehensive profile of FLI documenting $25M+ in grants distributed (2015: $7M to 37 projects, 2021: $25M program), major public campaigns (Asilomar Principles with 5,700+ signatories, 2023 Pause Letter with 33,000+ signatories), and $665.8M Buterin donation (2021). Organization operates primarily through advocacy and grantmaking rather than direct research, with active EU/UN/US policy engagement.",
"structuredSummary": null,
"description": "The Future of Life Institute is a nonprofit organization focused on reducing existential risks from advanced AI and other transformative technologies. Co-founded by Max Tegmark, Jaan Tallinn, Anthony Aguirre, Viktoriya Krakovna, and Meia Chita-Tegmark in March 2014, FLI has distributed over \\$25 million in AI safety research grants (starting with Elon Musk's \\$10M 2015 donation funding 37 projects), organized the 2015 Puerto Rico and 2017 Asilomar conferences that birthed the field of AI alignment and produced the 23 Asilomar Principles (5,700+ signatories), published the 2023 pause letter (33,000+ signatories including Yoshua Bengio and Stuart Russell), produced the viral Slaughterbots films advocating for autonomous weapons regulation, and received a \\$665.8M cryptocurrency donation from Vitalik Buterin in 2021. FLI maintains active policy engagement with the EU (advocating for foundation model regulation in the AI Act), UN (promoting autonomous weapons treaty), and US Congress.",
"ratings": {
"novelty": 2.5,
"rigor": 4,
"actionability": 2,
"completeness": 6.5
},
"category": "organizations",
"subcategory": "funders",
"clusters": [
"community",
"ai-safety",
"governance"
],
"metrics": {
"wordCount": 6079,
"tableCount": 32,
"diagramCount": 2,
"internalLinks": 18,
"externalLinks": 52,
"footnoteCount": 0,
"bulletRatio": 0.15,
"sectionCount": 51,
"hasOverview": true,
"structuralScore": 15
},
"suggestedQuality": 100,
"updateFrequency": 45,
"evergreen": true,
"wordCount": 6079,
"unconvertedLinks": [
{
"text": "futureoflife.org",
"url": "https://futureoflife.org/",
"resourceId": "786a68a91a7d5712",
"resourceTitle": "Future of Life Institute"
},
{
"text": "Future of Life Institute",
"url": "https://futureoflife.org/",
"resourceId": "786a68a91a7d5712",
"resourceTitle": "Future of Life Institute"
},
{
"text": "\"Pause Giant AI Experiments\"",
"url": "https://futureoflife.org/open-letter/pause-giant-ai-experiments/",
"resourceId": "531f55cee64f6509",
"resourceTitle": "FLI open letter"
},
{
"text": "Metaculus",
"url": "https://www.metaculus.com/",
"resourceId": "d99a6d0fb1edc2db",
"resourceTitle": "Metaculus"
},
{
"text": "FLI Official Website",
"url": "https://futureoflife.org/",
"resourceId": "786a68a91a7d5712",
"resourceTitle": "Future of Life Institute"
},
{
"text": "Pause Giant AI Experiments: An Open Letter",
"url": "https://futureoflife.org/open-letter/pause-giant-ai-experiments/",
"resourceId": "531f55cee64f6509",
"resourceTitle": "FLI open letter"
},
{
"text": "Pause Giant AI Experiments - Wikipedia",
"url": "https://en.wikipedia.org/wiki/Pause_Giant_AI_Experiments:_An_Open_Letter",
"resourceId": "4fc41c1e8720f41f",
"resourceTitle": "Pause letter"
},
{
"text": "FLI Website",
"url": "https://futureoflife.org/",
"resourceId": "786a68a91a7d5712",
"resourceTitle": "Future of Life Institute"
},
{
"text": "Pause Giant AI Experiments Letter",
"url": "https://futureoflife.org/open-letter/pause-giant-ai-experiments/",
"resourceId": "531f55cee64f6509",
"resourceTitle": "FLI open letter"
}
],
"unconvertedLinkCount": 9,
"convertedLinkCount": 0,
"backlinkCount": 3,
"redundancy": {
"maxSimilarity": 16,
"similarPages": [
{
"id": "pause-ai",
"title": "Pause AI",
"path": "/knowledge-base/organizations/pause-ai/",
"similarity": 16
},
{
"id": "jaan-tallinn",
"title": "Jaan Tallinn",
"path": "/knowledge-base/people/jaan-tallinn/",
"similarity": 16
},
{
"id": "mainstream-era",
"title": "Mainstream Era (2020-Present)",
"path": "/knowledge-base/history/mainstream-era/",
"similarity": 15
},
{
"id": "ai-futures-project",
"title": "AI Futures Project",
"path": "/knowledge-base/organizations/ai-futures-project/",
"similarity": 15
},
{
"id": "chan-zuckerberg-initiative",
"title": "Chan Zuckerberg Initiative",
"path": "/knowledge-base/organizations/chan-zuckerberg-initiative/",
"similarity": 15
}
]
}
}
Entity Data
{
"id": "fli",
"type": "organization",
"title": "Future of Life Institute (FLI)",
"description": "Comprehensive profile of FLI documenting $25M+ in grants distributed (2015: $7M to 37 projects, 2021: $25M program), major public campaigns (Asilomar Principles with 5,700+ signatories, 2023 Pause Letter with 33,000+ signatories), and $665.8M Buterin donation (2021). Organization operates primarily ",
"tags": [],
"relatedEntries": [],
"sources": [],
"lastUpdated": "2026-02",
"customFields": []
}
Canonical Facts (0)
No facts for this entity
External Links
No external links
Backlinks (3)
| id | title | type | relationship |
|---|---|---|---|
| max-tegmark | Max Tegmark | researcher | — |
| pause-moratorium | Pause / Moratorium | policy | — |
| pause | Pause Advocacy | approach | — |
Frontmatter
{
"title": "Future of Life Institute (FLI)",
"description": "The Future of Life Institute is a nonprofit organization focused on reducing existential risks from advanced AI and other transformative technologies. Co-founded by Max Tegmark, Jaan Tallinn, Anthony Aguirre, Viktoriya Krakovna, and Meia Chita-Tegmark in March 2014, FLI has distributed over \\$25 million in AI safety research grants (starting with Elon Musk's \\$10M 2015 donation funding 37 projects), organized the 2015 Puerto Rico and 2017 Asilomar conferences that birthed the field of AI alignment and produced the 23 Asilomar Principles (5,700+ signatories), published the 2023 pause letter (33,000+ signatories including Yoshua Bengio and Stuart Russell), produced the viral Slaughterbots films advocating for autonomous weapons regulation, and received a \\$665.8M cryptocurrency donation from Vitalik Buterin in 2021. FLI maintains active policy engagement with the EU (advocating for foundation model regulation in the AI Act), UN (promoting autonomous weapons treaty), and US Congress.",
"sidebar": {
"order": 7
},
"quality": 46,
"llmSummary": "Comprehensive profile of FLI documenting $25M+ in grants distributed (2015: $7M to 37 projects, 2021: $25M program), major public campaigns (Asilomar Principles with 5,700+ signatories, 2023 Pause Letter with 33,000+ signatories), and $665.8M Buterin donation (2021). Organization operates primarily through advocacy and grantmaking rather than direct research, with active EU/UN/US policy engagement.",
"lastEdited": "2026-01-29",
"importance": 42,
"update_frequency": 45,
"ratings": {
"novelty": 2.5,
"rigor": 4,
"actionability": 2,
"completeness": 6.5
},
"clusters": [
"community",
"ai-safety",
"governance"
],
"subcategory": "funders",
"entityType": "organization"
}
Raw MDX Source
---
title: Future of Life Institute (FLI)
description: The Future of Life Institute is a nonprofit organization focused on reducing existential risks from advanced AI and other transformative technologies. Co-founded by Max Tegmark, Jaan Tallinn, Anthony Aguirre, Viktoriya Krakovna, and Meia Chita-Tegmark in March 2014, FLI has distributed over \$25 million in AI safety research grants (starting with Elon Musk's \$10M 2015 donation funding 37 projects), organized the 2015 Puerto Rico and 2017 Asilomar conferences that birthed the field of AI alignment and produced the 23 Asilomar Principles (5,700+ signatories), published the 2023 pause letter (33,000+ signatories including Yoshua Bengio and Stuart Russell), produced the viral Slaughterbots films advocating for autonomous weapons regulation, and received a \$665.8M cryptocurrency donation from Vitalik Buterin in 2021. FLI maintains active policy engagement with the EU (advocating for foundation model regulation in the AI Act), UN (promoting autonomous weapons treaty), and US Congress.
sidebar:
order: 7
quality: 46
llmSummary: "Comprehensive profile of FLI documenting $25M+ in grants distributed (2015: $7M to 37 projects, 2021: $25M program), major public campaigns (Asilomar Principles with 5,700+ signatories, 2023 Pause Letter with 33,000+ signatories), and $665.8M Buterin donation (2021). Organization operates primarily through advocacy and grantmaking rather than direct research, with active EU/UN/US policy engagement."
lastEdited: "2026-01-29"
importance: 42
update_frequency: 45
ratings:
novelty: 2.5
rigor: 4
actionability: 2
completeness: 6.5
clusters:
- community
- ai-safety
- governance
subcategory: funders
entityType: organization
---
import {DataInfoBox, Mermaid, R, EntityLink} from '@components/wiki';
## Quick Assessment
| Dimension | Assessment | Evidence |
|-----------|------------|----------|
| **Focus** | AI Safety Advocacy + Grantmaking | Dual approach: public campaigns and research funding |
| **Grant Scale** | \$25M+ distributed | 2015: \$7M to 37 projects; 2021: \$25M program from Buterin donation |
| **Public Profile** | Very High | Asilomar Principles (5,700+ signatories), Pause Letter (33,000+ signatories) |
| **Approach** | Policy + Research + Advocacy | <EntityLink id="E127">EU AI Act</EntityLink> engagement, UN <EntityLink id="E35">autonomous weapons</EntityLink>, Slaughterbots films |
| **Location** | Boston, MA (global staff of 20+) | Policy teams in US and EU |
| **Major Funding** | \$665.8M (2021 Buterin), \$10M (2015 Musk) | Endowment from cryptocurrency donation |
| **Key Conferences** | Puerto Rico 2015, Asilomar 2017 | Considered birthplace of <EntityLink id="E439">AI alignment</EntityLink> field |
## Organization Details
| Attribute | Details |
|-----------|---------|
| **Full Name** | Future of Life Institute |
| **Type** | 501(c)(3) Nonprofit |
| **EIN** | 47-1052538 |
| **Founded** | March 2014 |
| **Launch Event** | May 24, 2014 at MIT (auditorium 10-250) |
| **Founders** | <EntityLink id="E433">Max Tegmark</EntityLink> (President), <EntityLink id="E577">Jaan Tallinn</EntityLink>, Anthony Aguirre (Executive Director), Viktoriya Krakovna, Meia Chita-Tegmark |
| **Location** | Boston, Massachusetts (headquarters); global remote staff |
| **Staff Size** | 20+ full-time team members |
| **Teams** | Policy, Outreach, Grantmaking |
| **Website** | [futureoflife.org](https://futureoflife.org/) |
| **Related Sites** | [autonomousweapons.org](https://autonomousweapons.org/), [autonomousweaponswatch.org](https://autonomousweaponswatch.org/) |
| **Research Grants** | \$25M+ distributed across multiple rounds |
| **EU Advocacy Budget** | €446,619 annually |
## Overview
The [Future of Life Institute](https://futureoflife.org/) (FLI) is a nonprofit organization dedicated to reducing existential risks from advanced technologies, with a particular focus on artificial intelligence. Founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, UC Santa Cruz physicist Anthony Aguirre, Viktoriya Krakovna (then a PhD student, now a DeepMind research scientist), and Tufts researcher Meia Chita-Tegmark, FLI has become one of the most publicly visible organizations in the AI safety space. The organization officially launched on May 24, 2014, at MIT's auditorium 10-250 with a panel discussion on "The Future of Technology: Benefits and Risks," moderated by Alan Alda and featuring panelists including Nobel laureate Frank Wilczek, synthetic biologist George Church, and Jaan Tallinn.
Unlike research-focused organizations like <EntityLink id="E202">MIRI</EntityLink> or <EntityLink id="E557">Redwood Research</EntityLink>, FLI emphasizes public advocacy, policy engagement, and awareness-raising alongside its grantmaking. This tripartite approach—combining direct research funding, high-profile public campaigns, and government engagement—has made FLI particularly effective at shaping public discourse around AI risk. The organization's 2015 Puerto Rico conference is sometimes described as the "birthplace of the field of AI alignment," bringing together leading AI researchers to discuss safety concerns that had previously been marginalized in academic circles. The subsequent 2017 Asilomar conference produced the [23 Asilomar AI Principles](https://futureoflife.org/open-letter/ai-principles/), one of the earliest and most influential sets of <EntityLink id="E608">AI governance</EntityLink> principles.
FLI's major initiatives have helped establish AI safety as a mainstream concern rather than a fringe topic. The 2023 "Pause Giant AI Experiments" open letter garnered over 33,000 signatures and generated massive media coverage, even though the requested pause was not implemented by AI labs. The organization has also been influential in autonomous weapons policy, producing the viral [Slaughterbots](https://en.wikipedia.org/wiki/Slaughterbots) video series and advocating for international regulation at the United Nations. FLI received a transformative \$665.8 million cryptocurrency donation from Ethereum co-founder Vitalik Buterin in 2021, which has been partially converted to an endowment ensuring long-term organizational independence.
## Founding and Early History
The Future of Life Institute emerged from concerns about existential risks that had been growing among a network of physicists, AI researchers, and technology entrepreneurs. Max Tegmark, an MIT cosmologist who had become increasingly concerned about AI safety after reading <EntityLink id="E215">Nick Bostrom</EntityLink>'s work, connected with Jaan Tallinn, who had been funding existential risk research through organizations like MIRI and the Cambridge Centre for the Study of Existential Risk (CSER). Together with Anthony Aguirre (co-founder of the Foundational Questions Institute and later <EntityLink id="E199">Metaculus</EntityLink>), Viktoriya Krakovna (then a PhD student, now at DeepMind), and Meia Chita-Tegmark, they formally established FLI in March 2014.
The founding team recognized a gap in the existential risk ecosystem: while organizations like MIRI focused on <EntityLink id="E297">technical AI safety research</EntityLink> and CSER on academic study, there was no organization specifically dedicated to public engagement, policy advocacy, and convening stakeholders across academia, industry, and government. FLI was designed to fill this gap, with a mission to "steer transformative technology towards benefiting life and away from large-scale risks."
| Milestone | Date | Significance |
|-----------|------|--------------|
| FLI Founded | March 2014 | Organization formally established |
| MIT Launch Event | May 24, 2014 | Public launch with Alan Alda moderating; panelists included George Church, Frank Wilczek, Jaan Tallinn |
| Research Priorities Open Letter | January 2015 | First major public initiative; signed by Stephen Hawking, <EntityLink id="E116">Elon Musk</EntityLink>, and leading AI researchers |
| Puerto Rico Conference | January 2-5, 2015 | "The Future of AI: Opportunities and Challenges"; considered birthplace of AI alignment field |
| Musk Donation Announced | January 2015 | \$10M commitment to fund AI safety research |
| First Grants Announced | July 1, 2015 | \$7M awarded to 37 research projects |
| Asilomar Conference | January 5-8, 2017 | Produced 23 Asilomar Principles; 100+ attendees |
| Slaughterbots Video | November 13, 2017 | 2M+ views within weeks; screened at UN |
| Buterin Donation | 2021 | \$665.8M cryptocurrency donation |
| Pause Letter | March 2023 | 33,000+ signatures; massive media coverage |
## Key Initiatives
<Mermaid chart={`
flowchart TD
FLI[Future of Life Institute] --> GRANTS[Research Grants<br/>\$25M+]
FLI --> ADVOCACY[Public Advocacy]
FLI --> POLICY[Policy Work]
FLI --> EVENTS[Conferences]
GRANTS --> G2015[2015 Program<br/>\$7M / 37 projects]
GRANTS --> G2021[2021 Program<br/>\$25M Buterin]
ADVOCACY --> LETTERS[Open Letters]
ADVOCACY --> MEDIA[Slaughterbots Films]
LETTERS --> ASILOMAR[Asilomar Principles<br/>5,700+ signatories]
LETTERS --> PAUSE[Pause Letter 2023<br/>33,000+ signatories]
POLICY --> EU[EU AI Act<br/>Foundation models]
POLICY --> UN[UN Autonomous<br/>Weapons Treaty]
POLICY --> US[US Congress<br/>Testimony]
EVENTS --> PR15[Puerto Rico 2015<br/>Birth of AI alignment]
EVENTS --> AS17[Asilomar 2017<br/>23 Principles]
style FLI fill:#e6f3ff
style GRANTS fill:#ccffcc
style ADVOCACY fill:#ffffcc
style POLICY fill:#ffcccc
style EVENTS fill:#e6ccff
`} />
### Research Grants Program
FLI established the world's first peer-reviewed grant program specifically aimed at AI safety research. The program began following the January 2015 Puerto Rico conference, when Elon Musk announced a \$10 million donation to support "a global research program aimed at keeping AI beneficial to humanity."
**2015 Grant Program**: FLI issued a Request for Proposals (RFP) in early 2015, receiving nearly 300 applications from research teams worldwide. The RFP sought proposals in two categories: "project grants" (typically \$100,000-\$500,000 over 2-3 years) for research by small teams or individuals, and "center grants" (\$500,000-\$1,500,000) for establishing new research centers. On July 1, 2015, FLI announced \$7 million in awards to 37 research projects. <EntityLink id="E521">Coefficient Giving</EntityLink> (then Open Philanthropy) supplemented this with \$1.186 million after determining that there were more high-quality proposals than the initial funding could support.
| Grant Round | Amount | Projects | Source | Focus Areas |
|-------------|--------|----------|--------|-------------|
| **2015 Round** | \$7M | 37 | Elon Musk (\$10M donation) | Technical AI safety, value alignment, economics, policy, autonomous weapons |
| **Coefficient Giving Supplement** | \$1.186M | Additional projects | Coefficient Giving (then Open Philanthropy) | High-quality proposals exceeding initial funding |
| **2021 Program** | \$25M | Multiple | Vitalik Buterin donation | Expanded AI safety and governance research |
| **2023 Grants** | Various | Multiple | Ongoing | PhD fellowships, technical research |
**2015 Grant Recipients** (selected examples):
| Recipient | Institution | Amount | Project Focus |
|-----------|-------------|--------|---------------|
| **Nick Bostrom** | FHI Oxford | \$1.5M | Strategic Research Center for AI (geopolitical challenges) |
| **Stuart Russell** | UC Berkeley | ≈\$500K | Value alignment and inverse reinforcement learning |
| **MIRI** | Machine Intelligence Research Institute | \$299,310 | Long-term AI safety research (\$250K over 3 years) |
| **Owain Evans** | FHI (collaboration with MIRI) | \$227,212 | Algorithms learning human preferences despite irrationalities |
| **Manuela Veloso** | Carnegie Mellon | ≈\$200K | Explainable AI systems |
| **Paul Christiano** | UC Berkeley | ≈\$150K | Value learning approaches |
| **Ramana Kumar** | Cambridge (collaboration with MIRI) | \$36,750 | Self-reference in HOL theorem prover |
| **Michael Webb** | Stanford | ≈\$100K | Economic impacts of AI |
| **Heather Roff** | Various | ≈\$100K | Meaningful human control of autonomous weapons |
The funded projects spanned technical AI safety (ensuring advanced AI systems align with human values), economic analysis (managing AI's labor market impacts), policy research (autonomous weapons governance), and philosophical foundations (clarifying concepts of agency and liability for autonomous systems).
### Puerto Rico Conference (2015)
The [Puerto Rico AI Safety Conference](https://futureoflife.org/event/ai-safety-conference-in-puerto-rico/) (officially "The Future of AI: Opportunities and Challenges") was held January 2-5, 2015, in San Juan. This conference is sometimes described as the "birthplace of the field of AI alignment," as it brought together the world's leading AI builders from academia and industry to engage with experts in economics, law, and ethics on AI safety for the first time at scale.
| Aspect | Details |
|--------|---------|
| **Dates** | January 2-5, 2015 |
| **Location** | San Juan, Puerto Rico |
| **Attendees** | ≈40 leading AI researchers and thought leaders |
| **Outcome** | Research Priorities Open Letter; Elon Musk \$10M donation announcement |
| **Significance** | First major convening of AI safety concerns with mainstream AI researchers |
**Notable Attendees**:
- **AI Researchers**: Stuart Russell (Berkeley), Thomas Dietterich (AAAI President), Francesca Rossi (IJCAI President), Bart Selman (Cornell), Tom Mitchell (CMU), Murray Shanahan (Imperial College)
- **Industry**: Representatives from Google DeepMind, Vicarious
- **Existential Risk Organizations**: FHI, CSER, MIRI representatives
- **Technology Leaders**: Elon Musk, Vernor Vinge
The conference produced an open letter on AI safety that was subsequently signed by Stephen Hawking, Elon Musk, and many leading AI researchers. Following the conference, Musk announced his \$10 million donation to fund FLI's research grants program.
### Asilomar Conference and AI Principles (2017)
The [Beneficial AI 2017 conference](https://futureoflife.org/event/bai-2017/), held January 5-8, 2017, at the Asilomar Conference Grounds in California, was a sequel to the 2015 Puerto Rico conference. More than 100 thought leaders and researchers in AI, economics, law, ethics, and philosophy met to address and formulate principles for beneficial AI development. The conference was not open to the public, with attendance curated to include influential figures who could shape the field's direction.
| Aspect | Details |
|--------|---------|
| **Dates** | January 5-8, 2017 |
| **Location** | Asilomar Conference Center, Pacific Grove, California |
| **Attendees** | 100+ AI researchers, industry leaders, philosophers |
| **Outcome** | 23 Asilomar AI Principles published January 30, 2017 |
| **Signatories** | 1,797 AI/robotics researchers + 3,923 others (5,700+ total) |
**Notable Participants**:
| Category | Participants |
|----------|--------------|
| **AI Researchers** | Stuart Russell (Berkeley), Bart Selman (Cornell), Yoshua Bengio (Montreal), Ilya Sutskever (OpenAI), Yann LeCun (Facebook), Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Viktoriya Krakovna (DeepMind/FLI), Stefano Ermon (Stanford) |
| **Industry Leaders** | Elon Musk (Tesla/SpaceX), Demis Hassabis (DeepMind CEO), Ray Kurzweil (Google) |
| **Philosophers & Authors** | Nick Bostrom (FHI), David Chalmers (NYU), Sam Harris |
| **FLI Leadership** | Jaan Tallinn, Max Tegmark, Richard Mallah |
**The 23 Asilomar AI Principles** are organized into three categories:
**Research Issues (5 principles)**:
1. Research Goal: Create beneficial, not undirected intelligence
2. Research Funding: Include safety research alongside capability research
3. Science-Policy Link: Constructive exchange between researchers and policymakers
4. Research Culture: Foster cooperation, trust, and transparency
5. Race Avoidance: Avoid corner-cutting on safety for competitive advantage
**Ethics and Values (13 principles)**:
6. Safety: AI systems should be safe and secure
7. Failure Transparency: Capability to determine causes of harm
8. Judicial Transparency: Explanations for legal decisions
9. Responsibility: Designers and builders are stakeholders in implications
10. Value Alignment: AI goals should align with human values
11. Human Values: Designed to be compatible with human dignity, rights, freedoms
12. Personal Privacy: Control over data access for AI systems
13. Liberty and Privacy: AI should not unreasonably curtail liberty
14. Shared Benefit: Benefits should be broadly distributed
15. Shared Prosperity: Economic prosperity should be broadly shared
16. Human Control: Humans should choose how to delegate decisions
17. Non-subversion: Power from AI should respect social processes
18. AI Arms Race: Lethal autonomous weapons race should be avoided
**Longer-term Issues (5 principles)**:
19. Capability Caution: Avoid strong assumptions about upper limits
20. Importance: Advanced AI could be profound change; plan accordingly
21. Risks: Catastrophic or existential risks require commensurate effort
22. Recursive Self-Improvement: Subject to strict safety and control
23. Common Good: Superintelligence should benefit all humanity
**Legacy and Influence**: The Asilomar Principles have been cited in policy discussions worldwide. Key themes (human-centric AI, transparency, robustness) appear in later legislation including the EU AI Act. Notable signatories included Stephen Hawking, Elon Musk, Anthony D. Romero (ACLU Executive Director), Demis Hassabis, Ilya Sutskever, Yann LeCun, Yoshua Bengio, and Stuart Russell.
### "Pause Giant AI Experiments" Letter (2023)
The open letter ["Pause Giant AI Experiments"](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) was published by FLI on March 22, 2023—one week after OpenAI released GPT-4. The letter called for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," citing concerns about AI-generated propaganda, extreme automation of jobs, human obsolescence, and society-wide loss of control. The timing was strategic: GPT-4 demonstrated capabilities that surprised even AI researchers, and public attention to AI risk was at an all-time high.
| Aspect | Details |
|--------|---------|
| **Published** | March 22, 2023 (one week after GPT-4 release) |
| **Signatories** | 33,000+ total |
| **Notable Signatories** | Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari, Gary Marcus |
| **Request** | 6-month pause on training AI systems more powerful than GPT-4 |
| **Media Coverage** | Extensive worldwide coverage; US Senate hearing cited the letter |
**Key Arguments in the Letter**:
1. Contemporary AI systems are becoming "human-competitive at general tasks"
2. AI labs are locked in an "out-of-control race" that "no one—not even their creators—can understand, predict, or reliably control"
3. Profound risks to society including "flooding our information channels with propaganda and untruth," "automating away all jobs," and "loss of control of our civilization"
4. The pause should be used to develop "shared safety protocols" verified by independent experts
**Reactions and Criticism**:
| Critic/Supporter | Position | Argument |
|------------------|----------|----------|
| **Timnit Gebru, Emily Bender, Margaret Mitchell** | Critical | Letter is "sensationalist," amplifies "dystopian sci-fi scenario" while ignoring current algorithmic harms |
| **Bill Gates** | Did not sign | "Asking one particular group to pause doesn't solve the challenges" |
| **Sam Altman (OpenAI CEO)** | Critical | Letter is "missing most technical nuance"; OpenAI was not training GPT-5 as claimed in early drafts |
| **Reid Hoffman** | Critical | Called it "virtue signalling" with no real impact |
| **Eliezer Yudkowsky** | Critical (from other direction) | Wrote in Time: "shut it all down"—letter doesn't go far enough |
| **European Parliament** | Engaged | Issued formal response; EU policymakers cited letter in AI Act discussions |
| **US Senate** | Engaged | Hearing on AI safety cited the letter |
**Actual Outcomes**: The requested pause was not implemented. As FLI noted on the letter's one-year anniversary, AI companies instead "directed vast investments in infrastructure to train ever-more giant AI systems." However, FLI's policy director Mark Brakel noted that the response exceeded expectations: "The reaction has been intense. We feel that it has given voice to a huge undercurrent of concern about the risks of high-powered AI systems not just at the public level, but top researchers in AI and other topics, business leaders, and policymakers."
The letter did contribute to a significant shift in public discourse. AI safety became a mainstream media topic, government inquiries accelerated, and phrases like "existential risk from AI" entered common vocabulary. Whether this attention will translate to effective governance remains contested.
### Autonomous Weapons Advocacy: Slaughterbots
Beyond AI safety, FLI has been a leading advocate for international regulation of lethal autonomous weapons systems (LAWS). Their most visible campaign is the [Slaughterbots](https://en.wikipedia.org/wiki/Slaughterbots) video series, produced in collaboration with Stuart Russell.
**Slaughterbots (2017)**: Released November 13, 2017, this arms-control advocacy video presents a dramatized near-future scenario where swarms of inexpensive microdrones use facial recognition and AI to assassinate political opponents. The script was written by Stuart Russell; production was funded by FLI. According to Russell: "What we were trying to show was the property of autonomous weapons to turn into weapons of mass destruction automatically because you can launch as many as you want."
| Video | Release Date | Views / Notes | Key Message |
|-------|--------------|-------|-------------|
| **Slaughterbots** | November 13, 2017 | 2M+ within weeks | Microdrones as WMDs; need for regulation |
| **if human: kill()** | November 30, 2021 | Sequel | Depicts failed ban, technical errors, eventual treaty |
| **Artificial Escalation** | 2022 | Ongoing series | AI in nuclear command and control |
**UN Engagement**: FLI representatives regularly attend UN Convention on Certain Conventional Weapons (CCW) meetings in Geneva. FLI's Anna Hehir has spoken at these forums about the "proliferation and escalation risks of autonomous weapons," arguing these weapons are "unpredictable, unreliable, and unexplainable."
**Related Resources**: FLI operates [autonomousweapons.org](https://autonomousweapons.org/) (case for regulation) and [autonomousweaponswatch.org](https://autonomousweaponswatch.org/) (database of weapons systems with concerning autonomy levels developed globally).
## Founders and Leadership
### Max Tegmark (President)
| Aspect | Details |
|--------|---------|
| **Role** | Co-founder, President |
| **Background** | MIT Professor of Physics (cosmology specialty) |
| **Education** | PhD Physics, UC Berkeley (1994); BA Physics & Economics, Stockholm School of Economics (1990) |
| **Books** | *Life 3.0: Being Human in the Age of Artificial Intelligence* (2017), *Our Mathematical Universe* (2014) |
| **Media** | Web Summit 2024 (Lisbon), numerous science documentaries, TED talks |
| **Research** | Cosmology, foundations of physics, consciousness, AI safety |
Tegmark is the most public face of FLI, frequently appearing in media to discuss AI risks. His 2017 book *Life 3.0* was widely read in technology circles and helped popularize concepts like "AI alignment" to general audiences. Tegmark has testified before the European Parliament on AI regulation and regularly engages with policymakers.
### Anthony Aguirre (Executive Director)
| Aspect | Details |
|--------|---------|
| **Role** | Co-founder, Executive Director |
| **Background** | Faggin Presidential Professor for the Physics of Information, UC Santa Cruz |
| **Education** | PhD Astronomy, Harvard University (2000) |
| **Other Roles** | Co-founder, Foundational Questions Institute (FQXi, 2006); Co-founder, [Metaculus](https://www.metaculus.com/) (2015) |
| **Writings** | *Cosmological Koans* (2019); *Keep The Future Human* (essay, March 2025) |
| **Research** | Theoretical cosmology, gravitation, statistical mechanics, AI governance |
Aguirre has shifted FLI's focus toward more direct policy engagement in recent years. His March 2025 essay *Keep The Future Human: Why and How We Should Close the Gates to AGI and Superintelligence* proposes an international regulatory scheme for AI. He has appeared on the AXRP podcast discussing FLI's strategy and the organization's evolution from academic grantmaking to policy advocacy.
### Jaan Tallinn (Co-founder, Board Member)
| Aspect | Details |
|--------|---------|
| **Role** | Co-founder, Board Member |
| **Background** | Founding engineer of Skype and Kazaa |
| **Philanthropy** | Founder, <EntityLink id="E567">Survival and Flourishing Fund</EntityLink>; co-founder, Cambridge Centre for the Study of Existential Risk (CSER) |
| **Estimated Giving** | \$100M+ to existential risk organizations |
| **Focus** | AI safety funding, existential risk ecosystem building |
Tallinn is one of the largest individual funders of existential risk research globally. His network of organizations (SFF, CSER, FLI) forms a significant portion of the AI safety funding landscape. He participated as a panelist at both the 2015 Puerto Rico and 2017 Asilomar conferences.
### Other Founders and Key Staff
| Person | Role | Background |
|--------|------|------------|
| **Viktoriya Krakovna** | Co-founder | Research scientist at DeepMind; AI safety research (specification gaming, impact measures) |
| **Meia Chita-Tegmark** | Co-founder | Previously at Tufts University; organizer and researcher |
| **Risto Uuk** | Head of EU Policy and Research | Leads FLI's EU AI policy work, including AI Act engagement |
| **Mark Brakel** | Director of Policy | Led response to pause letter; government relations |
| **Anna Hehir** | Policy (Autonomous Weapons) | UN Geneva CCW representative |
| **Emilia Javorsky** | Policy | Vienna Autonomous Weapons Conference 2025 representative |
**Staff Structure**: FLI has grown to 20+ full-time staff members globally, primarily organized into Policy, Outreach, and Grantmaking teams. Staff backgrounds span machine learning, medicine, government, and industry.
## Funding and Financials
FLI's funding history includes several transformative donations that have shaped the organization's trajectory and independence.
### Major Donors
| Donor | Amount | Year | Purpose |
|-------|--------|------|---------|
| **Vitalik Buterin** | \$665.8M (cryptocurrency) | 2021 | Largest donation; partial endowment, grantmaking |
| **Elon Musk** | \$10M | 2015 | First AI safety research grants program |
| **<EntityLink id="E521">Coefficient Giving</EntityLink>** | \$1.9M total | Various | Supplemental grant funding, operational support |
| **Survival and Flourishing Fund** | \$500K | Various | Operational support |
| **Jaan Tallinn** | Ongoing | 2014-present | Founding support, strategic direction |
### The Buterin Donation
In 2021, Ethereum co-founder Vitalik Buterin donated \$665.8 million in cryptocurrency to FLI—the largest single donation in the organization's history and one of the largest cryptocurrency donations to any nonprofit. The donation was "large and unconditional," with FLI converting a significant portion to an endowment to ensure long-term organizational independence. FLI did not publicly acknowledge Buterin as its "largest donor by far" until May 2023, when the organization updated its finances page.
The donation has been used for:
- **Endowment**: Long-term organizational sustainability
- **2021 Grant Program**: \$25 million announced for AI safety research
- **Operational Deficit Coverage**: FLI's 2023 income was only \$624,714; the Buterin endowment covers operating shortfalls
- **Asset Transfers**: Between December 11-30, 2022, FLI transferred \$368 million to three related entities governed by the same four people (Max Tegmark, Meia Chita-Tegmark, Anthony Aguirre, Jaan Tallinn)
### Financial Overview
| Metric | Value | Notes |
|--------|-------|-------|
| **2023 Income** | \$624,714 | \$600K of which came from a single individual donor |
| **2024 Income** | €83,241 | Limited fundraising year |
| **EU Advocacy Spending** | €446,619/year | Includes staff and Dentons Global Advisors |
| **Total Grants Distributed** | \$25M+ | Across all grant programs |
| **Grant Size Range** | \$22,000 - \$1.5M | Historical range |
| **Donations Received** | 1,500+ | "Various sizes from wide variety of donors" since founding |
### Institutional Funders
| Funder | Amount | Purpose |
|--------|--------|---------|
| **<EntityLink id="E521">Coefficient Giving</EntityLink>** | \$1.186M (2015) | Supplement to Musk grants (high-quality proposals exceeded funding) |
| **<EntityLink id="E521">Coefficient Giving</EntityLink>** | Additional grants | Various operational support |
| **Survival and Flourishing Fund** | \$500K | Operational support |
## Policy Work and Government Engagement
FLI maintains active policy engagement across multiple jurisdictions, with dedicated staff for EU, UN, and US advocacy.
### European Union
FLI's EU work focuses on two priorities: (1) promoting beneficial AI development and (2) regulating lethal autonomous weapons. Their most significant achievement was advocating for the inclusion of foundation models (general-purpose AI systems) in the scope of the EU AI Act.
| Initiative | Status | FLI Role |
|------------|--------|----------|
| **EU AI Act (Foundation Models)** | Adopted | Successfully pushed for inclusion of general-purpose systems; advocated for adoption |
| **Definition of Manipulation** | Ongoing | Recommending broader definition to include any manipulatory technique and societal harm |
| **Autonomous Weapons Treaty** | Advocacy | Encouraging EU member states to support international treaty |
**EU Advocacy Details**:
- **Budget**: €446,619 annually (includes staff salaries and Dentons Global Advisors consulting)
- **Lead**: Risto Uuk (Head of EU Policy and Research)
- **Key Achievement**: Foundation models included in AI Act scope
### United Nations
FLI advocates at the UN for a legally binding international instrument on autonomous weapons and a new international agency to govern AI.
| Activity | Forum | Outcome |
|----------|-------|---------|
| **Autonomous Weapons Treaty** | CCW (Convention on Certain Conventional Weapons) Geneva | Ongoing advocacy; FLI agrees with ICRC recommendation for legally binding rules |
| **2018 Letter on Lethal Autonomous Weapons** | Global | FLI drafted letter calling for laws against lethal autonomous weapons |
| **Digital Cooperation Roadmap** | UN Secretary-General | FLI (with France and Finland) served as civil society champion; Recommendation 3C on AI governance was adopted |
| **Slaughterbots Screening** | UN CCW | 2017 video shown to delegates |
### United States
| Activity | Details |
|----------|---------|
| **Congressional Testimony** | Max Tegmark and others have testified before Congress on AI risk |
| **Senate Hearings** | 2023 pause letter cited in AI safety hearings |
| **Policy Research** | Analysis supporting US AI governance frameworks |
### Public Education and Outreach
| Medium | Activities |
|--------|------------|
| **Podcasts** | Interviews with researchers, policymakers; AXRP appearance by Anthony Aguirre |
| **Articles and Reports** | Explainers on AI risk, policy analysis, technical summaries |
| **Videos** | Slaughterbots series, educational content on AI safety |
| **Websites** | futureoflife.org, autonomousweapons.org, autonomousweaponswatch.org |
| **Newsletters** | Regular updates on AI safety and policy developments |
| **Social Media** | Ongoing communication; significant following |
| **Conferences** | Web Summit 2024 (Tegmark), Vienna Autonomous Weapons Conference 2025 (Javorsky) |
### Research and Fellowship Programs
| Program | Description |
|---------|-------------|
| **AI Safety Grants** | Direct research funding (see grants section) |
| **PhD Fellowships** | Technical AI safety research; a US-China AI Governance fellowship launched in 2024 |
| **Convening** | Conferences bringing together researchers, industry, and policymakers |
| **Publications** | Policy papers, technical research support |
#### Future of Life Foundation (FLF)
The Future of Life Foundation is a related organization sharing leadership with FLI (Anthony Aguirre serves as President of FLF and Executive Director of FLI). While FLI focuses primarily on advocacy and grantmaking, FLF operates more as an incubator for beneficial AI applications.
In 2025, FLF launched the <EntityLink id="E585">AI for Human Reasoning Fellowship</EntityLink>—a 12-week program that brought together 30 fellows to develop AI tools for coordination, epistemics, and collective decision-making. Fellows received \$25K-\$50K stipends to build prototypes including Community Notes AI, Deliberation Markets, Polis 2.0, and various forecasting tools.
## Controversies and Criticisms
FLI has faced significant criticism from multiple directions, reflecting tensions within the AI ethics and safety communities.
### Pause Letter Criticism (2023)
The 2023 pause letter was criticized from both within and outside the AI safety community:
| Critic | Affiliation | Criticism |
|--------|-------------|-----------|
| **Timnit Gebru** | DAIR, former Google | "Sensationalist"; amplifies "dystopian sci-fi scenario" while ignoring current algorithmic harms |
| **Emily Bender** | University of Washington | Co-author of "On the Dangers of Stochastic Parrots"; letter ignores real present-day harms |
| **Margaret Mitchell** | Former Google AI Ethics | "Letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don't have." |
| **Bill Gates** | Microsoft | "Asking one particular group to pause doesn't solve the challenges" |
| **Sam Altman** | OpenAI CEO | "Missing most technical nuance about where we need the pause"; disputed claims about GPT-5 training |
| **Reid Hoffman** | LinkedIn/Microsoft | "Virtue signalling" with no real impact |
| **Eliezer Yudkowsky** | MIRI | *Time* essay: "Shut it all down"—letter doesn't go far enough; requested moratorium is insufficient |
### Near-Term vs. Long-Term AI Risk Debate
Critics argue that FLI's focus on long-term existential risk from hypothetical superintelligent AI distracts from immediate harms:
| Argument | Source | FLI Position |
|----------|--------|--------------|
| "Long-term AI risk arguments are speculative and downplay near-term harms" | AI ethics researchers (Gebru, Bender, Mitchell) | Both near-term and long-term risks deserve attention |
| "Provoking fear of AI serves tech billionaires who fund these groups" | Critics of effective altruism | FLI maintains editorial independence despite funding sources |
| "Current discrimination and job loss are more urgent than speculative superintelligence" | Labor and civil rights advocates | AI safety research addresses both capability and deployment risks |
### TESCREALism Accusations
Philosopher Émile Torres has accused FLI of embracing "TESCREALism"—the ideology of re-engineering humanity through AI for immortality, space colonization, and post-human civilization. Torres argues that while some TESCREALists support unregulated AI development, FLI "embraces the goal but is alarmed by what can go wrong along the way." FLI has not directly responded to these characterizations.
### Controversial Grant Proposal (Nya Dagbladet Foundation)
In 2022, FLI faced controversy over a potential grant to the Nya Dagbladet Foundation (NDF):
| Timeline | Event |
|----------|-------|
| Initial review | FLI was "initially positive" about NDF proposal |
| Due diligence | FLI's process "uncovered information indicating that NDF was not aligned with FLI's values or charitable purposes" |
| November 2022 | FLI informed NDF they would not proceed with a grant |
| December 15, 2022 | Swedish media contacted FLI describing Nya Dagbladet as a "far-right extremist group" |
| Outcome | FLI [issued public statement](https://futureoflife.org/recent-news/statement-on-a-controversial-rejected-grant-proposal/); zero funding was given to NDF |
### Elon Musk Association
| Issue | Context | FLI Response |
|-------|---------|--------------|
| **Initial Funding** | \$10M grant from Musk (2015) | Donation was earmarked for research grants; FLI has received 1,500+ donations since |
| **Pause Letter Signatory** | Musk among 33,000+ signatories | Many prominent researchers also signed; Musk is one of thousands |
| **Perception** | Some media portray FLI as "Musk-aligned" | FLI maintains editorial and programmatic independence; Buterin donation is now larger |
| **Conflict of Interest Concerns** | Musk's xAI competes with OpenAI; pause letter benefits competitors | FLI points to diverse signatory list including Bengio, Russell, Hinton |
### Cryptocurrency Donation Transparency
| Issue | Context |
|-------|---------|
| **Late Disclosure** | Buterin's \$665.8M donation (2021) was not publicly acknowledged as "largest donor by far" on FLI's website until May 2023 |
| **Asset Transfers** | Between December 11-30, 2022, FLI transferred \$368M to three entities governed by the same four people (Tegmark, Chita-Tegmark, Aguirre, Tallinn) |
| **Cryptocurrency Volatility** | Donation value fluctuated significantly; actual liquid value unclear |
### Connection to FTX/Effective Altruism
FLI operates within the broader effective altruism ecosystem, which was significantly affected by the FTX collapse in November 2022. While FLI was not directly funded by FTX or the FTX Future Fund to the same extent as other EA organizations, the association has drawn scrutiny. FLI has not received clawback demands, but the broader EA funding crisis has affected the landscape in which FLI operates.
## Comparison with Other Organizations
| Aspect | FLI | MIRI | CAIS | Coefficient Giving | CSER |
|--------|-----|------|------|-------------------|------|
| **Primary Focus** | Advocacy + Grants + Policy | Technical AI safety research | Research + Statement of Concern | Grantmaking (broad) | Academic existential risk research |
| **Public Profile** | Very High | Low-Medium | Medium | Medium | Medium |
| **Media Strategy** | Very Active (viral videos, open letters) | Minimal | Selective (single statement) | Moderate | Academic publications |
| **Policy Engagement** | Very High (EU, UN, US) | Minimal | Limited | Moderate (via grantees) | Moderate |
| **Grant Distribution** | \$25M+ | N/A (recipient) | N/A (new org) | Billions | N/A |
| **Funding Model** | Major donors + endowment | Donations | Donations | Good Ventures | University + grants |
| **Geographic Focus** | Global | US | US | Global | UK |
| **Founding Year** | 2014 | 2000 | 2022 | 2014 | 2012 |
| **Founder Connection** | Tallinn (board) | Tallinn (funded) | Hendrycks (founder); Hinton, Bengio (statement signatories) | Moskovitz | Tallinn (co-founder) |
### Positioning in the AI Safety Ecosystem
FLI occupies a distinct niche: high-profile public advocacy combined with grantmaking and policy engagement. While MIRI focuses on technical research and Coefficient Giving on behind-the-scenes grantmaking, FLI prioritizes visibility and discourse-shaping. This creates both advantages (media influence, policy access) and disadvantages (controversy, perception of sensationalism).
<Mermaid chart={`
flowchart LR
subgraph Technical["Technical Research"]
MIRI[MIRI]
ARC[ARC]
Redwood[Redwood]
Anthropic[Anthropic<br/>Alignment]
end
subgraph Academic["Academic Research"]
CSER[CSER]
FHI[FHI]
GovAI[GovAI]
end
subgraph Advocacy["Advocacy & Policy"]
FLI[FLI]
CAIS[CAIS]
end
subgraph Funding["Funding"]
OP[Coefficient<br/>Giving]
SFF[SFF]
end
FLI --> |grants| MIRI
FLI --> |grants| FHI
SFF --> |grants| FLI
OP --> |grants| FLI
OP --> |grants| MIRI
OP --> |grants| CSER
Tallinn[Jaan Tallinn] --> |founded| FLI
Tallinn --> |co-founded| CSER
Tallinn --> |founded| SFF
Tallinn --> |funded| MIRI
style FLI fill:#ffffcc
style Advocacy fill:#ffffcc
`} />
## Strengths and Limitations
### Strengths
| Strength | Evidence | Impact |
|----------|----------|--------|
| **Public Visibility** | Pause letter: 33,000+ signatures; Slaughterbots: 2M+ views; Asilomar Principles: 5,700+ signatories | Shaped public discourse on AI risk; made "AI safety" mainstream term |
| **Convening Power** | Puerto Rico 2015, Asilomar 2017 brought together top AI researchers, industry leaders, philosophers | Created field of AI alignment; produced influential governance frameworks |
| **Policy Access** | EU AI Act engagement; UN CCW participation; US Congressional testimony | Foundation models included in AI Act; autonomous weapons on international agenda |
| **Financial Resources** | \$665.8M Buterin donation; \$25M+ in grants distributed | Long-term sustainability; significant grantmaking capacity |
| **Communication** | Viral videos, open letters, effective media strategy | Public awareness of AI risk dramatically increased |
| **Network Effects** | Tallinn connections to CSER, SFF; overlap with EA/rationalist communities | Influence across multiple organizations |
| **First-Mover Advantage** | Founded 2014; first AI safety grants program 2015 | Established credibility before AI became mainstream concern |
### Limitations
| Limitation | Context | Consequence |
|------------|---------|-------------|
| **Controversy** | Pause letter criticism; TESCREALism accusations; near-term vs. long-term debate | Alienated some AI ethics researchers; credibility questioned in some circles |
| **Perception Issues** | Musk association; tech billionaire funding; late Buterin disclosure | Some view FLI as serving elite interests |
| **Research Capacity** | More advocacy than original research; relies on grantees | Dependent on others for technical work |
| **Governance Concentration** | Four individuals (Tegmark, Chita-Tegmark, Aguirre, Tallinn) control multiple related entities | Lack of external board diversity |
| **Messaging Criticism** | "Sensationalist" accusations; "dystopian sci-fi" framing | May undermine credibility with skeptics |
| **Narrow Community** | Closely tied to EA/rationalist/TESCREAL networks | Limited engagement with broader civil society |
| **Effectiveness Unclear** | Pause letter did not achieve pause; labs continued scaling | High-profile campaigns may not translate to policy change |
## Timeline
| Year | Event |
|------|-------|
| **March 2014** | FLI founded by Tegmark, Tallinn, Aguirre, Krakovna, Chita-Tegmark |
| **May 24, 2014** | Official launch at MIT; Alan Alda moderates panel |
| **January 2-5, 2015** | Puerto Rico Conference: "The Future of AI: Opportunities and Challenges" |
| **January 2015** | Research Priorities Open Letter; Musk announces \$10M donation |
| **July 1, 2015** | First AI safety grants announced: \$7M to 37 projects |
| **October 2016** | AI Safety Research profiles published |
| **January 5-8, 2017** | Asilomar Conference; 23 AI Principles developed |
| **January 30, 2017** | Asilomar AI Principles published |
| **November 13, 2017** | Slaughterbots video released; 2M+ views |
| **2018** | FLI drafts letter calling for laws against lethal autonomous weapons |
| **2021** | Vitalik Buterin donates \$665.8M in cryptocurrency |
| **July 2021** | \$25M grant program announced (Buterin funding) |
| **November 30, 2021** | Slaughterbots sequel "if human: kill()" released |
| **November 2022** | FLI rejects Nya Dagbladet Foundation grant; FTX collapse affects EA ecosystem |
| **December 2022** | \$368M transferred to three related entities |
| **March 22, 2023** | "Pause Giant AI Experiments" open letter published |
| **May 2023** | Buterin acknowledged as "largest donor by far" on website |
| **2024** | PhD Fellowship in US-China AI Governance launched |
| **November 2024** | Max Tegmark at Web Summit (Lisbon) |
| **January 2025** | Emilia Javorsky at Vienna Autonomous Weapons Conference |
| **March 2025** | Anthony Aguirre publishes *Keep The Future Human* |
## Sources and Citations
### Primary Sources
- [FLI Official Website](https://futureoflife.org/)
- [FLI About Page](https://futureoflife.org/about-us/)
- [FLI Our People](https://futureoflife.org/about-us/our-people/)
- [FLI Our History](https://futureoflife.org/about-us/our-history/)
- [FLI Finances](https://futureoflife.org/about-us/finances/)
### Grant Programs
- [2015 AI Safety Grant Program](https://futureoflife.org/grant-program/2015-grant-program/)
- [2023 Grants](https://futureoflife.org/grant-program/2023-grants/)
- [Coefficient Giving: FLI AI Risk Reduction Grant (2015)](https://www.openphilanthropy.org/grants/future-of-life-institute-artificial-intelligence-risk-reduction/)
### Conferences and Principles
- [Puerto Rico AI Safety Conference](https://futureoflife.org/event/ai-safety-conference-in-puerto-rico/)
- [Beneficial AI 2017 (Asilomar)](https://futureoflife.org/event/bai-2017/)
- [Asilomar AI Principles](https://futureoflife.org/open-letter/ai-principles/)
- [Asilomar Conference on Beneficial AI - Wikipedia](https://en.wikipedia.org/wiki/Asilomar_Conference_on_Beneficial_AI)
### Pause Letter
- [Pause Giant AI Experiments: An Open Letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/)
- [Pause Giant AI Experiments - Wikipedia](https://en.wikipedia.org/wiki/Pause_Giant_AI_Experiments:_An_Open_Letter)
- [FLI FAQs about the Pause Letter](https://futureoflife.org/ai/faqs-about-flis-open-letter-calling-for-a-pause-on-giant-ai-experiments/)
### Autonomous Weapons
- [Slaughterbots - Wikipedia](https://en.wikipedia.org/wiki/Slaughterbots)
- [FLI Educating about Autonomous Weapons](https://futureoflife.org/project/autonomous-weapons-systems/)
- [autonomousweapons.org](https://autonomousweapons.org/)
### Leadership
- [Anthony Aguirre - Wikipedia](https://en.wikipedia.org/wiki/Anthony_Aguirre)
- [Anthony Aguirre - FLI Profile](https://futureoflife.org/person/anthony-aguirre/)
- [AXRP Episode 38.7: Anthony Aguirre on FLI](https://axrp.net/episode/2025/02/09/episode-38_7-anthony-aguirre-future-of-life-institute.html)
### Media and Analysis
- [Future of Life Institute - Wikipedia](https://en.wikipedia.org/wiki/Future_of_Life_Institute)
- [FLI - EA Forum Topic](https://forum.effectivealtruism.org/topics/future-of-life-institute)
- [FLI - InfluenceWatch](https://www.influencewatch.org/non-profit/future-of-life-institute-fli/)
- [IEEE Spectrum: AI Pause Letter Stokes Fear and Controversy](https://spectrum.ieee.org/ai-pause-letter-stokes-fear)
- [LessWrong: Elon Musk Donates \$10M to FLI](https://www.lesswrong.com/posts/FuCZdbQ3h6782bnY6/elon-musk-donates-usd10m-to-the-future-of-life-institute-to)
### Financial
- [FLI - ProPublica Nonprofit Explorer](https://projects.propublica.org/nonprofits/organizations/471052538)
- [FLI - GuideStar Profile](https://www.guidestar.org/profile/47-1052538)
- [Philanthropy News Digest: FLI Received \$665M in Crypto](https://philanthropynewsdigest.org/news/future-of-life-institute-received-665-million-in-crypto)
### Criticism
- [FLI Statement on Controversial Rejected Grant Proposal](https://futureoflife.org/recent-news/statement-on-a-controversial-rejected-grant-proposal/)
## External Links
- [FLI Website](https://futureoflife.org/)
- [FLI LinkedIn](https://www.linkedin.com/company/future-of-life-institute)
- [Asilomar AI Principles](https://futureoflife.org/open-letter/ai-principles/)
- [Pause Giant AI Experiments Letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/)
- [Research Grants](https://futureoflife.org/our-work/grantmaking-work/)
- [Policy and Research](https://futureoflife.org/our-work/policy-and-research/)
- [Autonomous Weapons](https://autonomousweapons.org/)
- [Autonomous Weapons Watch](https://autonomousweaponswatch.org/)