Longterm Wiki

Future of Humanity Institute

fhi (E140)
Path: /knowledge-base/organizations/fhi/
Page Metadata
{
  "id": "fhi",
  "numericId": null,
  "path": "/knowledge-base/organizations/fhi/",
  "filePath": "knowledge-base/organizations/fhi.mdx",
  "title": "Future of Humanity Institute (FHI)",
  "quality": 51,
  "importance": 34,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-01-29",
  "llmSummary": "The Future of Humanity Institute (2005-2024) was a pioneering Oxford research center that founded existential risk studies and AI alignment research, growing from 3 to ~50 researchers and receiving $10M+ in funding before closing due to administrative conflicts. FHI produced seminal works (Superintelligence, The Precipice), trained leaders now at Anthropic/DeepMind/GovAI, and advised UN/UK government, demonstrating both transformative intellectual impact and the challenges of housing speculative research in traditional academia.",
  "structuredSummary": null,
  "description": "The Future of Humanity Institute was a pioneering interdisciplinary research center at Oxford University (2005-2024) that founded the fields of existential risk studies and AI alignment research. Under Nick Bostrom's direction, FHI produced seminal works including Superintelligence and The Precipice, trained a generation of researchers now leading organizations like GovAI, Anthropic, and DeepMind safety teams, and advised the UN and UK government on catastrophic risks before its closure in April 2024 due to administrative conflicts with Oxford's Faculty of Philosophy.",
  "ratings": {
    "novelty": 3.2,
    "rigor": 5.8,
    "actionability": 2.1,
    "completeness": 7.3
  },
  "category": "organizations",
  "subcategory": "safety-orgs",
  "clusters": [
    "community",
    "ai-safety",
    "governance"
  ],
  "metrics": {
    "wordCount": 4188,
    "tableCount": 32,
    "diagramCount": 2,
    "internalLinks": 11,
    "externalLinks": 24,
    "footnoteCount": 0,
    "bulletRatio": 0.05,
    "sectionCount": 57,
    "hasOverview": true,
    "structuralScore": 15
  },
  "suggestedQuality": 100,
  "updateFrequency": 21,
  "evergreen": true,
  "wordCount": 4188,
  "unconvertedLinks": [
    {
      "text": "fhi.ox.ac.uk",
      "url": "https://www.fhi.ox.ac.uk/",
      "resourceId": "1593095c92d34ed8",
      "resourceTitle": "**Future of Humanity Institute**"
    },
    {
      "text": "Nick Bostrom",
      "url": "https://nickbostrom.com/",
      "resourceId": "9cf1412a293bfdbe",
      "resourceTitle": "Theoretical work"
    },
    {
      "text": "Future of Humanity Institute Website",
      "url": "https://www.fhi.ox.ac.uk/",
      "resourceId": "1593095c92d34ed8",
      "resourceTitle": "**Future of Humanity Institute**"
    },
    {
      "text": "Nick Bostrom's Homepage",
      "url": "https://nickbostrom.com/",
      "resourceId": "9cf1412a293bfdbe",
      "resourceTitle": "Theoretical work"
    },
    {
      "text": "Superintelligence: Paths, Dangers, Strategies - Wikipedia",
      "url": "https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies",
      "resourceId": "0151481d5dc82963",
      "resourceTitle": "Superintelligence"
    }
  ],
  "unconvertedLinkCount": 5,
  "convertedLinkCount": 0,
  "backlinkCount": 1,
  "redundancy": {
    "maxSimilarity": 15,
    "similarPages": [
      {
        "id": "cser",
        "title": "CSER (Centre for the Study of Existential Risk)",
        "path": "/knowledge-base/organizations/cser/",
        "similarity": 15
      },
      {
        "id": "fli",
        "title": "Future of Life Institute (FLI)",
        "path": "/knowledge-base/organizations/fli/",
        "similarity": 14
      },
      {
        "id": "nick-bostrom",
        "title": "Nick Bostrom",
        "path": "/knowledge-base/people/nick-bostrom/",
        "similarity": 14
      },
      {
        "id": "80000-hours",
        "title": "80,000 Hours",
        "path": "/knowledge-base/organizations/80000-hours/",
        "similarity": 13
      },
      {
        "id": "centre-for-long-term-resilience",
        "title": "Centre for Long-Term Resilience",
        "path": "/knowledge-base/organizations/centre-for-long-term-resilience/",
        "similarity": 13
      }
    ]
  }
}
Entity Data
{
  "id": "fhi",
  "type": "organization",
  "title": "Future of Humanity Institute",
  "description": "Oxford University research center focused on existential risks, founded by Nick Bostrom. Closed in 2024.",
  "tags": [
    "research-org",
    "existential-risk",
    "oxford"
  ],
  "relatedEntries": [
    {
      "id": "nick-bostrom",
      "type": "researcher"
    },
    {
      "id": "existential-risk",
      "type": "concept"
    }
  ],
  "sources": [],
  "lastUpdated": "2025-12",
  "website": "https://www.fhi.ox.ac.uk",
  "customFields": []
}
Canonical Facts (0)

No facts for this entity

External Links
{
  "wikidata": "https://www.wikidata.org/wiki/Q5510826"
}
Backlinks (1)
| id | title | type | relationship |
|----|-------|------|--------------|
| miri-era | The MIRI Era | | historical |
Frontmatter
{
  "title": "Future of Humanity Institute (FHI)",
  "description": "The Future of Humanity Institute was a pioneering interdisciplinary research center at Oxford University (2005-2024) that founded the fields of existential risk studies and AI alignment research. Under Nick Bostrom's direction, FHI produced seminal works including Superintelligence and The Precipice, trained a generation of researchers now leading organizations like GovAI, Anthropic, and DeepMind safety teams, and advised the UN and UK government on catastrophic risks before its closure in April 2024 due to administrative conflicts with Oxford's Faculty of Philosophy.",
  "sidebar": {
    "order": 5
  },
  "quality": 51,
  "llmSummary": "The Future of Humanity Institute (2005-2024) was a pioneering Oxford research center that founded existential risk studies and AI alignment research, growing from 3 to ~50 researchers and receiving $10M+ in funding before closing due to administrative conflicts. FHI produced seminal works (Superintelligence, The Precipice), trained leaders now at Anthropic/DeepMind/GovAI, and advised UN/UK government, demonstrating both transformative intellectual impact and the challenges of housing speculative research in traditional academia.",
  "lastEdited": "2026-01-29",
  "importance": 34.5,
  "update_frequency": 21,
  "ratings": {
    "novelty": 3.2,
    "rigor": 5.8,
    "actionability": 2.1,
    "completeness": 7.3
  },
  "clusters": [
    "community",
    "ai-safety",
    "governance"
  ],
  "subcategory": "safety-orgs",
  "entityType": "organization"
}
Raw MDX Source
---
title: Future of Humanity Institute (FHI)
description: The Future of Humanity Institute was a pioneering interdisciplinary research center at Oxford University (2005-2024) that founded the fields of existential risk studies and AI alignment research. Under Nick Bostrom's direction, FHI produced seminal works including Superintelligence and The Precipice, trained a generation of researchers now leading organizations like GovAI, Anthropic, and DeepMind safety teams, and advised the UN and UK government on catastrophic risks before its closure in April 2024 due to administrative conflicts with Oxford's Faculty of Philosophy.
sidebar:
  order: 5
quality: 51
llmSummary: The Future of Humanity Institute (2005-2024) was a pioneering Oxford research center that founded existential risk studies and AI alignment research, growing from 3 to ~50 researchers and receiving $10M+ in funding before closing due to administrative conflicts. FHI produced seminal works (Superintelligence, The Precipice), trained leaders now at Anthropic/DeepMind/GovAI, and advised UN/UK government, demonstrating both transformative intellectual impact and the challenges of housing speculative research in traditional academia.
lastEdited: "2026-01-29"
importance: 34.5
update_frequency: 21
ratings:
  novelty: 3.2
  rigor: 5.8
  actionability: 2.1
  completeness: 7.3
clusters:
  - community
  - ai-safety
  - governance
subcategory: safety-orgs
entityType: organization
---
import {Mermaid, DataExternalLinks, EntityLink} from '@components/wiki';

<DataExternalLinks pageId="fhi" />

## Quick Assessment

| Dimension | Assessment | Evidence |
|-----------|------------|----------|
| **Focus** | Existential Risk Research | AI safety, global catastrophic risks, macrostrategy, human enhancement |
| **Status** | Closed (April 16, 2024) | Faculty of Philosophy ended contracts; staff dispersed |
| **Peak Size** | ≈50 researchers | Grew from 3 in 2005 to peak around 2018-2020 |
| **Duration** | 19 years (2005-2024) | Founded as 3-year pilot, became permanent institution |
| **Total Funding** | \$10M+ from <EntityLink id="E521">Coefficient Giving</EntityLink> | Plus £1M from <EntityLink id="E116">Elon Musk</EntityLink>, ERC grants, Leverhulme Trust |
| **Key Publications** | Superintelligence (2014), The Precipice (2020) | Both became international bestsellers |
| **Policy Reach** | UN, UK Government, EU | Advised UN Secretary General, quoted by UK PM at UN |
| **Spin-offs** | GovAI; influenced CSER, GPI | Multiple organizations founded or shaped by FHI alumni |

## Organization Details

| Attribute | Details |
|-----------|---------|
| **Full Name** | Future of Humanity Institute |
| **Type** | University Research Institute |
| **Founded** | November 2005 |
| **Closed** | April 16, 2024 |
| **Location** | University of Oxford, Faculty of Philosophy |
| **Institutional Home** | Oxford Martin School (initially James Martin 21st Century School) |
| **Founder & Director** | <EntityLink id="E215">Nick Bostrom</EntityLink> |
| **Peak Staff** | ≈50 researchers |
| **Website** | [fhi.ox.ac.uk](https://www.fhi.ox.ac.uk/) (archived) |
| **Final Report** | [FHI Final Report (Sandberg, 2024)](https://www.futureofhumanityinstitute.org/) |
| **Major Funders** | <EntityLink id="E521">Coefficient Giving</EntityLink> (\$10M+), Elon Musk (£1M), ERC, Leverhulme Trust |

## Overview

The [Future of Humanity Institute](https://en.wikipedia.org/wiki/Future_of_Humanity_Institute) (FHI) was a multidisciplinary research center at the University of Oxford that fundamentally shaped how humanity thinks about long-term risks and the future of civilization. Founded by philosopher [Nick Bostrom](https://nickbostrom.com/) in November 2005 as part of the Oxford Martin School (then the James Martin 21st Century School), FHI brought together researchers from philosophy, computer science, mathematics, and economics to tackle questions that most of academia considered too speculative or far-fetched to study rigorously.

During its 19-year existence, FHI achieved an extraordinary record of intellectual impact relative to its modest size. The institute was involved in the germination of a wide range of ideas that have since become mainstream concerns: existential risk, effective altruism, longtermism, <EntityLink id="E439">AI alignment</EntityLink>, <EntityLink id="E608">AI governance</EntityLink>, global catastrophic risk, information hazards, the unilateralist's curse, and moral uncertainty. Starting with just three researchers in 2005, FHI grew to approximately fifty at its peak before administrative conflicts with Oxford's Faculty of Philosophy led to a hiring freeze in 2020 and ultimate closure in April 2024.

FHI's influence extends far beyond its publications. The institute trained a generation of researchers who now hold leadership positions at <EntityLink id="E22">Anthropic</EntityLink>, DeepMind, <EntityLink id="E218">OpenAI</EntityLink>, the <EntityLink id="E153">Centre for the Governance of AI</EntityLink> (GovAI), and numerous other organizations. <EntityLink id="E355">Toby Ord</EntityLink>'s _The Precipice_ was quoted by UK Prime Minister Boris Johnson in his 2021 UN General Assembly address, and FHI researchers advised the UN Secretary General's Office on existential risk and future generations. The institute's closure represents the end of an era, but its intellectual legacy continues through its alumni, spin-off organizations, and the fields it created.

## Historical Evolution

<Mermaid chart={`
flowchart TD
    subgraph Foundation["Foundation Period (2005-2008)"]
        F1[November 2005: FHI Founded]
        F2[3 researchers initially]
        F3[Part of Oxford Martin School]
    end

    subgraph Growth["Growth Period (2008-2014)"]
        G1[GCR Conference 2008-2010]
        G2[22 journal articles]
        G3[Global Catastrophic Risks book 2008]
        G4[Superintelligence published 2014]
    end

    subgraph Peak["Peak Period (2014-2020)"]
        P1[~50 researchers at peak]
        P2[GovAI team formed 2018]
        P3[The Precipice published 2020]
        P4[£13.3M Coefficient Giving grant 2018]
    end

    subgraph Decline["Decline Period (2020-2024)"]
        D1[2020: Faculty hiring freeze]
        D2[2021: GovAI spins out]
        D3[2023: Contracts not renewed]
        D4[April 16, 2024: FHI closes]
    end

    Foundation --> Growth
    Growth --> Peak
    Peak --> Decline

    style Foundation fill:#e6f3ff
    style Growth fill:#ccffcc
    style Peak fill:#ffffcc
    style Decline fill:#ffcccc
`} />

### Founding Era (2005-2008)

Nick Bostrom established FHI in November 2005 after recognizing that questions about humanity's long-term future and existential risks were being systematically neglected by mainstream academia. The institute was initially funded as a three-year pilot project but quickly demonstrated its value through a series of influential publications and conferences.

| Milestone | Date | Significance |
|-----------|------|--------------|
| FHI Founded | November 2005 | First academic institute dedicated to existential risk |
| Initial Team | 2005 | 3 researchers: Bostrom and two initial hires |
| Oxford Martin School Integration | 2005 | Provided institutional legitimacy and infrastructure |
| _Global Catastrophic Risks_ (book) | 2008 | First comprehensive academic treatment of GCR |

In its early years, FHI focused on establishing existential risk as a legitimate field of academic inquiry. Bostrom's 2002 paper "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards" had laid the conceptual groundwork, defining an existential risk as "one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." FHI's task was to build an institutional home for this research.

### Growth Period (2008-2014)

Between 2008 and 2010, FHI hosted the Global Catastrophic Risks conference, wrote 22 academic journal articles, and published 34 chapters in academic volumes. The GCR conference was a pivotal moment in building an academic community around reducing risks to humanity's future.

| Achievement | Year | Impact |
|-------------|------|--------|
| GCR Conference Series | 2008-2010 | Built academic community around catastrophic risk |
| 22 Journal Articles | 2008-2010 | Established academic legitimacy |
| 34 Book Chapters | 2008-2010 | Spread ideas across disciplines |
| Bostrom begins _Superintelligence_ | 2009 | Originally one chapter on AI, grew into landmark book |
| _Superintelligence_ published | July 2014 | International bestseller; ignited AI safety movement |

This period saw FHI expand its research scope significantly. When Bostrom began work on a book about existential risk in 2009, the chapter on AI kept "getting out of hand": the risks from superintelligent systems proved far deeper than initially expected, and the chapter eventually grew into _Superintelligence: Paths, Dangers, Strategies_, published in 2014.

### Peak Period (2014-2020)

The publication of _Superintelligence_ marked the beginning of FHI's most influential period. The book became an international bestseller and is credited with convincing many technologists, including Elon Musk and Bill Gates, to take AI risks seriously. FHI grew to approximately 50 researchers and received its largest funding commitments.

| Development | Year | Details |
|-------------|------|---------|
| _Superintelligence_ impact | 2014+ | Read by Musk, Gates; influenced industry leaders |
| Elon Musk donation | 2015 | £1M for AI safety research |
| Governance of AI team formed | 2018 | Led by Allan Dafoe; later spins out |
| £13.3M Coefficient Giving grant | 2018 | Largest grant in Faculty of Philosophy history |
| _The Precipice_ published | March 2020 | First book-length treatment of existential risk for popular audience |

In 2018, FHI received a series of awards totaling up to £13.3 million over three years from <EntityLink id="E521">Coefficient Giving</EntityLink> (then Open Philanthropy), the largest donation in the history of the Faculty of Philosophy at Oxford. This funding supported work on risks from advanced AI, biosecurity and pandemic preparedness, and macrostrategy.

### Decline and Closure (2020-2024)

Despite its intellectual success, FHI's final years were marked by what Anders Sandberg called "gradual suffocation by Faculty bureaucracy." The flexible, fast-moving approach of the institute did not function well with the rigid rules and slow decision-making of the surrounding organization.

| Event | Date | Consequence |
|-------|------|-------------|
| Faculty hiring freeze begins | 2020 | No new researchers could be hired |
| Faculty fundraising freeze | 2020 | Couldn't pursue new grants |
| GovAI spins out | 2021 | Largest team leaves to escape restrictions |
| Bostrom email controversy | 2023 | 1996 email resurfaces; Oxford investigates |
| Faculty announces no contract renewals | Late 2023 | Remaining staff told contracts would end |
| FHI officially closes | April 16, 2024 | 19-year run ends |
| Bostrom resigns from Oxford | 2024 | Founds Macrostrategy Research Initiative |

The stated reason for closure, according to Bostrom, was that the university did not have the operational bandwidth to manage FHI. Sandberg explained the cultural mismatch: "I often described Oxford like a coral reef of calcified institutions built on top of each other... FHI was one such fish but grew too big for its hole. At that point it became either vulnerable to predators, or had to enlarge the hole, upsetting the neighbors."

## Research Programs and Contributions

### Existential Risk Studies

FHI essentially created the academic field of existential risk studies. Before FHI, the topic was considered too speculative for serious academic attention. FHI demonstrated that it was possible to do rigorous research on big-picture questions about humanity's future.

| Research Area | Key Contributions | Impact |
|---------------|-------------------|--------|
| **Definition and Taxonomy** | Bostrom's existential risk framework | Standard definitions used across field |
| **Probability Estimation** | Upper bounds on the background extinction rate (see the worked bound below the table) | Quantified risks for policy discussions |
| **Fermi Paradox** | "Dissolving the Fermi Paradox" (2018) | Showed we may be alone in the observable universe |
| **Vulnerable World Hypothesis** | Bostrom (2019) | Framework for technology governance |
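
The upper-bound work referenced in the table rests on a simple survival argument, which the published FHI analysis develops far more carefully (using the fossil record and correcting for observation selection effects). A stripped-down version, assuming a constant annual probability μ of extinction from natural causes, T years of observed survival, and a tolerance α for how surprising our track record is allowed to be:

```latex
(1-\mu)^{T} \ge \alpha
\;\Longrightarrow\;
\mu \le 1 - \alpha^{1/T} \approx \frac{\ln(1/\alpha)}{T},
\qquad
\text{e.g. } T \approx 2\times 10^{5}\ \text{yr},\ \alpha = 0.1
\;\Rightarrow\;
\mu \lesssim 1.2\times 10^{-5}\ \text{yr}^{-1}\ (\text{about 1 in }87{,}000).
```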

The 2018 paper "Dissolving the Fermi Paradox" by Sandberg, Drexler, and Ord was the first to rigorously estimate and propagate the uncertainties in each term of the Drake equation. They found a substantial probability that we are alone in our galaxy, and even in the entire observable universe.
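
The paper's core move was to replace point estimates for each Drake equation factor with probability distributions spanning the range of scientific uncertainty, then propagate that uncertainty through the product. A minimal Monte Carlo sketch of the approach, using illustrative log-uniform ranges rather than the distributions the authors actually fit:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = 1_000_000

def log_uniform(low_exp, high_exp, size):
    """Sample values uniformly in log10-space between 10**low_exp and 10**high_exp."""
    return 10.0 ** rng.uniform(low_exp, high_exp, size)

# Illustrative ranges (orders of magnitude) for the Drake equation factors.
R_star = log_uniform(0, 1, samples)    # star formation rate, stars per year
f_p    = log_uniform(-1, 0, samples)   # fraction of stars with planets
n_e    = log_uniform(-1, 0, samples)   # habitable planets per planetary system
f_l    = log_uniform(-30, 0, samples)  # fraction of habitable planets developing life
f_i    = log_uniform(-3, 0, samples)   # fraction of those developing intelligence
f_c    = log_uniform(-2, 0, samples)   # fraction producing detectable civilizations
L      = log_uniform(2, 8, samples)    # years a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L  # expected detectable civilizations in the galaxy

print(f"P(N < 1), i.e. chance we are effectively alone: {np.mean(N < 1):.2f}")
print(f"median N: {np.median(N):.3g}")
```

Because factors such as the probability of abiogenesis are uncertain over many orders of magnitude, a large share of the probability mass lands at N far below one even when the median or mean of N is high, which is the sense in which the paradox "dissolves."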

### AI Safety Research

FHI was one of the earliest academic institutions to take AI safety seriously, working in close collaboration with labs such as DeepMind, OpenAI, and CHAI (Center for Human-Compatible AI).

| Publication | Authors | Year | Contribution |
|-------------|---------|------|--------------|
| "Racing to the precipice" | Armstrong, Shulman, Bostrom | 2013 | Modeled AI development race dynamics |
| _Superintelligence_ | Bostrom | 2014 | Comprehensive analysis of superintelligence risks |
| "Safely interruptible agents" | Orseau, Armstrong | 2016 | Technical AI safety contribution |
| "Reframing Superintelligence" | Drexler | 2019 | Alternative "CAIS" model of AI development |
| "Truthful AI" | Evans et al. | 2021 | Framework for developing AI that doesn't lie |

Stuart Armstrong's collaboration with DeepMind on "Interruptibility" was mentioned in over 100 media articles and represented one of FHI's more practical AI safety contributions.

### AI Governance

FHI's Governance of AI (GovAI) team, led by Allan Dafoe, became the largest research group focused on the policy implications of advanced AI before spinning out as an independent organization in 2021.

| Publication | Authors | Year | Focus |
|-------------|---------|------|-------|
| "AI Governance: A Research Agenda" | Dafoe | 2018 | Foundational governance framework |
| "The Malicious Use of AI" | Brundage et al. | 2018 | Security implications of AI |
| "Strategic implications of openness" | Bostrom | 2017 | Open vs. closed AI development |

GovAI spun out of Oxford in 2021 specifically to "escape bureaucratic restrictions" and has since become an independent nonprofit. Allan Dafoe now heads DeepMind's Long-Term AI Strategy and Governance Team.

### Macrostrategy Research

FHI's Macrostrategy group examined how long-term outcomes for humanity are connected to present-day actions—a research program that influenced the effective altruism movement's focus on cause prioritization.

| Concept | Originator | Significance |
|---------|------------|--------------|
| **Information Hazards** | Bostrom | Framework for managing dangerous knowledge |
| **Unilateralist's Curse** | Bostrom, Douglas, Sandberg | Why actions any single actor can take unilaterally get taken too often (see the simulation below) |
| **Moral Uncertainty** | MacAskill, Ord | How to act under ethical uncertainty |
| **Crucial Considerations** | Bostrom | Factors that could reverse strategic priorities |
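
The curse has a simple probabilistic core: when each of several actors can unilaterally trigger an action based on its own noisy estimate of the action's value, the action happens whenever the single most optimistic actor judges it worthwhile, so mildly harmful actions are taken more often as the group grows. A toy simulation with illustrative numbers (not taken from the original Bostrom, Douglas, and Sandberg paper):

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = -1.0   # the action is actually mildly harmful
noise_sd = 2.0      # each actor's private estimate of its value is noisy
trials = 100_000

for n_actors in (1, 3, 10):
    estimates = rng.normal(true_value, noise_sd, size=(trials, n_actors))
    # The action is taken if ANY actor believes it is net positive and acts unilaterally.
    taken = (estimates > 0).any(axis=1)
    print(f"{n_actors:>2} actors: harmful action taken in {taken.mean():.0%} of trials")
```

With these numbers the harmful action is taken roughly 31% of the time with one actor but over 95% of the time with ten, which is why the original paper argues for a "principle of conformity": deferring to the group rather than acting on one's own optimistic estimate.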

### Biosecurity Research

FHI's Biosecurity group worked on making the world more secure against both natural and human-made catastrophic biological risks, anticipating many concerns that became urgent during the COVID-19 pandemic.

## Key Personnel

### Nick Bostrom (Founder and Director)

| Attribute | Details |
|-----------|---------|
| **Role** | Founder, Director (2005-2024) |
| **Background** | PhD Philosophy (LSE, 2000); MA Philosophy and Physics (Stockholm) |
| **Key Works** | _Anthropic Bias_ (2002), _Superintelligence_ (2014), _Deep Utopia_ (2024) |
| **Famous For** | Simulation argument, existential risk framework, superintelligence analysis |
| **Current Role** | Principal Researcher, Macrostrategy Research Initiative |

Bostrom is best known for his work in five areas: existential risk, the simulation argument, anthropics, impacts of future technology, and implications of consequentialism for global strategy. His simulation argument posits that one of three propositions must be true: civilizations almost never reach technological maturity, technologically mature civilizations are uninterested in running simulations, or we are almost certainly living in a simulation.
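In Bostrom's original 2003 formulation, the trilemma is anchored in a simple expression for the expected fraction of observers with human-type experiences who live in simulations, where f_P is the fraction of human-level civilizations that reach a posthuman stage, N̄ the average number of ancestor-simulations such a civilization runs, and H̄ the average number of individuals who live before a civilization reaches that stage:

```latex
f_{\mathrm{sim}}
  = \frac{f_P\,\bar{N}\,\bar{H}}{f_P\,\bar{N}\,\bar{H} + \bar{H}}
  = \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1}
```

Unless f_P or N̄ is very close to zero (the first two propositions), f_sim is close to one (the third).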

### Toby Ord (Senior Research Fellow)

| Attribute | Details |
|-----------|---------|
| **Role** | Senior Research Fellow |
| **Background** | Computer Science turned Philosophy |
| **Key Work** | _The Precipice: Existential Risk and the Future of Humanity_ (2020) |
| **Co-founded** | Giving What We Can (pledged to give most of earnings to charity) |
| **Current Role** | AI Governance, Oxford Martin School |
| **Policy Impact** | Advised UN Secretary General; quoted by UK PM at UN |

Ord's _The Precipice_ was the first book-length treatment of existential risk for a wide audience, influencing policy in the United Kingdom and at the United Nations. Multiple FHI staff were invited to present their work to the British Parliament.

### Anders Sandberg (Senior Research Fellow)

| Attribute | Details |
|-----------|---------|
| **Role** | Senior Research Fellow |
| **Background** | Neuroscience, Computational Neuroscience |
| **Research Focus** | Human enhancement, whole brain emulation, grand futures |
| **Key Papers** | "Dissolving the Fermi Paradox" (2018), FHI Final Report (2024) |
| **Upcoming Work** | _Grand Futures_ (mapping physical limits of advanced civilizations) |
| **Current Role** | Mimir Center for Long Term Futures Research |

Sandberg authored the [FHI Final Report](https://www.futureofhumanityinstitute.org/), which provides a detailed account of the institute's history and achievements. He is described as a futurist who explored the outer limits of what advanced civilizations might achieve.

### Stuart Armstrong (Research Fellow)

| Attribute | Details |
|-----------|---------|
| **Role** | Research Fellow |
| **Research Focus** | AI Safety, Value Alignment, Corrigibility |
| **Key Publications** | _Smarter Than Us_ (2014), "Safely Interruptible Agents" (2016) |
| **DeepMind Collaboration** | "Interruptibility" paper mentioned in 100+ media articles |
| **Current Role** | Co-founder, AI safety startup |

Armstrong's research centered on how to define AI goals and map humanity's partially-defined values into AI systems. His collaboration with DeepMind on interruptibility was one of FHI's most visible practical contributions.

### Eric Drexler (Senior Research Fellow)

| Attribute | Details |
|-----------|---------|
| **Role** | Senior Research Fellow |
| **Background** | Pioneer of nanotechnology; MIT PhD |
| **Key Works** | _Engines of Creation_ (1986), "Reframing Superintelligence" (2019) |
| **FHI Contribution** | CAIS (Comprehensive AI Services) framework |
| **Current Role** | Senior Research Fellow, RAND Europe |

Drexler, best known for pioneering the concept of molecular nanotechnology, brought a unique perspective to AI safety. His "Reframing Superintelligence" proposed an alternative to the "single superintelligent agent" model that dominated much AI safety thinking.

### Carl Shulman (Research Associate)

| Attribute | Details |
|-----------|---------|
| **Role** | Research Associate |
| **Research Focus** | AI forecasting, AI impacts, embryo selection |
| **Key Papers** | "Racing to the precipice" (2013), "Embryo selection for cognitive enhancement" (2014) |
| **Collaborations** | Multiple papers with Bostrom on AI development dynamics |

Shulman contributed to FHI's work on forecasting AI development timelines and understanding the strategic implications of advanced AI.

### Other Notable Researchers

| Researcher | Focus Area | Current Position |
|------------|------------|------------------|
| **Allan Dafoe** | AI Governance | DeepMind Long-Term AI Strategy |
| **Owain Evans** | AI value learning | Academic researcher |
| **Robin Hanson** | Economics, Prediction Markets | George Mason University |
| **Miles Brundage** | AI Policy | Policy researcher |

## Major Publications

### Books

| Title | Author(s) | Year | Impact |
|-------|-----------|------|--------|
| _Global Catastrophic Risks_ | Bostrom, Ćirković (eds.) | 2008 | First comprehensive academic treatment |
| _Human Enhancement_ | Savulescu, Bostrom (eds.) | 2009 | Bioethics of human augmentation |
| _Superintelligence: Paths, Dangers, Strategies_ | Bostrom | 2014 | International bestseller; sparked AI safety movement |
| _Smarter Than Us_ | Armstrong | 2014 | Accessible introduction to AI alignment |
| _The Precipice: Existential Risk and the Future of Humanity_ | Ord | 2020 | Policy-influential treatment of existential risk |
| _Moral Uncertainty_ | MacAskill, Bykvist, Ord | 2020 | Philosophical foundations of EA |
| _Deep Utopia: Life and Meaning in a Solved World_ | Bostrom | 2024 | Post-scarcity philosophy |

### Influential Papers

| Paper | Authors | Year | Citations | Key Contribution |
|-------|---------|------|-----------|------------------|
| "Existential Risks: Analyzing Human Extinction Scenarios" | Bostrom | 2002 | 1000+ | Founded the field |
| "The Superintelligent Will" | Bostrom | 2012 | High | Instrumental convergence thesis |
| "Thinking Inside the Box: Oracle AI" | Armstrong, Sandberg, Bostrom | 2012 | Moderate | AI containment strategies |
| "Racing to the Precipice" | Armstrong, Shulman, Bostrom | 2013 | Moderate | AI race dynamics |
| "Future Progress in AI: Expert Survey" | Muller, Bostrom | 2016 | High | First systematic AI timeline survey |
| "Dissolving the Fermi Paradox" | Sandberg, Drexler, Ord | 2018 | High | Rigorous Drake equation analysis |
| "The Vulnerable World Hypothesis" | Bostrom | 2019 | High | Technology governance framework |
| "Reframing Superintelligence: CAIS" | Drexler | 2019 | Moderate | Alternative AI development model |

## Funding and Resources

### Coefficient Giving Grants

| Grant | Amount | Year | Purpose |
|-------|--------|------|---------|
| General Support | \$1,995,425 | 2016 | Unrestricted reserves, junior staff |
| Research Scholars Programme | \$1,586,224 | Various | Future scholars hiring |
| Major Grant | \$12,250,810 | 2018 | AI, biosecurity, macrostrategy (£13.3M total) |
| DPhil Positions | \$139,263 | Various | Doctoral student support |
| Admin/Operations | \$100,000 | Various | Via Effective Ventures |

Total Coefficient Giving funding comfortably exceeded \$10 million over FHI's lifetime; the grants listed above sum to roughly \$16 million.

### Other Major Funders

| Funder | Amount | Year | Focus |
|--------|--------|------|-------|
| Elon Musk (via FLI) | £1,000,000 | 2015 | AI safety research |
| European Research Council | Various | Multiple | Research grants |
| Leverhulme Trust | Various | Multiple | Research grants |
| Survival and Flourishing Fund | ≈\$150,000 | Various | General support |

### Budget and Operations

FHI's annual revenues and expenses were approximately £1 million per year at operational scale, with the bulk of funding from academic grants that were "lumpy and hard to predict."

## Policy Impact

### United Nations

| Activity | Details |
|----------|---------|
| Secretary General Advisory | Toby Ord advised on existential risk and future generations |
| Human Development Report 2020 | FHI contributed analysis |
| Boris Johnson UN Speech 2021 | Quoted Toby Ord's _The Precipice_ |

### United Kingdom

| Activity | Details |
|----------|---------|
| Parliamentary Presentations | Multiple staff invited to present to Parliament |
| Future Proof Report 2021 | Co-authored UK resilience strategy report |
| Paymaster General Speech | Favorably mentioned FHI's resilience work |

### European Union

FHI researchers contributed to policy discussions on AI governance that informed the development of the EU AI Act.

## Spin-offs and Related Organizations

<Mermaid chart={`
flowchart TD
    FHI[Future of Humanity Institute<br/>2005-2024] --> GovAI[Centre for the<br/>Governance of AI<br/>2018/2021]
    FHI --> GPI[Global Priorities<br/>Institute<br/>2017]
    FHI --> GWWC[Giving What<br/>We Can<br/>2009]
    FHI --> Influence1[Influenced CSER<br/>Cambridge]
    FHI --> Influence2[Trained Anthropic/<br/>DeepMind Staff]
    FHI --> MSR[Macrostrategy Research<br/>Initiative<br/>2024]
    FHI --> Mimir[Mimir Center for<br/>Long Term Futures<br/>2024]

    GovAI --> |Allan Dafoe| DeepMind[DeepMind AI<br/>Governance Team]

    style FHI fill:#e6f3ff
    style GovAI fill:#ccffcc
    style GPI fill:#ccffcc
    style MSR fill:#ffffcc
    style Mimir fill:#ffffcc
`} />

### Direct Spin-offs

| Organization | Founded | Connection | Current Status |
|--------------|---------|------------|----------------|
| **Centre for the Governance of AI (GovAI)** | 2018 (spun out 2021) | FHI's largest team | Independent nonprofit |
| **Giving What We Can** | 2009 | Co-founded by Toby Ord | Part of Effective Ventures |
| **Macrostrategy Research Initiative** | 2024 | Founded by Bostrom post-FHI | Active nonprofit |
| **Mimir Center for Long Term Futures Research** | 2024 | Anders Sandberg's new home | New research center |

### Related Organizations at Oxford

| Organization | Relationship | Focus |
|--------------|--------------|-------|
| **Global Priorities Institute** | Shared staff, similar mission | EA-aligned research on prioritization |
| **Oxford Martin School** | FHI's institutional home | Hosts multiple future-focused centers |
| **Centre for Effective Altruism** | Shared office space historically | EA movement hub |

### Organizations Influenced by FHI

| Organization | Influence Type |
|--------------|----------------|
| **Centre for the Study of Existential Risk (Cambridge)** | FHI provided intellectual model |
| **Anthropic** | Multiple FHI alumni |
| **DeepMind Safety Team** | FHI collaborations, alumni |
| **Future of Life Institute** | Shared funders, mission alignment |
| **MIRI** | Intellectual exchange, some shared funders |

## Reasons for Closure

### Official Reasons

The University stated it did not have "operational bandwidth" to manage FHI. The institute cited "increasing administrative headwinds within the Faculty of Philosophy."

### Detailed Analysis

| Factor | Details | Impact |
|--------|---------|--------|
| **Hiring Freeze (2020)** | Faculty prohibited new hires | Lost ability to replace departing researchers |
| **Fundraising Freeze (2020)** | Faculty prohibited new grant applications | Couldn't pursue growth opportunities |
| **Cultural Mismatch** | Flexible startup style vs. rigid academia | Constant friction over procedures |
| **Administrative Burden** | Faculty bureaucracy increased over time | "Gradual suffocation" per Sandberg |
| **Contract Non-Renewal (2023)** | Faculty decided not to renew remaining contracts | Made closure inevitable |

### Anders Sandberg's Explanation

In the [FHI Final Report](https://www.futureofhumanityinstitute.org/), Sandberg explained:

> "While FHI had achieved significant academic and policy impact, the final years were affected by a gradual suffocation by Faculty bureaucracy. The flexible, fast-moving approach of the institute did not function well with the rigid rules and slow decision-making of the surrounding organization."

He used the metaphor of Oxford as a "coral reef of calcified institutions built on top of each other," with FHI as a fish that grew too big for its hole.

### Contextual Factors

| Factor | Timing | Potential Impact |
|--------|--------|------------------|
| Bostrom email controversy | 2023 | 1996 racist email resurfaced; Oxford investigated |
| EA/FTX crisis | 2022 | Broader scrutiny of EA-affiliated organizations |
| Post-pandemic environment | 2020+ | University administrative changes |

According to Bostrom, the university explicitly stated that the email controversy was not a factor in the closure decision.

## Legacy and Assessment

### Intellectual Legacy

| Contribution | Significance | Current Status |
|--------------|--------------|----------------|
| **Existential Risk Studies** | Created the academic field | Now studied at multiple universities |
| **AI Safety Research** | Pioneered academic study | Major focus at top AI labs |
| **AI Governance** | Founded the subfield | GovAI and others continue work |
| **Longtermism** | Developed philosophical framework | Central to effective altruism |
| **Information Hazards** | Created conceptual framework | Standard consideration in biosecurity |

### Institutional Legacy

FHI demonstrated that it was possible to do rigorous academic research on big-picture questions about humanity's future. Topics that once "struggled to eke out a precarious existence at the margins of a single philosophy department are now pursued by leading AI labs, government agencies, nonprofits, and specialized academic research centers."

### Alumni Impact

| Destination | Notable Alumni |
|-------------|----------------|
| **DeepMind** | Allan Dafoe (AI Governance), others |
| **AI Safety Startups** | Stuart Armstrong (co-founder) |
| **Oxford Martin School** | Toby Ord (AI Governance) |
| **Mimir Center** | Anders Sandberg |
| **Macrostrategy Research Initiative** | Nick Bostrom |
| **RAND Europe** | Eric Drexler |

### Assessment of Impact

| Dimension | Assessment | Evidence |
|-----------|------------|----------|
| **Academic Influence** | Transformative | Created multiple fields; thousands of citations |
| **Policy Influence** | Significant | UN, UK government engagement |
| **Field Building** | Exceptional | Trained generation of researchers |
| **Organizational Model** | Partially Failed | Administrative conflicts ended the institute |
| **Timing** | Good | Existed during critical period for AI safety awareness |

## Lessons and Implications

### For Research Institutes

| Lesson | Context | Implication |
|--------|---------|-------------|
| **Institutional Fit Matters** | FHI's flexibility clashed with Oxford bureaucracy | Consider organizational culture carefully |
| **Success Can Create Problems** | Growth strained administrative relationships | Plan for scaling challenges |
| **Spin-outs as Strategy** | GovAI escaped by becoming independent | Independence may be worth pursuing early |

### For the AI Safety Field

FHI's closure coincided with AI safety becoming mainstream. As Bostrom noted, "There is now a much broader support base for the kind of work FHI was set up to enable, so the institute essentially served its purpose." The question is whether the distributed ecosystem of organizations can match FHI's record of fundamental advances.

### Open Questions

- Did FHI's administrative troubles reflect fixable problems or inherent tensions between academic institutions and existential risk research?
- Will the distributed ecosystem of FHI successor organizations be as productive as the concentrated institute?
- What institutional models best support long-term, speculative research?

## Sources and Citations

### Primary Sources

- [Future of Humanity Institute Website](https://www.fhi.ox.ac.uk/) (archived)
- [FHI New Website](https://www.futureofhumanityinstitute.org/)
- [FHI Final Report (Sandberg, 2024)](https://static1.squarespace.com/static/660e95991cf0293c2463bcc8/t/661a3fc3cecceb2b8ffce80d/1712996303164/FHI+Final+Report.pdf)
- [Nick Bostrom's Homepage](https://nickbostrom.com/)

### Wikipedia and Reference Sources

- [Future of Humanity Institute - Wikipedia](https://en.wikipedia.org/wiki/Future_of_Humanity_Institute)
- [Nick Bostrom - Wikipedia](https://en.wikipedia.org/wiki/Nick_Bostrom)
- [Anders Sandberg - Wikipedia](https://en.wikipedia.org/wiki/Anders_Sandberg)
- [Superintelligence: Paths, Dangers, Strategies - Wikipedia](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies)

### News Coverage

- [Nature: Future of Humanity Institute shuts](https://www.nature.com/articles/d41586-024-01229-8)
- [Daily Nous: The End of the Future of Humanity Institute](https://dailynous.com/2024/04/18/end-future-of-humanity-institute/)
- [Oxford Student: Oxford shuts down Elon Musk-funded FHI](https://www.oxfordstudent.com/2024/04/20/oxford-shuts-down-elon-musk-funded-future-of-humanity-institute/)
- [Asterisk Magazine: Looking Back at the Future of Humanity Institute](https://asteriskmag.com/issues/08/looking-back-at-the-future-of-humanity-institute)

### Grant Records

- [Coefficient Giving: FHI General Support](https://www.openphilanthropy.org/grants/future-of-humanity-institute-general-support/)
- [Coefficient Giving: FHI Work on Global Catastrophic Risks](https://www.openphilanthropy.org/grants/future-of-humanity-institute-work-on-global-catastrophic-risks/)
- [Oxford University: £13.3m boost for FHI](https://www.ox.ac.uk/news/2018-10-10-%C2%A3133m-boost-oxford%E2%80%99s-future-humanity-institute)

### Community Discussion

- [EA Forum: Future of Humanity Institute](https://forum.effectivealtruism.org/topics/future-of-humanity-institute)
- [EA Forum: FHI Final Report Discussion](https://forum.effectivealtruism.org/posts/uK27pds7J36asqJPt/future-of-humanity-institute-2005-2024-final-report)
- [LessWrong: FHI has shut down](https://www.lesswrong.com/posts/tu3CH22nFLLKouMKw/fhi-future-of-humanity-institute-has-shut-down-2005-2024)