Future of Humanity Institute (FHI)

The Future of Humanity Institute (2005-2024) was a pioneering Oxford research center that founded existential risk studies and AI alignment research, growing from 3 to ~50 researchers and receiving $10M+ in funding before closing due to administrative conflicts. FHI produced seminal works (Superintelligence, The Precipice), trained leaders now at Anthropic/DeepMind/GovAI, and advised UN/UK government, demonstrating both transformative intellectual impact and the challenges of housing speculative research in traditional academia.

Type: Academic
Founded: 2005
Location: Oxford, UK
Related People: Nick Bostrom
Related Concepts: Existential Risk from AI

Quick Assessment

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Focus | Existential risk research | AI safety, global catastrophic risks, macrostrategy, human enhancement |
| Status | Closed (April 16, 2024) | Faculty of Philosophy ended contracts; staff dispersed |
| Peak Size | ≈50 researchers | Grew from 3 in 2005 to a peak around 2018-2020 |
| Duration | 19 years (2005-2024) | Founded as a 3-year pilot, became a permanent institution |
| Total Funding | $10M+ from Coefficient Giving | Plus £1M from Elon Musk, ERC grants, Leverhulme Trust |
| Key Publications | Superintelligence (2014), The Precipice (2020) | Both became international bestsellers |
| Policy Reach | UN, UK Government, EU | Advised UN Secretary General; quoted by UK PM at UN |
| Spin-offs | GovAI; influenced CSER, GPI | Multiple organizations founded by alumni |

Organization Details

| Attribute | Details |
| --- | --- |
| Full Name | Future of Humanity Institute |
| Type | University Research Institute |
| Founded | November 2005 |
| Closed | April 16, 2024 |
| Location | University of Oxford, Faculty of Philosophy |
| Institutional Home | Oxford Martin School (initially James Martin 21st Century School) |
| Founder & Director | Nick Bostrom |
| Peak Staff | ≈50 researchers |
| Website | fhi.ox.ac.uk (archived) |
| Final Report | FHI Final Report (Sandberg, 2024) |
| Major Funders | Coefficient Giving ($10M+), Elon Musk (£1M), ERC, Leverhulme Trust |

Overview

The Future of Humanity Institute (FHI) was a multidisciplinary research center at the University of Oxford that fundamentally shaped how humanity thinks about long-term risks and the future of civilization. Founded by philosopher Nick Bostrom in November 2005 as part of the Oxford Martin School (then the James Martin 21st Century School), FHI brought together researchers from philosophy, computer science, mathematics, and economics to tackle questions that most of academia considered too speculative or far-fetched to study rigorously.

During its 19-year existence, FHI achieved an extraordinary record of intellectual impact relative to its modest size. The institute was involved in the germination of a wide range of ideas that have since become mainstream concerns: existential risk, effective altruism, longtermism, AI alignment, AI governance, global catastrophic risk, information hazards, the unilateralist's curse, and moral uncertainty. Starting with just three researchers in 2005, FHI grew to approximately fifty at its peak before administrative conflicts with Oxford's Faculty of Philosophy led to a hiring freeze in 2020 and ultimate closure in April 2024.

FHI's influence extends far beyond its publications. The institute trained a generation of researchers who now hold leadership positions at Anthropic, DeepMind, OpenAI, the Centre for the Governance of AI (GovAI), and numerous other organizations. Toby Ord's The Precipice was quoted by UK Prime Minister Boris Johnson in his 2021 UN General Assembly address, and FHI researchers advised the UN Secretary General's Office on existential risk and future generations. The institute's closure represents the end of an era, but its intellectual legacy continues through its alumni, spin-off organizations, and the fields it created.

Historical Evolution

Founding Era (2005-2008)

Nick Bostrom established FHI in November 2005 after recognizing that questions about humanity's long-term future and existential risks were being systematically neglected by mainstream academia. The institute was initially funded as a three-year pilot project but quickly demonstrated its value through a series of influential publications and conferences.

| Milestone | Date | Significance |
| --- | --- | --- |
| FHI Founded | November 2005 | First academic institute dedicated to existential risk |
| Initial Team | 2005 | 3 researchers: Bostrom and two initial hires |
| Oxford Martin School Integration | 2005 | Provided institutional legitimacy and infrastructure |
| Global Catastrophic Risks (book) | 2008 | First comprehensive academic treatment of GCR |

In its early years, FHI focused on establishing existential risk as a legitimate field of academic inquiry. Bostrom's 2002 paper "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards" had laid the conceptual groundwork, defining existential risk as "one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." FHI's task was to build an institutional home for this research.

Growth Period (2008-2014)

Between 2008 and 2010, FHI hosted the Global Catastrophic Risks conference series and published 22 academic journal articles and 34 chapters in academic volumes. The GCR conferences were a pivotal moment in building an academic community around reducing risks to humanity's future.

| Achievement | Year | Impact |
| --- | --- | --- |
| GCR Conference Series | 2008-2010 | Built an academic community around catastrophic risk |
| 22 Journal Articles | 2008-2010 | Established academic legitimacy |
| 34 Book Chapters | 2008-2010 | Spread ideas across disciplines |
| Bostrom begins Superintelligence | 2009 | Began as a single chapter on AI; grew into a landmark book |
| Superintelligence published | July 2014 | International bestseller; ignited the AI safety movement |

This period saw FHI expand its research scope significantly. When Bostrom began work on a book about existential risk in 2009, he found the chapter on AI "getting out of hand": the risks from superintelligent systems ran much deeper than initially expected, and the chapter eventually grew into Superintelligence: Paths, Dangers, Strategies, published in 2014.

Peak Period (2014-2020)

The publication of Superintelligence marked the beginning of FHI's most influential period. The book became an international bestseller and is credited with convincing many technologists, including Elon Musk and Bill Gates, to take AI risks seriously. FHI grew to approximately 50 researchers and received its largest funding commitments.

| Development | Year | Details |
| --- | --- | --- |
| Superintelligence impact | 2014+ | Read by Musk, Gates; influenced industry leaders |
| Elon Musk donation | 2015 | £1M for AI safety research |
| Governance of AI team formed | 2018 | Led by Allan Dafoe; later spins out |
| £13.3M Coefficient Giving grant | 2018 | Largest grant in Faculty of Philosophy history |
| The Precipice published | March 2020 | First book-length treatment of existential risk for a popular audience |

In 2018, FHI received a series of awards totaling up to £13.3 million over three years from Coefficient Giving (then Open Philanthropy), the largest donation in the history of the Faculty of Philosophy at Oxford. This funding supported work on risks from advanced AI, biosecurity and pandemic preparedness, and macrostrategy.

Decline and Closure (2020-2024)

Despite its intellectual success, FHI's final years were marked by what Anders Sandberg called "gradual suffocation by Faculty bureaucracy." The flexible, fast-moving approach of the institute did not function well with the rigid rules and slow decision-making of the surrounding organization.

| Event | Date | Consequence |
| --- | --- | --- |
| Faculty hiring freeze begins | 2020 | No new researchers could be hired |
| Faculty fundraising freeze | 2020 | Couldn't pursue new grants |
| GovAI spins out | 2021 | Largest team leaves to escape restrictions |
| Bostrom email controversy | 2023 | 1996 email resurfaces; Oxford investigates |
| Faculty announces no contract renewals | Late 2023 | Remaining staff told contracts would end |
| FHI officially closes | April 16, 2024 | 19-year run ends |
| Bostrom resigns from Oxford | 2024 | Founds Macrostrategy Research Initiative |

The stated reason for closure, according to Bostrom, was that the university did not have the operational bandwidth to manage FHI. Sandberg explained the cultural mismatch: "I often described Oxford like a coral reef of calcified institutions built on top of each other... FHI was one such fish but grew too big for its hole. At that point it became either vulnerable to predators, or had to enlarge the hole, upsetting the neighbors."

Research Programs and Contributions

Existential Risk Studies

FHI essentially created the academic field of existential risk studies. Before FHI, the topic was considered too speculative for serious academic attention. FHI demonstrated that it was possible to do rigorous research on big-picture questions about humanity's future.

| Research Area | Key Contributions | Impact |
| --- | --- | --- |
| Definition and Taxonomy | Bostrom's existential risk framework | Standard definitions used across the field |
| Probability Estimation | Upper bounds on the background extinction rate (see the sketch below) | Quantified risks for policy discussions |
| Fermi Paradox | "Dissolving the Fermi Paradox" (2018) | Showed we may be alone in the observable universe |
| Vulnerable World Hypothesis | Bostrom (2019) | Framework for technology governance |
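
The "upper bounds on the background extinction rate" entry refers to survival-time reasoning of roughly the following form; this is a simplified sketch, not the exact published derivation. If the natural (non-anthropogenic) extinction risk were a constant annual rate $\mu$, the probability of Homo sapiens surviving its roughly 200,000-year history would be about $e^{-\mu T}$, so any $\mu$ large enough to make that survival very improbable can be ruled out:

$$
P(\text{survive } T \text{ years}) \approx e^{-\mu T} \ge \alpha
\quad\Longrightarrow\quad
\mu \le \frac{\ln(1/\alpha)}{T}.
$$

With $T \approx 2 \times 10^5$ years and a 10% threshold ($\alpha = 0.1$), this gives $\mu \lesssim 1.2 \times 10^{-5}$ per year, roughly 1 in 87,000, which is the kind of quantified ceiling the table above alludes to.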

The 2018 paper "Dissolving the Fermi Paradox" by Sandberg, Drexler, and Ord was the first to rigorously propagate the uncertainty in each term of the Drake equation rather than multiplying point estimates. The authors found a substantial probability that we are alone in our galaxy, and perhaps in the entire observable universe.
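
The paper's core move can be illustrated with a small Monte Carlo sketch. The snippet below is only illustrative: the log-uniform ranges are placeholder assumptions chosen to show the qualitative effect, not the calibrated distributions used by Sandberg, Drexler, and Ord.

```python
"""Illustrative uncertainty propagation through the Drake equation.
The parameter ranges below are hypothetical, not the paper's."""
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def log_uniform(low, high, size):
    """Sample uniformly in log10 space between low and high."""
    return 10 ** rng.uniform(np.log10(low), np.log10(high), size)

R_star = log_uniform(1, 100, n)      # star formation rate per year
f_p    = log_uniform(0.1, 1, n)      # fraction of stars with planets
n_e    = log_uniform(0.1, 1, n)      # habitable planets per planetary system
f_l    = log_uniform(1e-30, 1, n)    # fraction where life arises (enormous uncertainty)
f_i    = log_uniform(1e-3, 1, n)     # fraction that develop intelligence
f_c    = log_uniform(1e-2, 1, n)     # fraction that become detectable
L      = log_uniform(1e2, 1e10, n)   # years a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L  # detectable civilizations in the galaxy

print(f"mean N:   {N.mean():.3g}")
print(f"median N: {np.median(N):.3g}")
print(f"P(N < 1): {(N < 1).mean():.2f}")  # probability we are effectively alone
```

Even when the mean of N is large, the full distribution can place most of its probability mass on N being far below 1, which is the sense in which the paper "dissolves" the paradox.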

AI Safety Research

FHI was one of the earliest academic institutions to take AI safety seriously, working in close collaboration with labs such as DeepMind, OpenAI, and CHAI (Center for Human-Compatible AI).

| Publication | Authors | Year | Contribution |
| --- | --- | --- | --- |
| "Racing to the precipice" | Armstrong, Shulman, Bostrom | 2013 | Modeled AI development race dynamics |
| Superintelligence | Bostrom | 2014 | Comprehensive analysis of superintelligence risks |
| "Safely interruptible agents" | Orseau, Armstrong | 2016 | Technical AI safety contribution |
| "Reframing Superintelligence" | Drexler | 2019 | Alternative "CAIS" model of AI development |
| "Truthful AI" | Evans et al. | 2021 | Framework for developing AI that doesn't lie |

Stuart Armstrong's collaboration with DeepMind on "Interruptibility" was mentioned in over 100 media articles and represented one of FHI's more practical AI safety contributions.

AI Governance

FHI's Governance of AI (GovAI) team, led by Allan Dafoe, became the largest research group focused on the policy implications of advanced AI before spinning out as an independent organization in 2021.

| Publication | Authors | Year | Focus |
| --- | --- | --- | --- |
| "AI Governance: A Research Agenda" | Dafoe | 2018 | Foundational governance framework |
| "The Malicious Use of AI" | Brundage et al. | 2018 | Security implications of AI |
| "Strategic implications of openness" | Bostrom | 2017 | Open vs. closed AI development |

GovAI spun out of Oxford in 2021 specifically to "escape bureaucratic restrictions" and has since become an independent nonprofit. Allan Dafoe now heads DeepMind's Long-Term AI Strategy and Governance Team.

Macrostrategy Research

FHI's Macrostrategy group examined how long-term outcomes for humanity are connected to present-day actions—a research program that influenced the effective altruism movement's focus on cause prioritization.

| Concept | Originator | Significance |
| --- | --- | --- |
| Information Hazards | Bostrom | Framework for managing dangerous knowledge |
| Unilateralist's Curse | Bostrom | Why groups of independent actors are more likely to take risky unilateral actions |
| Moral Uncertainty | MacAskill, Ord | How to act under ethical uncertainty |
| Crucial Considerations | Bostrom | Factors that could reverse strategic priorities |

Biosecurity Research

FHI's Biosecurity group worked on making the world more secure against both natural and human-made catastrophic biological risks, anticipating many concerns that became urgent during the COVID-19 pandemic.

Key Personnel

Nick Bostrom (Founder and Director)

| Attribute | Details |
| --- | --- |
| Role | Founder, Director (2005-2024) |
| Background | PhD Philosophy (LSE, 2000); BA Physics, Philosophy (Stockholm) |
| Key Works | Anthropic Bias (2002), Superintelligence (2014), Deep Utopia (2024) |
| Famous For | Simulation argument, existential risk framework, superintelligence analysis |
| Current Role | Principal Researcher, Macrostrategy Research Initiative |

Bostrom is best known for his work in five areas: existential risk, the simulation argument, anthropics, the impacts of future technology, and the implications of consequentialism for global strategy. His simulation argument holds that at least one of three propositions must be true: almost no civilizations reach technological maturity, almost no technologically mature civilizations are interested in running ancestor-simulations, or we are almost certainly living in a simulation.
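
A compact way to see why the trilemma holds, roughly following the notation of the 2003 paper: let $f_p$ be the fraction of human-level civilizations that reach a posthuman stage, $\bar{N}$ the average number of ancestor-simulations such a civilization runs, and $\bar{H}$ the average number of individuals who live before a civilization reaches that stage. The expected fraction of human-type observers who are simulated is then

$$
f_{\text{sim}} = \frac{f_p \bar{N} \bar{H}}{f_p \bar{N} \bar{H} + \bar{H}} = \frac{f_p \bar{N}}{f_p \bar{N} + 1},
$$

which can only be far from 1 if $f_p \bar{N}$ is tiny, that is, if almost no civilizations reach maturity or mature civilizations run almost no ancestor-simulations. Those are the first two horns of the trilemma; otherwise $f_{\text{sim}} \approx 1$.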

Toby Ord (Senior Research Fellow)

| Attribute | Details |
| --- | --- |
| Role | Senior Research Fellow |
| Background | Computer Science turned Philosophy |
| Key Work | The Precipice: Existential Risk and the Future of Humanity (2020) |
| Co-founded | Giving What We Can (pledged to give most of earnings to charity) |
| Current Role | AI Governance, Oxford Martin School |
| Policy Impact | Advised UN Secretary General; quoted by UK PM at UN |

Ord's The Precipice was the first book-length treatment of existential risk for a wide audience, influencing policy in the United Kingdom and at the United Nations. Multiple FHI staff were invited to present their work to the British Parliament.

Anders Sandberg (Senior Research Fellow)

| Attribute | Details |
| --- | --- |
| Role | Senior Research Fellow |
| Background | Neuroscience, Computational Neuroscience |
| Research Focus | Human enhancement, whole brain emulation, grand futures |
| Key Papers | "Dissolving the Fermi Paradox" (2018), FHI Final Report (2024) |
| Upcoming Work | Grand Futures (mapping physical limits of advanced civilizations) |
| Current Role | Mimir Center for Long Term Futures Research |

Sandberg authored the FHI Final Report, which provides a detailed account of the institute's history and achievements. He is described as a futurist who explored the outer limits of what advanced civilizations might achieve.

Stuart Armstrong (Research Fellow)

| Attribute | Details |
| --- | --- |
| Role | Research Fellow |
| Research Focus | AI Safety, Value Alignment, Corrigibility |
| Key Publications | Smarter Than Us (2014), "Safely Interruptible Agents" (2016) |
| DeepMind Collaboration | "Interruptibility" paper mentioned in 100+ media articles |
| Current Role | Co-founder, AI safety startup |

Armstrong's research centered on how to define AI goals and map humanity's partially-defined values into AI systems. His collaboration with DeepMind on interruptibility was one of FHI's most visible practical contributions.

Eric Drexler (Senior Research Fellow)

| Attribute | Details |
| --- | --- |
| Role | Senior Research Fellow |
| Background | Pioneer of nanotechnology; MIT PhD |
| Key Works | Engines of Creation (1986), "Reframing Superintelligence" (2019) |
| FHI Contribution | CAIS (Comprehensive AI Services) framework |
| Current Role | Senior Research Fellow, RAND Europe |

Drexler, best known for pioneering the concept of molecular nanotechnology, brought a unique perspective to AI safety. His "Reframing Superintelligence" proposed an alternative to the "single superintelligent agent" model that dominated much AI safety thinking.

Carl Shulman (Research Associate)

| Attribute | Details |
| --- | --- |
| Role | Research Associate |
| Research Focus | AI forecasting, AI impacts, embryo selection |
| Key Papers | "Racing to the precipice" (2013), "Embryo selection for cognitive enhancement" (2014) |
| Collaborations | Multiple papers with Bostrom on AI development dynamics |

Shulman contributed to FHI's work on forecasting AI development timelines and understanding the strategic implications of advanced AI.

Other Notable Researchers

| Researcher | Focus Area | Current Position |
| --- | --- | --- |
| Allan Dafoe | AI Governance | DeepMind Long-Term AI Strategy |
| Owain Evans | AI value learning | Academic researcher |
| Robin Hanson | Economics, Prediction Markets | George Mason University |
| Miles Brundage | AI Policy | Policy researcher |

Major Publications

Books

| Title | Author(s) | Year | Impact |
| --- | --- | --- | --- |
| Global Catastrophic Risks | Bostrom, Ćirković (eds.) | 2008 | First comprehensive academic treatment |
| Human Enhancement | Savulescu, Bostrom (eds.) | 2009 | Bioethics of human augmentation |
| Superintelligence: Paths, Dangers, Strategies | Bostrom | 2014 | International bestseller; sparked the AI safety movement |
| Smarter Than Us | Armstrong | 2014 | Accessible introduction to AI alignment |
| The Precipice: Existential Risk and the Future of Humanity | Ord | 2020 | Policy-influential treatment of existential risk |
| Moral Uncertainty | MacAskill, Bykvist, Ord | 2020 | Philosophical foundations of EA |
| Deep Utopia: Life and Meaning in a Solved World | Bostrom | 2024 | Post-scarcity philosophy |

Influential Papers

| Paper | Authors | Year | Citations | Key Contribution |
| --- | --- | --- | --- | --- |
| "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards" | Bostrom | 2002 | 1000+ | Founded the field |
| "The Superintelligent Will" | Bostrom | 2012 | High | Instrumental convergence thesis |
| "Thinking Inside the Box: Oracle AI" | Armstrong, Sandberg, Bostrom | 2012 | Moderate | AI containment strategies |
| "Racing to the Precipice" | Armstrong, Shulman, Bostrom | 2013 | Moderate | AI race dynamics |
| "Future Progress in Artificial Intelligence: A Survey of Expert Opinion" | Müller, Bostrom | 2016 | High | First systematic AI timeline survey |
| "Dissolving the Fermi Paradox" | Sandberg, Drexler, Ord | 2018 | High | Rigorous Drake equation analysis |
| "The Vulnerable World Hypothesis" | Bostrom | 2019 | High | Technology governance framework |
| "Reframing Superintelligence: CAIS" | Drexler | 2019 | Moderate | Alternative AI development model |

Funding and Resources

Coefficient Giving Grants

| Grant | Amount | Year | Purpose |
| --- | --- | --- | --- |
| General Support | $1,995,425 | 2016 | Unrestricted reserves, junior staff |
| Research Scholars Programme | $1,586,224 | Various | Future scholars hiring |
| Major Grant | $12,250,810 | 2018 | AI, biosecurity, macrostrategy (£13.3M total) |
| DPhil Positions | $139,263 | Various | Doctoral student support |
| Admin/Operations | $100,000 | Various | Via Effective Ventures |

Total Coefficient Giving funding exceeded $10 million over FHI's lifetime.

Other Major Funders

| Funder | Amount | Year | Focus |
| --- | --- | --- | --- |
| Elon Musk (via FLI) | £1,000,000 | 2015 | AI safety research |
| European Research Council | Various | Multiple | Research grants |
| Leverhulme Trust | Various | Multiple | Research grants |
| Survival and Flourishing Fund | ≈$150,000 | Various | General support |

Budget and Operations

FHI's annual revenues and expenses were approximately £1 million per year at operational scale, with the bulk of funding from academic grants that were "lumpy and hard to predict."

Policy Impact

United Nations

| Activity | Details |
| --- | --- |
| Secretary General Advisory | Toby Ord advised on existential risk and future generations |
| Human Development Report 2020 | FHI contributed analysis |
| Boris Johnson UN Speech 2021 | Quoted Toby Ord's The Precipice |

United Kingdom

| Activity | Details |
| --- | --- |
| Parliamentary Presentations | Multiple staff invited to present to Parliament |
| Future Proof Report 2021 | Co-authored UK resilience strategy report |
| Paymaster General Speech | Favorably mentioned FHI's resilience work |

European Union

FHI researchers contributed to policy discussions on AI governance that informed the development of the EU AI Act.

Spin-offs and Related Organizations

Direct Spin-offs

| Organization | Founded | Connection | Current Status |
| --- | --- | --- | --- |
| Centre for the Governance of AI (GovAI) | 2018 (spun out 2021) | FHI's largest team | Independent nonprofit |
| Giving What We Can | 2009 | Co-founded by Toby Ord | Part of Effective Ventures |
| Macrostrategy Research Initiative | 2024 | Founded by Bostrom post-FHI | Active nonprofit |
| Mimir Center for Long Term Futures Research | 2024 | Anders Sandberg's new home | New research center |

Related Organizations at Oxford

| Organization | Relationship | Focus |
| --- | --- | --- |
| Global Priorities Institute | Shared staff, similar mission | EA-aligned research on prioritization |
| Oxford Martin School | FHI's institutional home | Hosts multiple future-focused centers |
| Centre for Effective Altruism | Shared office space historically | EA movement hub |

Organizations Influenced by FHI

| Organization | Influence Type |
| --- | --- |
| Centre for the Study of Existential Risk (Cambridge) | FHI provided intellectual model |
| Anthropic | Multiple FHI alumni |
| DeepMind Safety Team | FHI collaborations, alumni |
| Future of Life Institute | Shared funders, mission alignment |
| MIRI | Intellectual exchange, some shared funders |

Reasons for Closure

Official Reasons

The University stated it did not have "operational bandwidth" to manage FHI. The institute cited "increasing administrative headwinds within the Faculty of Philosophy."

Detailed Analysis

| Factor | Details | Impact |
| --- | --- | --- |
| Hiring Freeze (2020) | Faculty prohibited new hires | Lost ability to replace departing researchers |
| Fundraising Freeze (2020) | Faculty prohibited new grant applications | Couldn't pursue growth opportunities |
| Cultural Mismatch | Flexible startup style vs. rigid academia | Constant friction over procedures |
| Administrative Burden | Faculty bureaucracy increased over time | "Gradual suffocation" per Sandberg |
| Contract Non-Renewal (2023) | Faculty decided not to renew remaining contracts | Made closure inevitable |

Anders Sandberg's Explanation

In the FHI Final Report, Sandberg explained:

"While FHI had achieved significant academic and policy impact, the final years were affected by a gradual suffocation by Faculty bureaucracy. The flexible, fast-moving approach of the institute did not function well with the rigid rules and slow decision-making of the surrounding organization."

He used the metaphor of Oxford as a "coral reef of calcified institutions built on top of each other," with FHI as a fish that grew too big for its hole.

Contextual Factors

| Factor | Timing | Potential Impact |
| --- | --- | --- |
| Bostrom email controversy | 2023 | 1996 racist email resurfaced; Oxford investigated |
| EA/FTX crisis | 2022 | Broader scrutiny of EA-affiliated organizations |
| Post-pandemic environment | 2020+ | University administrative changes |

According to Bostrom, the university explicitly stated that the email controversy was not a factor in the closure decision.

Legacy and Assessment

Intellectual Legacy

| Contribution | Significance | Current Status |
| --- | --- | --- |
| Existential Risk Studies | Created the academic field | Now studied at multiple universities |
| AI Safety Research | Pioneered academic study | Major focus at top AI labs |
| AI Governance | Founded the subfield | GovAI and others continue the work |
| Longtermism | Developed philosophical framework | Central to effective altruism |
| Information Hazards | Created conceptual framework | Standard consideration in biosecurity |

Institutional Legacy

FHI demonstrated that it was possible to do rigorous academic research on big-picture questions about humanity's future. Topics that once "struggled to eke out a precarious existence at the margins of a single philosophy department are now pursued by leading AI labs, government agencies, nonprofits, and specialized academic research centers."

Alumni Impact

| Destination | Notable Alumni |
| --- | --- |
| DeepMind | Allan Dafoe (AI Governance), others |
| AI Safety Startups | Stuart Armstrong (co-founder) |
| Oxford Martin School | Toby Ord (AI Governance) |
| Mimir Center | Anders Sandberg |
| Macrostrategy Research Initiative | Nick Bostrom |
| RAND Europe | Eric Drexler |

Assessment of Impact

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Academic Influence | Transformative | Created multiple fields; thousands of citations |
| Policy Influence | Significant | UN, UK government engagement |
| Field Building | Exceptional | Trained a generation of researchers |
| Organizational Model | Partially Failed | Administrative conflicts ended the institute |
| Timing | Good | Existed during a critical period for AI safety awareness |

Lessons and Implications

For Research Institutes

| Lesson | Context | Implication |
| --- | --- | --- |
| Institutional Fit Matters | FHI's flexibility clashed with Oxford bureaucracy | Consider organizational culture carefully |
| Success Can Create Problems | Growth strained administrative relationships | Plan for scaling challenges |
| Spin-outs as Strategy | GovAI escaped by becoming independent | Independence may be worth pursuing early |

For the AI Safety Field

FHI's closure coincides with AI safety becoming mainstream. As Bostrom noted, "There is now a much broader support base for the kind of work FHI was set up to enable, so the institute essentially served its purpose." The question is whether the distributed ecosystem of organizations can match FHI's record of fundamental advances.

Open Questions

  • Did FHI's administrative troubles reflect fixable problems or inherent tensions between academic institutions and existential risk research?
  • Will the distributed ecosystem of FHI successor organizations be as productive as the concentrated institute?
  • What institutional models best support long-term, speculative research?

Related Pages

Top Related Pages

Concepts

Nick Bostrom · AI Alignment · GovAI · AI Governance · Toby Ord · Coefficient Giving

Key Debates

The Case For AI Existential Risk · The Case Against AI Existential Risk · Is AI Existential Risk Real?

Models

Carlsmith's Six-Premise Argument

Organizations

CSER (Centre for the Study of Existential Risk) · Centre for Long-Term Resilience