Longterm Wiki

Future of Humanity Institute

web
futureofhumanityinstitute.org/

The Future of Humanity Institute (FHI) was Oxford University's pioneering multidisciplinary research group on existential risk, AI alignment, and long-term futures, founded by Nick Bostrom in 2005 and closed in 2024. It was foundational in establishing AI safety and existential risk as legitimate academic fields.

Metadata

Importance: 78/100

Summary

FHI was a multidisciplinary Oxford research institute founded by Nick Bostrom that pioneered research on existential risk, AI alignment, effective altruism, and longtermism over 19 years before closing in April 2024. It produced landmark works including Bostrom's 'Superintelligence' and Ord's 'The Precipice', and helped establish several new research fields. The institute's closure followed administrative conflicts with Oxford's Faculty of Philosophy, though its intellectual legacy continues through alumni and successor organizations.

Key Points

  • Founded in 2005 by Nick Bostrom at Oxford; closed April 2024 after administrative conflicts with the Faculty of Philosophy froze hiring and fundraising from 2020.
  • Pioneered or significantly advanced fields including existential risk, AI alignment, AI governance, effective altruism, longtermism, and moral uncertainty.
  • Produced major books including Bostrom's 'Superintelligence' (2014), Ord's 'The Precipice' (2020), and MacAskill et al.'s 'Moral Uncertainty' (2020).
  • FHI alumni and ideas now permeate leading AI labs, government agencies, nonprofits, and academic research centers worldwide.
  • Demonstrated that rigorous academic research on big-picture questions about humanity's long-term future was possible and valuable.

Cited by 1 page

Page                          Type          Quality
Future of Humanity Institute  Organization  51.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 9 KB
Future of Humanity Institute (2005–2024)

 Established in 2005, initially for a 3-year period, the Future of Humanity Institute was a multidisciplinary research group at Oxford University. It was founded by Prof Nick Bostrom and brought together a select set of researchers from disciplines such as philosophy, computer science, mathematics, and economics to study big-picture questions for human civilization, attempting to shield them from ordinary academic pressures and create an organizational culture conducive to creativity and intellectual progress.

 During its 19-year existence, the team at FHI made a series of research contributions that helped change our conversation about the future and contributed to the creation of several new fields and paradigms. FHI was involved in the germination of a wide range of ideas including existential risk, effective altruism, longtermism, AI alignment, AI governance, global catastrophic risk, grand futures, information hazards, the unilateralist’s curse, and moral uncertainty. It also did significant work on anthropics, human enhancement ethics, systemic risk modeling, forecasting and prediction markets, the search for extraterrestrial intelligence, and on the attributes and strategic implications of key future technologies. One major contribution was in showing that it was even possible to do rigorous research on big-picture questions about humanity’s future.

 Over time FHI faced increasing administrative headwinds within the Faculty of Philosophy (the Institute’s organizational home). Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down.

 Over the course of its nineteen years, FHI inspired the emergence of a vibrant ecosystem of organizations where the kinds of questions that FHI investigated can be explored. FHI alumni will continue to research these questions both within Oxford and at other places around the world. Topics that once struggled to eke out a precarious existence at the margins of a single philosophy department are now pursued by leading AI labs, government agencies, nonprofits, and specialized academic research centers (with many more in the process of creation).

 Resources

 FHI’s final tech report — an oral history of the institute

 A collection of FHI’s technical reports and other online pieces 

 Historical snapshots of FHI’s official website at the Internet Archive

 An extensive list of articles by FHI (or mentioning FHI) via Google Scholar

 A magazine article by journalist Tom Ough about the history of FHI

 An in-depth chapter about FHI from Tom Ough’s book The Anti-Catastrophe League 

 FHI Books

 Global Catastrophic Risks, Nick Bostrom and Milan M. Ćirković

... (truncated, 9 KB total)
Resource ID: 884c6a1d92408561