Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Future of Humanity Institute

FHI at Oxford was one of the foundational institutions in AI safety and existential risk research; this page documents its research agenda circa 2019, useful for understanding the intellectual origins of many key ideas in the field.

Metadata

Importance: 45/100 · homepage

Summary

This page outlines the major research areas pursued by the Future of Humanity Institute (FHI) at Oxford University, covering existential risk, AI safety, macrostrategy, and human enhancement. It serves as a hub for understanding FHI's interdisciplinary approach to long-term risks facing humanity. The institute applies philosophy, mathematics, and social sciences to identify and mitigate catastrophic and existential risks.

Key Points

  • FHI focuses on existential and global catastrophic risks, including those from advanced AI, biotechnology, and other emerging technologies.
  • Research areas include AI safety and alignment, macrostrategy (long-run trajectory of civilization), and biosecurity.
  • The institute takes an interdisciplinary approach combining philosophy, economics, mathematics, and empirical science.
  • FHI's work informs global policy discussions on AI governance, arms control, and risk mitigation strategies.
  • The page reflects FHI's 2019 research priorities, providing historical context for how the field's agenda has evolved.

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Mar 31, 2026 · 24 KB
Research Areas - Future of Humanity Institute
 
The Wayback Machine - http://web.archive.org/web/20240407050811/https://www.fhi.ox.ac.uk/research/research-areas/



Research Areas

Academics at FHI bring the tools of mathematics, philosophy, social sciences, and the natural sciences to bear on big-picture questions about humanity and its prospects. Our mission is to shed light on crucial considerations that might shape humanity’s long-term future.

We currently divide our work into four focus areas: Macrostrategy, AI Safety, AI Governance, and Biosecurity.

Macrostrategy

Overview

Investigating which crucial considerations are shaping what is at stake for the future of humanity

FHI's big-picture research focuses on the long-term consequences of our actions today and the complicated dynamics that are bound to shape our future in significant ways. A key aspect of this is the study of existential risks – events that endanger the survival of Earth-originating intelligent life or that threaten to drastically and permanently destroy our potential for realising a valuable future. Our focus within this area lies in the capabilities and impacts of future technologies (including the possibility and impact of Artificial General Intelligence, or 'Superintelligence'), existential risk assessment, anthropics, population ethics, human enhancement ethics, game theory, and consideration of the Fermi paradox.

Many of the core concepts and techniques within this field originate from research by FHI scholars and are already having a practical impact in the effective altruism movement.

Featured macrostrategy publications

The Vulnerable World Hypothesis (2019)

Nick Bostrom

Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some econo

... (truncated, 24 KB total)
Resource ID: 902320774d220a6c | Stable ID: NDE2YmI4Zm