FHI expert elicitation
Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Future of Humanity Institute.
This FHI publication page relates to expert elicitation work on AI timelines and intervention effectiveness; limited content was available for analysis, so details are inferred from FHI's known research focus and associated tags.
Metadata
Importance: 45/100
Tags: organizational report, reference
Summary
This resource from the Future of Humanity Institute (FHI) at Oxford involves expert elicitation surveys focused on AI development timelines, capability thresholds, and prioritization of interventions. It aggregates forecasts from researchers to inform understanding of when transformative AI might arrive and what safety measures may be most effective.
Key Points
- Aggregates expert forecasts on AI capability milestones and development timelines from FHI-affiliated researchers.
- Provides a structured elicitation methodology to reduce individual bias and improve collective forecasting accuracy.
- Addresses prioritization of AI safety interventions based on projected timelines and risk scenarios.
- Relevant for informing policy and research strategy decisions in AI safety and governance.
- Part of FHI's broader research program on existential risk and long-term futures.
Cited by 4 pages
| Page | Type | Quality |
|---|---|---|
| AI Safety Intervention Effectiveness Matrix | Analysis | 73.0 |
| AI Risk Activation Timeline Model | Analysis | 66.0 |
| AI Risk Interaction Network Model | Analysis | 64.0 |
| AI Proliferation | Risk | 60.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 31, 2026 | 48 KB
Selected Publications Archive - Future of Humanity Institute
Archived by the Wayback Machine: http://web.archive.org/web/20250810192452/https://www.fhi.ox.ac.uk/publications/
Selected Publications
2021
Future Proof – The opportunity to transform the UK’s resilience to extreme risks (Toby Ord, Angus Mercer, Sophie Dannreuther)
Biosecurity risks associated with vaccine platform technologies (Jonas Sandbrink, Gregory Koblentz)
Promoting versatile vaccine development for emerging pandemics (Joshua Monrad, Jonas Sandbrink, Neil Cherian)
Safety and security concerns regarding transmissible vaccines (Jonas Sandbrink, Matthew Watson, Andrew Hebbeler, Kevin Esvelt)
RNA Vaccines: A Suitable Platform for Tackling Emerging Pandemics? (Jonas Sandbrink, Robin Shattock)
Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI (Cremer and Whittlestone) (also best paper at EPAI 2020)
QNRs: Toward Language for Intelligent Machines (Eric Drexler)
What is the Upper Limit of Value? (Anders Sandberg, David Manheim)
Institutionalizing ethics in AI through broader impact requirements (Carina Prunkl, Carolyn Ashurst, Markus Anderljung, Helena Webb, Jan Leike & Allan Dafoe)
Fully General Online Imitation Learning (Michael K Cohen, Marcus Hutter, Neel Nanda)
Agent Incentives: A Causal Perspective (Tom Everitt, Ryan Carey, Eric Langlois, Pedro A Ortega, Shane Legg)
Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice (Lewis Hammond, James Fox, Tom Everitt, Alessandro Abate, Michael Wooldridge)
Reputations for Resolve and Higher-Order Beliefs in Crisis Bargaining (Allan Dafoe, Remco Zwetsloot, Matthew Cebul)
Rapid Proliferation of Pandemic Research: Implications for Dual-Use Risks (Sriharshita Musunuri, Jonas B. Sandbrink, Joshua Teperowski Monrad, Megan J. Palmer, Gregory D. Koblentz)
Protocols and risks: when less is more (Jaspreet Pannu, Jonas B. Sandbrink, Matthew Watson, Megan J. Palmer & David A. Relman)
2020
The Incentives that Shape Behaviour. (Ryan Carey, Eric Langlois, Tom Everitt, and Shane Legg, SafeAI@AAAI)
Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter (Owen Cotton‐Barratt, Max Daniel & Anders Sandberg in Global Policy; DOI 10.1111/1758-5899.1
... (truncated, 48 KB total)
Resource ID: d6955ff937bf386d | Stable ID: NDkxOTAzZT