
Announcing "Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament"


Author

Forecasting Research Institute

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

This is the announcement and summary of the XPT working paper. It is a key empirical data point on how expert and superforecaster risk estimates diverge, especially for AI, and is relevant to debates about whose forecasts to trust for long-run AI safety planning.

Forum Post Details

Karma
161
Comments
35
Forum
EA Forum
Forum Tags
Existential risk, Forecasting, Organization updates
Part of sequence: Results from the Existential Risk Persuasion Tournament

Metadata

Importance: 72/100 · blog post · primary source

Summary

The Forecasting Research Institute announces results from the Existential-Risk Persuasion Tournament (XPT), which gathered probabilistic forecasts on existential risks from domain experts and superforecasters. Key findings reveal substantial disagreement: the median domain expert estimated a 20% probability of catastrophe by 2100 versus 9% for the median superforecaster, with the largest gap on AI risk. Notably, higher intersubjective accuracy correlated with lower risk estimates, and few participants updated their views despite monetary incentives designed to reward persuasion.

Key Points

  • The median domain expert estimated a 20% chance of catastrophe and a 6% chance of extinction by 2100; the median superforecaster estimated 9% and 1% respectively, a large systematic divergence.
  • AI risk showed the greatest expert–superforecaster disagreement (in percentage-point terms); nuclear war risk showed the least divergence between the two groups.
  • Risk estimates were highly correlated across domains: participants who rated AI risk higher also rated bio and nuclear risks higher.
  • Forecasters with higher intersubjective accuracy (skill at predicting other participants' views) consistently estimated lower existential risks across all categories; a minimal sketch of this metric follows this list.
  • Very few participants updated their views during the tournament despite monetary incentives designed to reward persuasion.
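
For readers unfamiliar with the metric, the sketch below shows one simple way "intersubjective accuracy" could be operationalized: score each forecaster by how closely their guesses of the group's median answers track the actual medians, then check whether that score is rank-correlated with the forecaster's own risk estimates. This is an illustration on synthetic data; the scoring rule (negative mean absolute error), all variable names, and all numbers are assumptions for exposition, not the XPT paper's actual methodology.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic illustration only: the scoring rule and numbers below are
# assumptions, not the XPT paper's actual methodology.
rng = np.random.default_rng(0)
n_forecasters, n_questions = 30, 5

# Each forecaster's own risk estimates (probabilities) for each question.
own_estimates = rng.uniform(0.0, 0.3, size=(n_forecasters, n_questions))

# Each forecaster's guess at the group's median answer for each question.
# The synthetic data is built so higher-risk forecasters make noisier
# guesses, reproducing the direction of the reported XPT pattern.
noise_scale = own_estimates.mean(axis=1, keepdims=True)
actual_medians = np.median(own_estimates, axis=0)
predicted_medians = actual_medians + rng.normal(
    0.0, noise_scale, size=(n_forecasters, n_questions)
)

# One simple scoring rule: negative mean absolute error, so higher = better.
intersubjective_accuracy = -np.mean(
    np.abs(predicted_medians - actual_medians), axis=1
)

# The reported finding corresponds to a negative rank correlation between
# intersubjective accuracy and a forecaster's own mean risk estimate.
rho, p = spearmanr(intersubjective_accuracy, own_estimates.mean(axis=1))
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```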

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 29 KB
Announcing "Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament" — EA Forum 
 

by Forecasting Research Institute · Jul 10, 2023 · 2 min read · 35 comments · 161 karma

This is a linkpost for "Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament," accessible here: https://forecastingresearch.org/s/XPT.pdf

 
Today, the Forecasting Research Institute (FRI) released "Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament", which describes the results of the Existential-Risk Persuasion Tournament (XPT).

The XPT, which ran from June through October of 2022, brought together forecasters from two groups with distinctive claims to knowledge about humanity's future: experts in various domains relevant to existential risk, and "superforecasters" with a track record of predictive accuracy over short time horizons. We asked tournament participants to predict the likelihood of global risks related to nuclear weapon use, biorisks, and AI, along with dozens of other related, shorter-run forecasts.

Some major takeaways from the XPT include:

  • The median domain expert predicted a 20% chance of catastrophe and a 6% chance of human extinction by 2100. The median superforecaster predicted a 9% chance of catastrophe and a 1% chance of extinction.
  • Superforecasters predicted considerably lower chances of both catastrophe and extinction than did experts, but the disagreement between experts and superforecasters was not uniform across topics. Experts and superforecasters were furthest apart (in percentage point terms) on AI risk, and most similar on the risk of nuclear war.
  • Predictions about risk were highly correlated across topics. For example, participants who gave higher risk estimates for AI also gave (on average) higher risk estimates for biorisks and nuclear weapon use.
  • Forecasters with higher "intersubjective accuracy" (i.e., those best at predicting the views of other participants) estimated lower probabilities of catastrophic and extinction risks from all sources.
  • Few minds were changed during the XPT, even among the most active participants, and despite monetary incentives for persuading others.

See the full working paper here: https://forecastingresearch.org/s/XPT.pdf

FRI hopes that the XPT will not only inform our understanding of existential risks, but will also advance the science of forecasting by:

  • Collecting a large set of forecasts resolving on a long timescale, in a rigorous setting. This will allow us to measure correlations between short-run (2024), medium-run (2030) and longer-run (2050) accuracy in the coming decades.
  • Exploring the use of bonus payments for participants who b

... (truncated, 29 KB total)
Resource ID: 48e3a3e49e5d0947 | Stable ID: ZDBmZmQ3ZG