Back
Forecasting Research Institute
forecastingresearch.org · forecastingresearch.org/research
FRI's research on forecasting methodology is a useful reference for AI safety researchers interested in how to rigorously quantify and communicate uncertainty about AI risks and transformative AI timelines.
Metadata
Importance: 52/100
Summary
The Forecasting Research Institute (FRI) conducts empirical research on forecasting methodologies, judgment aggregation, and the use of prediction markets and expert elicitation to improve decision-making under uncertainty. Their work is particularly relevant to AI safety and governance insofar as it informs how we assess and communicate risks from emerging technologies. FRI aims to make forecasting tools more rigorous and widely applicable to high-stakes domains.
Key Points
- FRI studies how forecasting and prediction aggregation methods can improve accuracy in complex, high-stakes domains, including AI risk.
- Research covers judgment aggregation, superforecasting, and structured expert elicitation to reduce uncertainty in long-horizon predictions.
- Their work informs AI governance efforts by providing better tools for estimating probabilities of transformative or catastrophic AI outcomes.
- FRI collaborates with policy and research communities to translate forecasting insights into actionable guidance.
- The institute's output is relevant to operationalizing AI risk estimates used in safety roadmaps and governance frameworks.
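To give a concrete sense of the judgment-aggregation methods mentioned above, here is a minimal sketch of one standard pooling technique, the geometric mean of odds with optional extremization. This is a generic illustration of the class of methods studied in this literature, not FRI's specific methodology; the function name and the extremization parameter are illustrative assumptions.

```python
import math

def pool_forecasts(probs, extremize=1.0):
    """Aggregate probability forecasts via the geometric mean of odds.

    probs: individual probability forecasts, each strictly in (0, 1).
    extremize: exponent applied in log-odds space; values > 1 push the
        pooled estimate away from 0.5, a common correction for
        forecasters sharing overlapping information.
    """
    # Convert each probability to log-odds and average them.
    log_odds = [math.log(p / (1 - p)) for p in probs]
    pooled = extremize * sum(log_odds) / len(log_odds)
    # Map the pooled log-odds back to a probability.
    return 1 / (1 + math.exp(-pooled))

# Three forecasters' estimates of the same event.
print(pool_forecasts([0.10, 0.20, 0.40]))
```

Averaging in log-odds space, rather than averaging raw probabilities, gives less weight to a single extreme forecast near 0 or 1 and has performed well empirically in forecasting-aggregation studies.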
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Instrumental Convergence | Risk | 64.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 8 KB
### The Longitudinal Expert AI Panel (LEAP)
> LEAP is a three-year project tracking the views of leading computer scientists, AI industry professionals, policy researchers, and economists on the trajectory of artificial intelligence. Every month, LEAP participants provide thousands of forecasts on key AI progress indicators including benchmarks, labor market impacts, and scientific discovery.
>
> For more about LEAP, and to view reports from each month of surveys and analysis of every question, visit the [LEAP website](https://leap.forecastingresearch.org/).
### ForecastBench
ForecastBench is a dynamic, contamination-free benchmark of large language model (LLM) forecasting accuracy. The benchmark compares the performance of LLMs to both the general public and superforecasters, and it serves as a valuable proxy for general intelligence. Originally launched in September 2024, ForecastBench received a major update in October 2025 and is now open to public submissions.
For more about ForecastBench and to see the latest leaderboard, visit [www.forecastbench.org](http://www.forecastbench.org/).
### Assessing Near-Term Accuracy in the Existential Risk Persuasion Tournament
This report assesses the accuracy of short-term forecasts made during the Existential Risk Persuasion Tournament (XPT), a 2022 study that convened 169 superforecasters and domain experts to make predictions on long-term risks including AI, climate change, nuclear war, and pandemics.
Find more about near-term accuracy in the XPT [here](https://forecastingresearch.org/near-term-xpt-accuracy).
### Forecasting Biosecurity Risks from LLMs
This forecasting study on biological risks from large language models (LLMs) examined expert views on AI-enabled biosecurity threats. In the study, 46 biosecurity and biology experts, along with 22 superforecasters, predicted how advancing LLM capabilities might increase the risk of a human-caused epidemic. The median expert estimated that certain AI milestones would increase the annual risk of a human-caused epidemic killing more than 100,000 people from 0.3% to 1.5%. The study found that mitigation measures, including synthetic DNA screening and AI safeguards, could reduce risks almost back to baseline levels.
Find more about the biological risks forecasting study, including the preprint, [here](https://forecastingresearch.org/ai-enabled-biorisk).
### Existential Risk Persuasion Tournament (XPT)
The XPT explores potential threats to humanity in this century, with a focus on artificial intelligence, biosecurity, climate, and nuclear arms. In the first tournament, over 200 experts and highly skilled forecasters worked individually and in teams to craft forecasts and persuasive explanations. The initial tournament ran from June-October 2022.
Find more about the XPT, including the full policy report for the 2022 tournament, [here](https://forecastingresearch.org/xpt).
### Adversarial collaboration on AI risk
The adversaria
... (truncated, 8 KB total)
Resource ID: bcb075f246413790 | Stable ID: MTE0YzAwMG