The Good Judgment Project: A Large Scale Test - Semantic Scholar
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Semantic Scholar
Foundational forecasting research relevant to AI safety timelines and risk estimation; GJP methods have influenced how researchers approach uncertain predictions about transformative AI development.
Metadata
Summary
The Good Judgment Project (GJP) was a large-scale forecasting research initiative that tested whether aggregated human predictions could outperform intelligence analysts and prediction markets. It identified 'superforecasters' — individuals with exceptional predictive accuracy — and demonstrated that structured forecasting techniques significantly improve geopolitical and probabilistic predictions.
Key Points
- GJP demonstrated that aggregated crowd forecasts can outperform professional intelligence analysts with access to classified information.
- Identified 'superforecasters' — a subset of forecasters with consistently superior accuracy across diverse geopolitical questions.
- Showed that training forecasters in probabilistic thinking and Bayesian updating significantly improves prediction accuracy.
- Provided empirical evidence that epistemic calibration and structured forecasting methodologies are learnable skills.
- Relevant to AI safety as a methodology for evaluating expert predictions about transformative AI timelines and risks.
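The aggregation claim above can be illustrated with a toy Brier-score comparison. This is only a minimal sketch: the forecaster probabilities and outcomes below are invented, and the unweighted mean is a simplification of GJP's actual aggregation methods (which used weighting and extremization).

```python
# Toy illustration of forecast aggregation, scored with the Brier score
# (mean squared error between probabilities and 0/1 outcomes; lower is better).
# All numbers are hypothetical; this is not GJP's actual pipeline.

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Three hypothetical forecasters' probabilities for five binary events
forecasters = [
    [0.7, 0.2, 0.9, 0.4, 0.6],
    [0.6, 0.4, 0.8, 0.3, 0.7],
    [0.8, 0.1, 0.7, 0.5, 0.5],
]
outcomes = [1, 0, 1, 0, 1]  # what actually happened

# Simple aggregate: unweighted mean of the individual probabilities per event
aggregate = [sum(col) / len(col) for col in zip(*forecasters)]

individual_scores = [brier_score(f, outcomes) for f in forecasters]
print("individual Brier scores:", [round(s, 3) for s in individual_scores])
print("aggregate Brier score:  ", round(brier_score(aggregate, outcomes), 3))
```

By convexity of the squared error, the mean forecast always scores at least as well as the average individual forecaster, which is one reason simple aggregation is a strong baseline.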
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Good Judgment (Forecasting) | Organization | 50.0 |