Tetlock: Superforecasting
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Good Judgment
Tetlock's superforecasting work is frequently cited in AI safety communities as a methodological foundation for rigorous reasoning about AI timelines, risk estimates, and capability predictions under deep uncertainty.
Summary
Philip Tetlock's superforecasting research demonstrates that trained individuals using systematic probabilistic reasoning can significantly outperform domain experts and prediction markets on geopolitical forecasting questions. The approach emphasizes calibration, updating beliefs promptly on new evidence, and aggregating the forecasts of diverse, well-calibrated individuals. These methods are directly applicable to forecasting AI development timelines and risks.
Key Points
- Superforecasters consistently outperform experts by using calibrated probabilistic reasoning rather than vague qualitative predictions.
- Key practices include breaking problems into components, actively updating on new evidence, and avoiding both overconfidence and underconfidence.
- Aggregating forecasts from diverse, well-calibrated individuals (crowd wisdom) improves accuracy beyond any single forecaster.
- The methodology has direct applications to AI capabilities forecasting, risk estimation, and evaluating safety-relevant timelines.
- Superforecasting principles can reduce uncertainty in domains critical to AI safety planning, such as predicting transformative AI development windows.
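Two of the points above, calibration scoring and forecast aggregation, can be made concrete with a short sketch. The snippet below uses the Brier score (the accuracy metric used in Tetlock's forecasting tournaments) on made-up example data; the forecaster names and probability values are illustrative assumptions, not figures from the research.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; a constant 0.5 forecast scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical binary questions (1 = event happened, 0 = it did not).
outcomes = [1, 0, 1, 0]

# Two forecasters with different, partly offsetting error patterns.
forecaster_a = [0.9, 0.4, 0.5, 0.1]
forecaster_b = [0.6, 0.1, 0.9, 0.5]

# Crowd wisdom: average the probabilities question by question.
aggregate = [(a + b) / 2 for a, b in zip(forecaster_a, forecaster_b)]

print(brier_score(forecaster_a, outcomes))  # individual accuracy
print(brier_score(forecaster_b, outcomes))
print(brier_score(aggregate, outcomes))     # lower than either individual
```

Because the two forecasters err on different questions, the simple average cancels some of each one's mistakes, which is the mechanism behind the aggregation claim above.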
Review
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI-Augmented Forecasting | Approach | 54.0 |