The Rationale-Shaped Hole At The Heart Of Forecasting
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: EA Forum
Relevant for AI safety researchers interested in forecasting methodology for AGI timelines and risk; critiques current forecasting practices and proposes structural improvements to preserve and leverage forecaster reasoning.
Forum Post Details
Metadata
Summary
This EA Forum post argues that forecasting systems systematically discard the reasoning and models behind predictions, losing more value than the final probability estimates themselves contain. The authors contend that adversarial collaboration and transparent reasoning are essential for improving forecast quality on complex topics like AGI risk, and introduce FutureSearch as a platform designed to make reasoning legible and central.
Key Points
- Forecasters' underlying rationales and models are often more valuable than their final probability estimates, but current incentive structures discard this reasoning.
- Widely divergent AGI forecasts persist partly because forecasting platforms aggregate point estimates without preserving the models and assumptions behind them.
- Adversarial collaboration on AI risk forecasting still devolves into 'vertically stacked arguments' without structured reasoning frameworks.
- Existing elite forecasting efforts (superforecasters, LLM pipelines) acknowledge weak argument generation but lack mechanisms to surface and critique underlying models.
- FutureSearch proposes making reasons and models first-class outputs of forecasting, enabling structured critique and improving epistemic quality.
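The second point above — that platforms aggregate point estimates while discarding the models behind them — can be illustrated with a minimal sketch. This is a hypothetical data structure, not FutureSearch's actual schema; the `Forecast` class and field names are invented for illustration.

```python
from dataclasses import dataclass, field
from statistics import median

# Hypothetical illustration (not FutureSearch's schema): a forecast record
# that carries its rationale and model assumptions alongside the number.
@dataclass
class Forecast:
    probability: float                 # final point estimate
    rationale: str = ""                # the reasoning behind the number
    assumptions: list = field(default_factory=list)  # explicit model assumptions

forecasts = [
    Forecast(0.10, "base rates dominate", ["no major capability jump"]),
    Forecast(0.40, "scaling trends continue", ["compute keeps growing rapidly"]),
    Forecast(0.25, "mixed evidence", []),
]

# Typical platform aggregation keeps only the numbers...
aggregate = median(f.probability for f in forecasts)

# ...while the rationales and assumptions — arguably the most valuable
# part — are lost unless preserved as first-class outputs:
preserved = [(f.probability, f.rationale, f.assumptions) for f in forecasts]

print(aggregate)  # 0.25
```

The aggregate (0.25 here) tells a reader nothing about why the three forecasters disagree; the `preserved` records are what would let a critic engage with the underlying models, which is the post's core argument.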
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| FutureSearch | Organization | 50.0 |
Cached Content Preview
The Rationale-Shaped Hole At The Heart Of Forecasting — EA Forum
by dschwarz, FutureSearch, Lawrence Phillips, Daniel Hnyk, Peter Mühlbacher · Apr 2, 2024 · 13 min read
Tags: Forecasting, Existential risk, Philosophy, Algorithmic Forecasting, Opinion, Frontpage
Thanks to Eli Lifland, Molly Hickman, Değer Turan, and Evan Miyazono for reviewing drafts of this post. The opinions expressed here are my own.
Summary:
Forecasters produce reasons and models that are often more valuable than the final forecasts
Most of this value is lost due to the historical practices and incentives of forecasting, and to the difficulty crowds have in "adversarially collaborating"
FutureSearch is a forecasting system with legible reasons and models at its core (examples at the end)
The Curious Case of the Missing Reasoning
Ben Landau-Taylor of Bismarck Analysis wrote a piece on March 6 called "Probability Is Not A Substitute For Reasoning", citing an earlier piece in which he writes:
There has been a great deal of research on what criteria must be met for forecasting aggregations to be useful, and as Karger, Atanasov, and Tetlock argue, predictions of events such as the arrival of AGI are a very long way from fulfilling them.
Last summer, Tyler Cowen wrote on AGI ruin forecasts:
Publish, publish, not on blogs, not long stacked arguments or six hour podcasts or tweet storms, no, rather peer review, peer review, peer review, and yes with models too... if you wish to convince your audience of one of the most radical conclusions of all time…well, more is needed than just a lot of vertically stacked arguments.
Widely divergent views and forecasts on AGI persist, leading to FRI’s excellent adversarial collaboration on forecasting AI risk this month. Reading it, I saw… a lot of vertically stacked arguments.
There have been other big advances in judgmental forecasting recently, on non-AGI AI, Covid-19 origins, and scientific progress. How well justified are the forecasts?
Feb 28: Steinhardt's lab's impressive paper on "Approaching Human-Level Forecasting with Language Models" (press). The pipeline rephrases the question, lists arguments, ranks them, adjusts for biases, and then guesses the forecast. They note "The m
... (truncated, 39 KB total)