Semi-Informative Priors Over AI Timelines
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Coefficient Giving
A key Open Philanthropy methodology paper influencing how the organization and others in the AI safety community think about AGI timeline uncertainty; frequently cited alongside biological anchors and other forecasting frameworks.
Metadata
Summary
This Open Philanthropy research piece by Tom Davidson develops a framework for constructing semi-informative prior probability distributions over when transformative AI might be developed, using historical base rates of technological breakthroughs and computational progress. It attempts to quantify uncertainty about AGI timelines in a principled way, combining outside-view evidence with minimal assumptions. The analysis produces probability distributions suggesting meaningful probability of transformative AI within decades.
Key Points
- Uses a 'semi-informative prior' approach that incorporates outside-view base rates (e.g., how often major technologies arise) rather than relying purely on subjective expert opinion.
- Models AI development as a trial-based process, estimating how many 'trials' humanity has had at developing transformative AI and updating probability accordingly.
- Combines multiple lines of evidence including historical R&D investment, compute scaling, and researcher effort to estimate timeline distributions.
- Finds roughly 10-20% probability of transformative AI by 2036 and higher cumulative probabilities by mid-century, though with large uncertainty bands.
- Intended as a complement to more detailed inside-view forecasting models, providing a sanity check grounded in historical base rates.
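The trial-based framing above can be illustrated with Laplace's rule of succession. The sketch below is a toy version, not the report's actual model (which generalizes this with a first-trial probability and alternative trial definitions, e.g. based on compute or researcher-years); it assumes, purely for illustration, one failed "trial" per calendar year since 1956.

```python
# Toy rule-of-succession timeline estimate.
# Assumption (not from the report): one "trial" per calendar year since
# 1956, all failed so far. Davidson's framework generalizes this with a
# first-trial probability and compute- or effort-based trial definitions.

def pr_agi_by(year, start=1956, now=2021):
    """Cumulative probability of a first success by `year`, given
    (now - start) failed trials, under Laplace's rule:
    P(success on next trial | n failures) = 1 / (n + 2)."""
    n = now - start  # failed trials observed so far
    k = year - now   # additional trials remaining before `year`
    # The product over k trials telescopes:
    # P(no success in next k trials) = (n + 1) / (n + k + 1)
    return 1 - (n + 1) / (n + k + 1)

print(round(pr_agi_by(2036), 3))  # prints 0.185
print(round(pr_agi_by(2100), 3))  # prints 0.545
```

Under these toy assumptions the cumulative probability by 2036 lands near the range quoted above, which is part of why the outside-view check is informative: even a maximally simple trial model yields non-trivial probability mass within decades.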
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Timelines | Concept | 95.0 |
Cached Content Preview
Semi-Informative Priors Over AI Timelines | Coefficient Giving

March 25, 2021

Semi-Informative Priors Over AI Timelines
By Tom Davidson

Editor's note: This article was published under our former name, Open Philanthropy. Some content may be outdated. You can see our latest writing here.

One of Open Phil's major focus areas is technical research and policy work aimed at reducing potential risks from advanced AI. As part of this, we aim to anticipate and influence the development and deployment of advanced AI systems. To inform this work, I have written a report developing one approach to forecasting when artificial general intelligence (AGI) will be developed. This is the full report. An accompanying blog post starts with a short non-mathematical summary of the report, and then contains a long summary.

Introduction

Executive summary

The goal of this report is to reason about the likely timing of the development of artificial general intelligence (AGI). By AGI, I mean computer program(s) that can perform virtually any cognitive task as well as any human, [1] Notice that this definition applies equally whether it is a single artificial agent that can perform all these tasks, or a collection of narrower systems working together. The 'single agent' perspective is the focus of Bostrom's Superintelligence, while Drexler (2019) argues that general … for no more money than it would cost for a human to do it. The field of AI is largely held to have begun in Dartmouth in 1956, and since its inception one of its central aims has been to develop AGI. [2] The proposal for the Dartmouth conference states that 'The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find …

I forecast when AGI might be developed using a simple Bayesian framework, and choose the inputs to this framework using commonsense intuitions and reference classes from historical technological developments. The probabilities in the report represent reasonable degrees of belief, not objective chances. One rough-and-ready way to frame our question is this: suppose you had gone into isolation in 1956 and only received annual updates about the inputs to AI R&D (e.g. # of researcher-years, amount of compute [3] 'Compute' means computation. In this report I operationalize this as the number of floating point operations (FLOP). used in AI R&D) and the binary fact that we have not yet built AGI. What would be a reasonable pr(AGI by year X) for you to have in 2021? There are many ways one could go about trying to determine pr(AGI by year X). Some are very judgment-driven and involve taking stances on difficult questions like "since AI research began in 1956, what percentage of the way are we to developing AGI?" or "what steps are needed to build
... (truncated, 843 KB total)