80,000 Hours AGI Timelines Review
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: 80,000 Hours
Published March 2025 by 80,000 Hours, this review is useful for understanding the current state of expert consensus on AGI timelines and is relevant for prioritizing AI safety work given potentially compressed timelines.
Metadata
Summary
A comprehensive synthesis by 80,000 Hours reviewing expert predictions on AGI timelines from multiple groups including AI lab leaders, researchers, and forecasters. The review finds a notable convergence toward shorter timelines, with many estimates suggesting AGI could arrive before 2030. Different expert communities that previously disagreed are now showing increasingly similar estimates.
Key Points
- Multiple expert groups—AI company leaders, academic researchers, and forecasters—are converging on shorter AGI timelines, many pointing to before 2030.
- Previously divergent expert communities are now showing more similar timeline estimates, suggesting broader consensus is forming.
- The review synthesizes data across forecasting platforms, surveys, and public statements from prominent figures in AI development.
- Shortened timelines have significant implications for AI safety planning, governance, and the urgency of alignment research.
- The convergence of predictions across groups with different methodologies and incentives adds credibility to near-term AGI concerns.
Review
Cited by 9 pages
| Page | Type | Quality |
|---|---|---|
| The Case For AI Existential Risk | Argument | 66.0 |
| Is AI Existential Risk Real? | Crux | 12.0 |
| AGI Development | -- | 52.0 |
| AGI Timeline | Concept | 59.0 |
| Novel / Unknown Approaches | Capability | 53.0 |
| AI Risk Critical Uncertainties Model | Crux | 71.0 |
| Foresight Institute | Organization | -- |
| Metaculus | Organization | 50.0 |
| Long-Timelines Technical Worldview | Concept | 91.0 |
Cached Content Preview
Shrinking AGI timelines: a review of expert forecasts | 80,000 Hours

Table of Contents
1 AI experts
1.1 1. Leaders of AI companies
1.2 2. AI researchers in general
2 Expert forecasters
2.1 3. Metaculus
2.2 4. Superforecasters in 2022 (XPT survey)
2.3 5. Samotsvety in 2023
3 Summary of expert views on when AGI will arrive
4 Learn more

As a non-expert, it would be great if there were experts who could tell us when we should expect artificial general intelligence (AGI) to arrive. Unfortunately, there aren't. There are only different groups of experts with different weaknesses. This article is an overview of what five different types of experts say about when we'll reach AGI, and what we can learn from them (that feeds into my full article on forecasting AI).

In short: Every group shortened their estimates in recent years. AGI before 2030 seems within the range of expert opinion, even if many disagree. None of the forecasts seem especially reliable, so they neither rule in nor rule out AGI arriving soon.

In four years, the mean estimate on Metaculus for when AGI will be developed has plummeted from 50 years to five years. There are problems with the definition used, but the graph reflects a broader pattern of declining estimates.

Here's an overview of the five groups:

AI experts

1. Leaders of AI companies

The leaders of AI companies are saying that AGI arrives in 2–5 years, and appear to have recently shortened their estimates. This is easy to dismiss. This group is obviously selected to be bullish on AI and wants to hype their own work and raise funding. However, I don't think their views should be totally discounted. They're the people with the most visibility into the capabilities of next-generation systems, and the most knowledge of the technology. And they've also been among the most right about recent progress, even if they've been too optimistic. Most likely, progress will be slower than they expect, but maybe only by a few years.

2. AI researchers in general

One way to reduce selection effects is to look at a wider group of AI researchers than those working on AGI directly, including in academia. This is what Katja Grace did with a survey of thousands of recent AI publication authors. The survey asked for forecasts of "high-level machine intelligence," defined as when AI can accomplish every task better or more cheaply than humans. The median estimate was a 25% chance in the early 2030s and 50% by 2047 — with some giving answers in the next few years and others hundreds of years in the future. The median estimate of the chance of an AI being able to do the job of an AI researcher by 2033 was 5%.[1] They were also aske
... (truncated, 121 KB total)