Leopold Aschenbrenner (2024)
Credibility Rating
Mixed quality. Some useful content but inconsistent editorial standards. Claims should be verified.
Rating inherited from publication venue: Substack
Written by Benjamin Todd (80,000 Hours founder), this piece engages with one of the most prominent 2024 documents arguing for near-term AGI, making it useful context for understanding current debates about timelines and their strategic implications.
Metadata
Summary
Benjamin Todd reviews Leopold Aschenbrenner's 'Situational Awareness' essay series, analyzing its claims about accelerating AGI timelines, the plausibility of rapid capability gains, and implications for AI safety and strategy. The review assesses the evidence and reasoning behind Aschenbrenner's bullish timeline predictions and their significance for the AI safety community.
Key Points
- Engages critically with Aschenbrenner's 'Situational Awareness' essays, which predict rapid progression to AGI and superintelligence within this decade.
- Evaluates the key arguments for compressed timelines, including scaling laws, algorithmic progress, and anticipated compute growth.
- Considers strategic and policy implications if Aschenbrenner's timeline predictions are correct or approximately correct.
- Discusses how shorter timelines affect prioritization decisions for people working on AI safety and governance.
- Provides an 80,000 Hours perspective on how to respond to high-uncertainty but potentially high-stakes timeline forecasts.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| The Case For AI Existential Risk | Argument | 66.0 |
Cached Content Preview
# [Benjamin Todd](https://benjamintodd.substack.com/)

# Shortening AGI timelines: a review of expert forecasts
[Benjamin Todd](https://substack.com/@benjamintodd)
Apr 09, 2025
As a non-expert, it would be great if there were experts who could tell us when we should expect artificial general intelligence (AGI) to arrive.
Unfortunately, there aren’t.
There are only different groups of experts with different weaknesses.
This article is an overview of what five different types of experts say about when we’ll reach AGI, and what we can learn from them (that feeds into my [full article on forecasting AI](https://80000hours.org/agi/guide/when-will-agi-arrive/)).
In short:
- Every group shortened their estimates in recent years.
- AGI before 2030 seems within the range of expert opinion, even if many disagree.
- None of the forecasts seem especially reliable, so they neither rule in nor rule out AGI arriving soon.
[Figure] In four years, the mean estimate on Metaculus for when AGI will be developed has plummeted from 50 years to 5. There are problems with the definition used, but the graph reflects a broader pattern of declining estimates.
Here’s an overview of the five groups:
## **AI experts**
### **1\. Leaders of AI companies**
The leaders of AI companies [are sa
... (truncated, 21 KB total)