Longterm Wiki

Credibility Rating

2/5 (Mixed)

Mixed quality. Some useful content but inconsistent editorial standards. Claims should be verified.

Rating inherited from publication venue: Substack

Written by Benjamin Todd (80,000 Hours founder), this piece engages with one of the most prominent 2024 documents arguing for near-term AGI, making it useful context for understanding current debates about timelines and their strategic implications.

Metadata

Importance: 55/100 · Tags: blog post, commentary

Summary

Benjamin Todd reviews Leopold Aschenbrenner's 'Situational Awareness' essay series, analyzing its claims about accelerating AGI timelines, the plausibility of rapid capability gains, and implications for AI safety and strategy. The review assesses the evidence and reasoning behind Aschenbrenner's bullish timeline predictions and their significance for the AI safety community.

Key Points

  • Engages critically with Aschenbrenner's 'Situational Awareness' essays which predict rapid progression to AGI and superintelligence within this decade.
  • Evaluates the key arguments for compressed timelines, including scaling laws, algorithmic progress, and anticipated compute growth.
  • Considers strategic and policy implications if Aschenbrenner's timeline predictions are correct or approximately correct.
  • Discusses how shorter timelines affect prioritization decisions for people working on AI safety and governance.
  • Provides an 80,000 Hours perspective on how to respond to high-uncertainty but potentially high-stakes timeline forecasts.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| The Case For AI Existential Risk | Argument | 66.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 21 KB
# [Benjamin Todd](https://benjamintodd.substack.com/)


# Shortening AGI timelines: a review of expert forecasts

[Benjamin Todd](https://substack.com/@benjamintodd)

Apr 09, 2025


As a non-expert, I'd find it helpful if there were experts who could tell us when to expect artificial general intelligence (AGI) to arrive.

Unfortunately, there aren't.

There are only different groups of experts, each with different weaknesses.

This article is an overview of what five different types of experts say about when we’ll reach AGI, and what we can learn from them (that feeds into my [full article on forecasting AI](https://80000hours.org/agi/guide/when-will-agi-arrive/)).

In short:

- Every group shortened their estimates in recent years.

- AGI before 2030 seems within the range of expert opinion, even if many disagree.

- None of the forecasts seem especially reliable, so they neither rule in nor rule out AGI arriving soon.


[![Graph of forecasts of years to AGI](https://substackcdn.com/image/fetch/$s_!3C3I!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72c97509-d713-4cf6-8e65-2d61ee6bc314_2064x1489.png)](https://substackcdn.com/image/fetch/$s_!3C3I!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F72c97509-d713-4cf6-8e65-2d61ee6bc314_2064x1489.png) In four years, the mean estimate on Metaculus for when AGI will be developed has plummeted from 50 years to 5. There are problems with the definition used, but the graph reflects a broader pattern of declining estimates.

Here’s an overview of the five groups:

## **AI experts**

### **1\. Leaders of AI companies**

The leaders of AI companies [are sa

... (truncated, 21 KB total)
Resource ID: 9b2e0ac4349f335e | Stable ID: MTRmMTg2ZW