What the hell happened with AGI timelines in 2025?
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: 80,000 Hours
A 2026 retrospective podcast episode useful for understanding how AI capability forecasting evolved through 2025, relevant to anyone tracking AGI timelines and their implications for safety and career decisions.
Summary
80,000 Hours host Rob Wiblin analyzes the dramatic swing in AGI timeline forecasts during 2025: first contracting sharply after OpenAI's o1/o3 reasoning models, then expanding back out as limitations became apparent. The episode synthesizes perspectives from researchers including Toby Ord, Dwarkesh Patel, and Epoch AI to explain what drove these shifts and what the uncertainty means for career planning and existential risk.
Key Points
- Early 2025 saw AGI timeline compression after the o1/o3 reasoning models impressed researchers, driving widespread short-timeline sentiment.
- The second half of 2025 reversed this trend, with timelines expanding even beyond pre-reasoning-model baselines due to RL scaling inefficiencies and other bottlenecks.
- Key technical factors included the extreme inefficiency of RL for frontier models, the limits of inference-scaling, and the "continual learning" challenge.
- Despite the timeline expansion, the episode argues "radical pessimists" who dismiss near-term AGI risk are still wrong, as even long timelines are short in planning terms.
- Recorded January 2026, it synthesizes analyses from Epoch AI, METR, Metaculus, and individual researchers to ground the discussion empirically.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Timelines | Concept | 95.0 |
Cached Content Preview
What the hell happened with AGI timelines in 2025? | 80,000 Hours

On this page:
1. Introduction
2. Articles, books, and other media discussed in the show
3. Transcript
   - Making sense of the timelines madness in 2025 [00:00:00]
   - The great timelines contraction [00:00:46]
   - Why timelines went back out again [00:02:10]
   - Other longstanding reasons AGI could take a good while [00:11:13]
   - So what's the upshot of all of these updates? [00:14:47]
   - 5 reasons the radical pessimists are still wrong [00:16:54]
   - Even long timelines are short [00:23:54]
4. Learn more
5. Related episodes

In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of 2025, sentiment swung all the way back in the other direction, with people's forecasts for when AI might really shake up the world blowing out even further than they had been before reasoning models came along.

What the hell happened? Was it just swings in vibes and mood? Confusion? A series of fundamentally unexpected and unpredictable research results? Host Rob Wiblin has been trying to make sense of it for himself, and here's the best explanation he's come up with so far.

This episode was recorded on January 29, 2026.
Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Dominic Armstrong
Coordination, transcripts, and web: Katy Moore

Articles, books, and other media discussed in the show

Toby Ord's analyses:
- The extreme inefficiency of RL for frontier models
- Evidence that recent AI gains are mostly from inference-scaling
- How well does RL scale?
- Are the costs of AI agents also rising exponentially?
- The scaling series on the EA Forum
- See all Toby's writing on his website

Epoch AI data:
- AI capabilities progress has sped up
- Epoch Capabilities Index
- How well did forecasters predict 2025 AI progress?
- METR Time Horizons
- Can AI companies become profitable?

Other predictions and analyses:
- Metaculus forecast: When will the first general AI system be devised, tested, and publicly announced?
- AI 2027 Model: Dec 2025 Update
- The case for AGI by 2030 by Benjamin Todd
- Timelines forecast used by AI 2027 authors now
- Leading AI expert delays timeline for its possible destruction of humanity — The Guardian coverage
- What's up with Anthropic predicting AGI by early 2027? by Ryan Greenblatt
- Is 90% of code at Anthropic being written by AIs? by Ryan Greenblatt
- My AGI timeline updates from GPT-5 (and 2025 so far) by Ryan Greenblatt
- Thoughts on AI progress (Dec 2025) by Dwarkesh Patel
- AGI is still 30 years away — Ege Erdil & Tamay Besiroglu — appearance on the Dwarkesh Podcast

The "continual learning" challenge:
- Why I don't think AGI is right around the corner by Dwarkesh Patel — also reviewed by Zvi Mowshowitz:
... (truncated, 162 KB total)