Long Timelines to Advanced AI Have Changed My Mind
Blog · Credibility Rating
Mixed quality. Some useful content but inconsistent editorial standards. Claims should be verified.
Rating inherited from publication venue: Substack
Written by Helen Toner (CSET, former OpenAI board), this post is notable for its insider perspective on how expert consensus on AI timelines has shifted, making it relevant for governance and strategy discussions in the AI safety community.
Metadata
Summary
Helen Toner, former OpenAI board member, argues that AI timelines have compressed so dramatically that the old debate between 'short' (10-20 year) and 'long' timelines is now obsolete. With leading AI company heads forecasting AGI-like systems by 2026-2027, she contends the urgent challenge is no longer whether to prepare for advanced AI but how to govern and respond to its near-term arrival.
Key Points
- What was once considered 'short timeline' thinking (10-20 years) now seems slow compared to current predictions of human-level AI within 1-5 years.
- Scaling laws and reasoning models have been key drivers compressing expert forecasts for advanced AI capabilities.
- Major AI lab leaders (e.g., at OpenAI, Anthropic) are publicly forecasting AGI-like systems arriving around 2026-2027.
- The old framing of 'should we prepare for advanced AI?' is obsolete; the new question is how to govern imminent near-term risks.
- This timeline shift has profound implications for AI governance, policy urgency, and the prioritization of safety research.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Short AI Timeline Policy Implications | Analysis | 62.0 |
Cached Content Preview
# [Rising Tide](https://helentoner.substack.com/)

# "Long" timelines to advanced AI have gotten crazy short
### The prospect of reaching human-level AI in the 2030s should be jarring
[Helen Toner](https://substack.com/@helentoner)
Apr 01, 2025
_Welcome to Rising Tide! I’m publishing 3 posts this week to celebrate the launch of this Substack—this is post #1. New posts will be more intermittent after this week. Subscribe to get them straight in your inbox:_
[Image] [Source](https://x.com/HumanHarlan/status/1866216644089205039)
It used to be a bold claim, requiring strong evidence, to argue that we might see anything like human-level AI any time in the first half of the 21st century. [This 2016 post](https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/), for instance, spends 8,500 words justifying the claim that there is a greater than 10% chance of advanced AI being developed by 2036.
_(Arguments about timelines typically refer to “timelines to AGI,” but throughout this post I’ll mostly refer to “advanced AI” or “human-level AI” rather than “AGI.” In my view, “AGI” as a term of art tends to confuse more than it clarifies, since different experts use it in such different ways.[1](https://hel
... (truncated, 18 KB total)