Skip to content
Longterm Wiki
Back

AI Scaling Laws Are Showing Diminishing Returns, Forcing AI Labs to Change Course

web

Credibility Rating

3/5
Good(3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: TechCrunch

A November 2024 TechCrunch report summarizing industry consensus that pretraining scaling has hit diminishing returns; relevant for understanding shifts in AI capability trajectories and their implications for safety and timeline forecasting.

Metadata

Importance: 55/100 · news article · news

Summary

AI labs are confronting diminishing returns from traditional compute and data scaling (pretraining), prompting a shift toward 'test-time compute' scaling—giving models more computational resources to reason during inference. Industry figures including Ilya Sutskever, Marc Andreessen, and Satya Nadella have acknowledged this transition, pointing to OpenAI's o1 model as exemplifying the new paradigm.

Key Points

  • Traditional scaling laws (more compute + more data = better models) are showing diminishing returns, with model improvement rates slowing at leading labs.
  • Ilya Sutskever and Marc Andreessen publicly acknowledged the ceiling, noting convergence in model capabilities across the industry.
  • 'Test-time compute' scaling—letting models 'think longer' before responding—is emerging as the next major scaling paradigm, exemplified by OpenAI's o1.
  • Microsoft CEO Satya Nadella and a16z's Anjney Midha declared a 'second era of scaling laws' centered on inference-time optimization.
  • This shift has implications for AGI timelines and the capital assumptions underpinning massive compute investments by leading labs.

Cited by 3 pages

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 15 KB
Current AI scaling laws are showing diminishing returns, forcing AI labs to change course | TechCrunch 
 Image Credits: Bryce Durbin / TechCrunch 
 Maxwell Zeff 
 6:00 AM PST · November 20, 2024 
 AI labs traveling the road to super-intelligent systems are realizing they might have to take a detour.

 “AI scaling laws,” the methods and expectations that labs have used to increase the capabilities of their models for the last five years, are now showing signs of diminishing returns, according to several AI investors, founders, and CEOs who spoke with TechCrunch. Their sentiments echo recent reports that indicate models inside leading AI labs are improving more slowly than they used to.

 Everyone now seems to be admitting you can’t just use more compute and more data while pretraining large language models and expect them to turn into some sort of all-knowing digital god. Maybe that sounds obvious, but these scaling laws were a key factor in developing ChatGPT, making it better, and likely influencing many CEOs to make bold predictions about AGI arriving in just a few years.
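The diminishing returns described here can be illustrated with the power-law form that scaling-law research typically fits to pretraining loss. The sketch below is illustrative only: the constants are hypothetical, not taken from the article or from any published fit.

```python
# Illustrative only: a power law of the form L(N) = a * N**(-alpha) + c,
# where N is model size. The constants are made up for demonstration;
# c stands for an irreducible loss floor that scaling cannot remove.
a, alpha, c = 400.0, 0.34, 1.7

def loss(n_params: float) -> float:
    """Pretraining loss predicted by the toy power law."""
    return a * n_params ** (-alpha) + c

# Each doubling of model size buys a smaller absolute improvement,
# which is the "diminishing returns" pattern the article describes.
for n in [1e9, 2e9, 4e9, 8e9]:
    gain = loss(n) - loss(2 * n)
    print(f"{n/1e9:4.0f}B -> {2*n/1e9:4.0f}B params: loss drops by {gain:.4f}")
```

Under any power law of this shape, the marginal gain per doubling shrinks monotonically while the cost of each doubling grows, which is why labs look for other axes to scale.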

 OpenAI and Safe Superintelligence co-founder Ilya Sutskever told Reuters last week that “everyone is looking for the next thing” to scale their AI models. Earlier this month, a16z co-founder Marc Andreessen said in a podcast that AI models currently seem to be converging at the same ceiling on capabilities.

 But now, almost immediately after these concerning trends started to emerge, AI CEOs, researchers, and investors are already declaring we’re in a new era of scaling laws. “Test-time compute,” which gives AI models more time and compute to “think” before answering a question, is an especially promising contender to be the next big thing.
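One simple form of test-time compute (not necessarily what o1 does internally, which OpenAI has not fully disclosed) is to sample several candidate answers and keep the most common one. The toy model below, `noisy_solver`, is a hypothetical stand-in for an imperfect model:

```python
import random
from collections import Counter

def noisy_solver(x: int, rng: random.Random) -> int:
    """Stand-in for a model: usually returns x * 2, sometimes an off-by-one error."""
    answer = x * 2
    return answer if rng.random() < 0.7 else answer + rng.choice([-1, 1])

def best_of_n(x: int, n: int, seed: int = 0) -> int:
    """Spend more inference-time compute: draw n samples, then majority-vote."""
    rng = random.Random(seed)
    votes = Counter(noisy_solver(x, rng) for _ in range(n))
    return votes.most_common(1)[0][0]

print(best_of_n(21, n=1))    # a single sample may be wrong
print(best_of_n(21, n=25))   # majority vote over 25 samples is far more reliable
```

The point of the sketch: accuracy improves with inference-time sampling even though the underlying model is unchanged, which is the sense in which "thinking longer" becomes a new scaling axis.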

 “We are seeing the emergence of a new scaling law,” said Microsoft CEO Satya Nadella onstage at Microsoft Ignite on Tuesday, referring to the test-time compute research underpinning OpenAI’s o1 model.

 He’s not the only one now pointing to o1 as the future.

... (truncated, 15 KB total)
Resource ID: 1ed975df72c30426 | Stable ID: NjNkNjNiNj