Longterm Wiki

Will Compute Bottlenecks Prevent an Intelligence Explosion?

paper

Authors

Parker Whitfill · Cheryl Wu

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Empirical economic study directly relevant to debates about AI takeoff speed and recursive self-improvement; useful for those evaluating compute-as-a-bottleneck arguments in AI safety and forecasting contexts.

Paper Details

Citations
1
0 influential
Year
2025

Metadata

Importance: 62/100 · arXiv preprint · analysis

Abstract

The possibility of a rapid, "software-only" intelligence explosion brought on by AI's recursive self-improvement (RSI) is a subject of intense debate within the AI community. This paper presents an economic model and an empirical estimation of the elasticity of substitution between research compute and cognitive labor at frontier AI firms to shed light on the possibility. We construct a novel panel dataset for four leading AI labs (OpenAI, DeepMind, Anthropic, and DeepSeek) from 2014 to 2024 and fit the data to two alternative Constant Elasticity of Substitution (CES) production function models. Our two specifications yield divergent results: a baseline model estimates that compute and labor are substitutes, whereas a 'frontier experiments' model, which accounts for the scale of state-of-the-art models, estimates that they are complements. We conclude by discussing the limitations of our analysis and the implications for forecasting AI progress.

Summary

This paper uses economic production function models and a novel panel dataset from four leading AI labs (2014-2024) to empirically estimate whether compute and cognitive labor are substitutes or complements in AI research. The key finding is that model specification determines the answer: a baseline model suggests substitutability (enabling recursive self-improvement), while a frontier-experiments model suggests complementarity (constraining it). The divergent results underscore deep uncertainty about whether hardware constraints could halt a software-only intelligence explosion.
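Both specifications are variants of the standard CES form. As a sketch in standard notation (this is the generic textbook parameterization, not the paper's exact specification):

$$
Y = \left(\alpha C^{\rho} + (1-\alpha)\, L^{\rho}\right)^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho},
$$

where $C$ is research compute, $L$ is cognitive labor, and $\sigma$ is the elasticity of substitution: $\sigma > 1$ means the inputs are substitutes (scaling labor alone can keep driving research output), while $\sigma < 1$ means they are complements (output is eventually capped by the scarcer input, here compute).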

Key Points

  • Constructs a novel panel dataset from OpenAI, DeepMind, Anthropic, and DeepSeek (2014-2024) to estimate elasticity of substitution between compute and cognitive labor.
  • Two CES production function specifications yield conflicting results: baseline suggests compute/labor are substitutes; frontier model suggests they are complements.
  • If compute and labor are complements, recursive self-improvement would eventually bottleneck on hardware, preventing a rapid software-only intelligence explosion.
  • Findings highlight fundamental empirical uncertainty in forecasting AI progress trajectories and the plausibility of near-term superintelligence scenarios.
  • Engages directly with prominent RSI forecasts (AI-2027, Situational Awareness) and provides an empirical economic framework to evaluate their assumptions.
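The two estimated regimes behave very differently when compute is held fixed while AI cognitive labor scales up. A minimal numerical sketch (the `alpha` and `sigma` values below are illustrative, not the paper's estimates):

```python
def ces_output(compute, labor, alpha=0.5, sigma=2.0):
    """CES production: Y = (alpha*C^rho + (1-alpha)*L^rho)^(1/rho),
    where rho = (sigma - 1) / sigma and sigma is the elasticity of substitution."""
    rho = (sigma - 1.0) / sigma
    if abs(rho) < 1e-12:  # sigma = 1 is the Cobb-Douglas limit
        return compute**alpha * labor**(1 - alpha)
    return (alpha * compute**rho + (1 - alpha) * labor**rho) ** (1.0 / rho)

# Hold compute fixed at 1 unit and let (AI) cognitive labor grow 100x.
fixed_compute = 1.0
subs = ces_output(fixed_compute, 100.0, sigma=2.0)   # substitutes: sigma > 1
comp = ces_output(fixed_compute, 100.0, sigma=0.5)   # complements: sigma < 1

# With substitutes, output keeps rising as labor scales (software-only
# explosion remains possible); with complements, output plateaus near a
# ceiling set by the fixed compute stock (hardware bottleneck binds).
print(subs, comp)
```

With these illustrative parameters, the substitutes case yields roughly a 30x output gain from 100x labor, while the complements case stays below 2x regardless of how much labor is added, which is the sense in which complementarity would constrain a software-only intelligence explosion.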

Cited by 1 page

| Page | Type | Quality |
|---|---|---|
| Self-Improvement and Recursive Enhancement | Capability | 69.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 37 KB
# Will Compute Bottlenecks Prevent an Intelligence Explosion?

Parker Whitfill, Cheryl Wu
MIT, whitfill@mit.edu · Yale University, cheryl.wu@yale.edu

(June 1, 2025)

###### Abstract

The possibility of a rapid, "software-only" intelligence explosion brought on by AI's recursive self-improvement (RSI) is a subject of intense debate within the AI community. This paper presents an economic model and an empirical estimation of the elasticity of substitution between research compute and cognitive labor at frontier AI firms to shed light on the possibility. We construct a novel panel dataset for four leading AI labs (OpenAI, DeepMind, Anthropic, and DeepSeek) from 2014 to 2024 and fit the data to two alternative Constant Elasticity of Substitution (CES) production function models. Our two specifications yield divergent results: a baseline model estimates that compute and labor are substitutes, whereas a 'frontier experiments' model, which accounts for the scale of state-of-the-art models, estimates that they are complements. We conclude by discussing the limitations of our analysis and the implications for forecasting AI progress.

††We thank commenters on the Effective Altruism Forum for feedback on the post version of this article. We would especially like to thank Basil Halprin and Phillip Trammel for valuable feedback.

## 1 Intro

There have recently been significant advancements in the capabilities of Artificial Intelligence (AI) in domains such as coding and mathematics. Given that these skills are fundamental to AI research and development, this progress has raised the prospect of leveraging AI to accelerate AI research itself, a process termed recursive self-improvement (RSI). For example, Google DeepMind's AlphaEvolve is an LLM-based AI agent that discovered algorithmic advances that reduced LLM training time by 1% (Google DeepMind, [2025](https://ar5iv.labs.arxiv.org/html/2507.23181#bib.bib4 "")).

Consequently, many industry insiders have argued that we are on the cusp of an intelligence explosion via recursive self-improvement, in which AIs perform AI research to train smarter models that do even more research to train even smarter models, and so on. For example, the CEO of Meta described the prospect of an intelligence explosion as "compelling" (Patel, [2025](https://ar5iv.labs.arxiv.org/html/2507.23181#bib.bib8 "")). Similarly, the CEO of Anthropic noted that "because AI systems can eventually help make even smarter AI systems, a temporary lead could be parlayed into a durable advantage" (Amodei, [2025](https://ar5iv.labs.arxiv.org/html/2507.23181#bib.bib1 "")). Influential essays such as 'Situational Awareness' (Aschenbrenner, [2025](https://ar5iv.labs.arxiv.org/html/2507.23181#bib.bib2 "")) and 'AI-2027' (Kokotajlo et al., [2025](https://ar5iv.labs.arxiv.org/html/2507.23181#bib.bib7 "")), authored by former OpenAI researchers, project the emergence of superintelligence through recursive self-improvement by the end of this decade and by 2027, respectively.

... (truncated, 37 KB total)