MIRI's recursive self-improvement analysis
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: MIRI
A foundational MIRI technical report by Yudkowsky applying microeconomic modeling to intelligence explosion scenarios; important background for understanding fast vs. slow takeoff debates and arguments about discontinuous capability gains in AI development.
Summary
Eliezer Yudkowsky's 2013 MIRI technical report provides a formal microeconomic framework for analyzing recursive self-improvement and intelligence explosions. It examines the conditions under which an AI system improving its own capabilities could lead to rapid, discontinuous capability gains, modeling optimization power, returns to cognitive reinvestment, and the factors governing takeoff speed and dynamics.
Key Points
- Introduces economic concepts such as "returns to scale" and "optimization power" to formally model recursive self-improvement dynamics
- Analyzes conditions for fast vs. slow takeoff scenarios based on the shape of returns to cognitive reinvestment
- Argues that the steepness of an intelligence explosion depends on hardware overhang, software optimization curves, and serial vs. parallel bottlenecks
- Provides theoretical grounding for why a sufficiently capable self-improving system could rapidly become vastly superhuman
- Frames intelligence explosion as an economic phenomenon amenable to formal analysis rather than purely speculative argument
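The fast vs. slow takeoff distinction in these key points can be illustrated with a toy growth model. This sketch is not taken from the report itself; it is a common simplification in which a system's capability \(I(t)\) grows in proportion to a power of its current capability:

```latex
\frac{dI}{dt} = k\, I^{\alpha}, \qquad k > 0,\ I(0) = I_0 > 0
```

The exponent \(\alpha\) plays the role of returns to cognitive reinvestment: for \(\alpha < 1\) growth is subexponential (a slow takeoff), for \(\alpha = 1\) growth is exponential, and for \(\alpha > 1\) the solution

```latex
I(t) = I_0 \left(1 - (\alpha - 1)\, k\, I_0^{\alpha-1}\, t\right)^{-\frac{1}{\alpha-1}}
```

diverges in finite time at \(t^{*} = \frac{1}{(\alpha-1)\, k\, I_0^{\alpha-1}}\), the stylized "fast takeoff" regime. The report's actual analysis is considerably richer, factoring in the hardware and bottleneck considerations listed above.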
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Corrigibility Failure Pathways | Analysis | 62.0 |
Cached Content Preview
The cached fetch of https://intelligence.org/2013/05/05/intelligence-explosion-microeconomics/ returned a 404 Not Found page: "Sorry, but we can't find what you were looking for."
c134150bb0c55e87 | Stable ID: N2ZiN2E3ZG