Intelligence Explosion Microeconomics
Web Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: MIRI
A foundational MIRI technical report by Eliezer Yudkowsky attempting to formalize AI takeoff dynamics; important for understanding the theoretical basis of fast-takeoff arguments and concerns about discontinuous capability growth in AI safety discourse.
Metadata
Summary
Eliezer Yudkowsky's technical monograph formalizing the microeconomic and decision-theoretic foundations of the 'intelligence explosion' hypothesis, examining how self-improving AI systems might rapidly increase in capability. It analyzes the conditions under which recursive self-improvement leads to discontinuous jumps in intelligence, drawing on economic concepts like returns to scale and optimization pressure. The paper attempts to provide rigorous grounding for claims about AI takeoff dynamics that had previously been argued informally.
Key Points
- Formalizes the intelligence explosion concept using microeconomic frameworks, analyzing returns on cognitive reinvestment in self-improving systems.
- Examines conditions for fast vs. slow AI takeoff, including whether capability gains compound rapidly or level off due to diminishing returns.
- Introduces the concept of 'optimization daemons' and recursive self-improvement loops as key mechanisms for potential discontinuous capability jumps.
- Argues that small differences in initial capability or values could be amplified enormously through recursive improvement, raising alignment stakes.
- Provides theoretical grounding for MIRI's concern that superintelligent AI could emerge rapidly and without adequate human control mechanisms.
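The first two key points can be illustrated with a toy growth model of returns on cognitive reinvestment. The recurrence below, the exponent `k`, and every parameter value are assumptions chosen purely for illustration; none of them are taken from the report itself:

```python
# Toy model of "returns on cognitive reinvestment" -- a hypothetical
# recurrence for illustration; the functional form and all parameter
# values are assumptions, not drawn from the report.
#
# Each step the system converts its current capability I into an
# improvement of size c * I**k. With k > 1 the gains compound (the
# hard-takeoff regime); with k < 1 returns diminish and growth slows.

def reinvestment_trajectory(k: float, c: float = 0.01,
                            i0: float = 1.0, steps: int = 150) -> list[float]:
    """Iterate I <- I + c * I**k and return the whole trajectory."""
    traj = [i0]
    for _ in range(steps):
        i = traj[-1]
        traj.append(i + c * i ** k)
    return traj

compounding = reinvestment_trajectory(k=1.5)  # superlinear returns
diminishing = reinvestment_trajectory(k=0.5)  # sublinear returns

print(f"k=1.5 after 150 steps: {compounding[-1]:.2f}")
print(f"k=0.5 after 150 steps: {diminishing[-1]:.2f}")
```

In the continuous analogue dI/dt = c·I^k, an exponent k > 1 even produces finite-time blow-up, which is one formal reading of a "discontinuous" capability jump, while k < 1 yields only polynomial growth.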
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Timelines | Concept | 95.0 |
| Eliezer Yudkowsky | Person | 35.0 |
Cached Content Preview
# MIRI
MACHINE INTELLIGENCE RESEARCH INSTITUTE
## Intelligence Explosion Microeconomics
### Eliezer Yudkowsky
_Machine Intelligence Research Institute_
### Abstract
I. J. Good’s thesis of the “intelligence explosion” states that a sufficiently advanced machine intelligence could build a smarter version of itself, which could in turn build an even smarter version, and that this process could continue to the point of vastly exceeding human intelligence. As Sandberg (2010) correctly notes, there have been several attempts to lay down return on investment formulas intended to represent sharp speedups in economic or technological growth, but very little attempt has been made to deal formally with Good’s intelligence explosion thesis as such. I identify the key issue as returns on cognitive reinvestment—the ability to invest more computing power, faster computers, or improved cognitive algorithms to yield cognitive labor which produces larger brains, faster brains, or better mind designs. There are many phenomena in the world which have been argued to be evidentially relevant to this question, from the observed course of hominid evolution, to Moore’s Law, to the competence over time of machine chess-playing systems, and many more. I go into some depth on some debates which then arise on how to interpret such evidence. I propose that the next step in analyzing positions on the intelligence explosion would be to formalize return on investment curves, so that each stance can formally state which possible microfoundations they hold to be falsified by historical observations. More generally, I pose multiple open questions of “returns on cognitive reinvestment” or “intelligence explosion microeconomics.” Although such questions have received little attention thus far, they seem highly relevant to policy choices affecting outcomes for Earth-originating intelligent life.

Yudkowsky, Eliezer. 2013. Intelligence Explosion Microeconomics. Technical Report 2013-1. Berkeley, CA: Machine Intelligence Research Institute. Last modified September 13, 2013.
### Contents
**1 The Intelligence Explosion: Growth Rates of Cognitive Reinvestment 1**
1.1 On (Extensionally) Defining Terms 7
1.2 Issues to Factor Out 11
1.3 AI Preferences: A Brief Summary of Core Theses 12
**2 Microfoundations of Growth 14**
2.1 The Outside View versus the Lucas Critique 19
**3 Some Defenses of a Model of Hard Takeoff 28**
3.1 Returns on Brain Size 35
3.2 One-Time Gains 39
3.3 Returns on Speed 43
3.4 Returns on Population 50
3.5 The Net Efficiency of Human Civilization 53
3.6 Returns on Cumulative Evolutionary Selection Pressure 56
3.7 Relating Curves of Evolutionary Difficulty and Engineering Difficulty 61
3.8 Anthropic Bias in Our Observation of Evolved Hominids 64
3.9 Local versus Distributed Intelligence Explosi
... (truncated, 98 KB total)