defines slow takeoff
web · sideways-view.com/2018/02/24/takeoff-speeds/
A widely cited post by Paul Christiano that introduced precise terminology for AI takeoff speed debates, frequently referenced in AI safety discussions to distinguish between gradual and discontinuous paths to transformative AI.
Metadata
Importance: 78/100 · blog post · primary source
Summary
Paul Christiano's influential blog post defining and distinguishing 'slow takeoff' from 'fast takeoff' scenarios for AI development. It argues that a slow takeoff—where AI capabilities grow gradually over years or decades—is more likely than a sudden jump, and explores the implications of each scenario for AI safety and societal preparedness.
Key Points
- Defines 'slow takeoff' as a scenario where AI capabilities and economic impact grow gradually, giving society time to adapt and course-correct.
- Contrasts slow takeoff with 'fast takeoff' (hard takeoff/FOOM), where a single AI system rapidly self-improves to vastly superhuman levels in a short period.
- Argues slow takeoff is more probable based on historical trends in technology and the incremental nature of ML progress.
- Slow takeoff does not eliminate existential risk but changes its character—competitive dynamics and misalignment could still be catastrophic.
- The distinction matters for prioritization: slow takeoff favors technical alignment research that can iterate, while fast takeoff demands solving alignment before deployment.
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 52 KB
# [The sideways view](https://sideways-view.com/)
## Looking askance at reality
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).
I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.
(Note: this is _not_ a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go [along these lines](https://sideways-view.com/2017/10/04/hyperbolic-growth/). So e.g. while I disagree with many of the claims and assumptions in [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf), I don’t disagree with the central thesis or with most of the arguments.)
(See also: [AI Impacts page](https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/) on the same topic.)
### Slow takeoff
#### **Slower takeoff means faster progress**
Fast takeoff is often justified by pointing to the incredible transformative potential of intelligence; by enumerating the many ways in which AI systems will outperform humans; by pointing to historical examples of rapid change; _etc._
This gives the impression that people who expect a slow takeoff think AI will have a smaller impact, or will take longer to transform society.
But I think that’s backwards. The main disagreement is not about what will happen once we have a superintelligent AI, it’s about what will happen _before_ we have a superintelligent AI. So slow takeoff seems to mean that AI has a larger impact on the world, sooner.

In the fast takeoff scenario, weaker AI systems may have significant impacts but they are nothing compared to the “real” AGI. Whoever builds AGI has a decisive strategic advantage. Growth accelerates from 3%/year to 3000%/year without stopping at 30%/year. And so on.
In the slow takeoff scenario, pre-AGI systems have a transformative impact that’s only slightly smaller than AGI. AGI appears in a world where everything already happens incomprehensibly quickly and everyone is incredibly powerful. Being 12 months ahead in AGI might get you a decisive strategic advantage, but the world has accelerated so much that that’s just about as hard as getting to airplanes 30 years before anyone else.
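The growth rates in the fast-takeoff picture above translate into very different doubling times for world output. A quick sketch of the arithmetic (the rates are the ones named in the text; the helper function is just for illustration):

```python
import math

def doubling_time(annual_growth):
    """Years for output to double at a constant annual growth rate,
    assuming annual compounding: solve (1 + g)^t = 2 for t."""
    return math.log(2) / math.log(1 + annual_growth)

# Rates from the fast-takeoff jump described above: 3%/yr, 30%/yr, 3000%/yr.
for rate in (0.03, 0.30, 30.0):
    print(f"{rate:>5.0%} growth -> doubling every {doubling_time(rate):.2f} years")
```

At 3%/year the world economy doubles roughly every 23 years; at 30%/year, every 2.6 years; at 3000%/year, in under three months — which is why skipping the intermediate regime is such a stark claim.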
#### **Operationalizing slow takeoff**
_There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we'll see an 8 year doubling before a 2 year doubling, etc.)_
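This criterion can be checked mechanically against any series of annual world-output figures. A minimal sketch — the accelerating growth trajectory below is a made-up illustration, not data from the post:

```python
def first_end(output, span):
    """Earliest year by which some `span`-year interval has seen world
    output at least double; None if that never happens in the series."""
    for end in range(span, len(output)):
        if output[end] >= 2 * output[end - span]:
            return end
    return None

# Hypothetical smooth acceleration: growth starts at 3%/yr and rises 25%/yr.
growth = [0.03 * 1.25 ** t for t in range(40)]
output = [1.0]
for g in growth:
    output.append(output[-1] * (1 + g))

four_end = first_end(output, 4)  # end of the first 4-year doubling interval
one_end = first_end(output, 1)   # end of the first 1-year doubling interval
# Under smooth acceleration the 4-year doubling completes well before the
# first 1-year doubling, satisfying the slow-takeoff criterion above.
print(four_end, one_end, four_end < one_end)
```

A discontinuous jump straight to very fast growth would instead produce a 1-year doubling with no completed 4-year doubling before it.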
... (truncated, 52 KB total)