Longterm Wiki

Credibility Rating

High (4/5)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Future of Humanity Institute

A 2014 FHI working paper by Cotton-Barratt and Ord that helped establish the 'takeoff speed' framing in AI safety discourse; notable for shifting focus from predicting takeoff speed to evaluating which speed is strategically preferable.

Metadata

Importance: 62/100 · working paper · analysis

Summary

Owen Cotton-Barratt and Toby Ord analyze the strategic implications of fast versus slow AI takeoff scenarios, arguing that slow takeoffs are generally more desirable because they allow more time for safety research, broader societal awareness, and multipolar power distributions. The paper examines how takeoff speed affects the value of safety work done today and the likelihood of achieving good outcomes.

Key Points

  • Slow takeoffs allow safety researchers to better understand the specific nature of the threat and optimize their work accordingly, potentially multiplying its value.
  • Slow takeoffs enable widespread societal awareness of AGI risks, potentially generating support comparable to climate change and vastly more safety work.
  • Slow takeoffs are more likely to produce multipolar scenarios, preserving existing power balances and reducing concentration of transformative AI capabilities.
  • Fast takeoff scenarios compress the window for corrective action, making early safety investments riskier due to uncertainty about final AI form.
  • The paper distinguishes takeoff speed from likelihood questions, focusing instead on the desirability question and its implications for present-day strategy.

Cached Content Preview

HTTP 200 · Fetched Mar 31, 2026 · 7 KB
Strategic considerations about different speeds of AI takeoff - Future of Humanity Institute
Archived by the Wayback Machine: http://web.archive.org/web/20250717152745/https://www.fhi.ox.ac.uk/strategic-considerations-about-different-speeds-of-ai-takeoff/

Strategic considerations about different speeds of AI takeoff

Posted on 12 August 2014 (updated 2 November 2016)

Owen Cotton-Barratt and Toby Ord

There are several different kinds of artificial general intelligence (AGI) that might be developed, and different scenarios could play out after one of them reaches a roughly human level of ability across a wide range of tasks. We shall discuss some of the implications we can see for these different scenarios, and what they might tell us about how we should act today.

A key difference between different types of post-AGI scenario is the ‘speed of takeoff’. This could be thought of as the time between first reaching a near human-level artificial intelligence and reaching one that far exceeds our capacities in almost all areas (or reaching a world where almost all economically productive work is done by artificial intelligences). In fast takeoff scenarios, this might happen over a scale of months, weeks, or days. In slow takeoff scenarios, it might take years or decades. There has been considerable discussion about which speed of takeoff is more likely, but less discussion about which is more desirable and what that implies.

Are slow takeoffs more desirable?

There are a few reasons to think that we’re more likely to get a good outcome in a slow takeoff scenario.

First, safety work today has a problem of nearsightedness. Since we don’t know quite what form artificial intelligence will eventually take, specific work today may end up being of no help with the problem we eventually face. In a slow takeoff scenario, there would be a period of time in which AGI safety researchers had a much better idea of the nature of the threat and could optimise their work accordingly. This could make their work several times more valuable.

Second, and perhaps more crucially, in a slow takeoff the concerns about AGI safety are likely to spread much more widely through society. It is easy to imagine this producing widespread societal support at or exceeding the level of support for work on climate change, because the issue would be seen as imminent. This could translate into much more work on securing a good outcome.

... (truncated, 7 KB total)
Resource ID: 2e70c8bf22b57596 | Stable ID: NTFlMzYzNj