
What do ML researchers think about AI in 2022? - AI Impacts


Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: AI Impacts

A landmark 2022 survey of ML researcher opinions on AI timelines and risks; frequently cited in AI safety discussions as evidence of expert concern and uncertainty. A follow-up to AI Impacts' earlier 2016 survey.

Metadata

Importance: 82/100 · organizational report · dataset

Summary

AI Impacts surveyed hundreds of machine learning researchers to gauge expert opinion on AI progress, timelines, and risks. The survey covers beliefs about transformative AI timelines, the probability of catastrophic outcomes, and views on AI safety research priorities. It is one of the most comprehensive and widely cited surveys of ML researcher opinion on AI development trajectories.

Key Points

  • Surveyed ~700 ML researchers who published at top venues (NeurIPS, ICML) about AI timelines, risks, and progress
  • The aggregate forecast put a 50% chance of high-level machine intelligence (HLMI) at ~37 years out (2059), with significant variance across respondents
  • 48% of respondents assigned at least a 10% probability to an extremely bad outcome from advanced AI, such as human extinction
  • 69% of respondents said society should prioritize AI safety research "more" or "much more" than it currently does, up from 49% in 2016
  • Results show notable disagreement among experts, highlighting uncertainty about both capabilities timelines and risk severity

Cited by 1 page

Page        Type          Quality
AI Impacts  Organization  53.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 11 KB
What do ML researchers think about AI in 2022? – AI Impacts
 Katja Grace, 4 August 2022 

 AI Impacts just finished collecting data from a new survey of ML researchers, as similar to the 2016 one as practical, aside from a couple of new questions that seemed too interesting not to add.

 This page reports on it preliminarily, and we’ll be adding more details there. But so far, some things that might interest you:

 
37 years until a 50% chance of HLMI according to a complicated aggregate forecast (and biasedly not including data from questions about the conceptually similar Full Automation of Labor, which in 2016 prompted strikingly later estimates). This 2059 aggregate HLMI timeline has become about eight years shorter in the six years since 2016, when the aggregate prediction was 2061, or 45 years out. Note that all of these estimates are conditional on "human scientific activity continu[ing] without major negative disruption."
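As an aside, the arithmetic here is easy to misread: the "eight years shorter" refers to the forecast horizon (45 years down to 37), not the calendar date, which moved only two years (2061 to 2059). A minimal sketch in Python, using only the survey years and horizons stated above:

    # Years-until-50%-chance-of-HLMI horizons reported by the two surveys.
    surveys = {2016: 45, 2022: 37}

    for year, horizon in surveys.items():
        print(f"{year} survey: 50% chance of HLMI by {year + horizon}")
    # 2016 survey: 50% chance of HLMI by 2061
    # 2022 survey: 50% chance of HLMI by 2059

    # "Eight years shorter" is the change in horizon, not in calendar date:
    print(surveys[2016] - surveys[2022], "years shorter horizon")  # 8
    print((2016 + surveys[2016]) - (2022 + surveys[2022]),
          "years earlier calendar date")                           # 2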

P(extremely bad outcome) = 5%: The median respondent believes the probability that the long-run effect of advanced AI on humanity will be "extremely bad (e.g., human extinction)" is 5%. This is the same as it was in 2016 (though Zhang et al. 2022 found 2% in a similar but non-identical question). Many respondents put the chance substantially higher: 48% of respondents gave at least 10% chance of an extremely bad outcome, though another 25% put it at 0%.
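As a sanity check that a 5% median is consistent with 48% of answers at or above 10% and 25% at exactly 0%, here is a toy distribution in Python; the specific response values are illustrative, not the survey data:

    import statistics

    # 100 hypothetical respondents: 25 say 0%, 27 say 5%, 30 say 10%, 18 say 30%.
    responses = [0.0] * 25 + [0.05] * 27 + [0.10] * 30 + [0.30] * 18

    print(statistics.median(responses))                        # 0.05 -> 5% median
    print(sum(r >= 0.10 for r in responses) / len(responses))  # 0.48 -> 48% at >= 10%
    print(sum(r == 0.0 for r in responses) / len(responses))   # 0.25 -> 25% at 0%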

Explicit P(doom) = 5–10%: The levels of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions about human extinction explicitly. The median respondent's probability of x-risk from humans failing to control AI [1] was 10%, weirdly more than the median chance of human extinction from AI in general [2], at 5%. This might just be because different people got these questions and the median is quite near the divide between 5% and 10%. The most interesting thing here is probably that these are both very high: it seems the 'extremely bad outcome' numbers in the old question were not just catastrophizing merely disastrous AI outcomes.
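The split-sample explanation can be illustrated with a small simulation: when the population median sits right on the divide between 5% and 10%, the medians of two random halves easily land on different sides of it. The response pool below is purely hypothetical:

    import random
    import statistics

    random.seed(0)

    # Hypothetical pool clustering at 5% and 10%, so the overall median
    # (0.075) sits exactly on the divide between the two values.
    pool = [0.0] * 20 + [0.05] * 30 + [0.10] * 30 + [0.50] * 20

    half_medians = set()
    for _ in range(1000):
        random.shuffle(pool)
        half = pool[: len(pool) // 2]  # the random half that got this question
        half_medians.add(statistics.median(half))

    print(sorted(half_medians))  # typically includes 0.05, 0.075, and 0.10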

Support for AI safety research is up: 69% of respondents believe society should prioritize AI safety research "more" or "much more" than it is currently prioritized, up from 49% in 2016.

 The median respondent thinks there is an “about even chance” that an argument given for an intelligence explosion is broadly correct. The median respondent also believes machine intelligence will probably (60%) be “vastly better than humans at all professions” within 30 years of HLMI, and that the rate of global technological improvement will probably (80%) dramatically increase (e.g., by a factor of ten) as a result of machine intelligence within 30 years of HLMI.

 Years/probabilities framing effect persists: if you a

... (truncated, 11 KB total)
Resource ID: 1594ec74ad11d254 | Stable ID: NWM3NWRhYj