Longterm Wiki

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: AI Impacts

This is the 2023 edition of AI Impacts' recurring expert survey, one of the most cited empirical sources on AI researcher beliefs about timelines, risks, and safety priorities; results are frequently referenced in policy and research discussions.

Metadata

Importance: 88/100 · Tags: organizational report, dataset

Summary

A large-scale survey of 2,778 AI researchers found a median 5% (mean 16%) probability of human extinction or permanent severe disempowerment from AI. Experts also significantly accelerated their timelines, estimating a 50% chance of AI outperforming humans in all tasks by 2047—13 years sooner than the 2022 survey estimate—and 70% believed AI safety should receive greater prioritization.

Key Points

  • Median 5% and mean 16% probability assigned to AI causing human extinction or permanent severe disempowerment of humanity.
  • Experts moved up their estimate for human-level AI in all tasks by 13 years, now predicting 50% probability by 2047 vs. 2060 in 2022.
  • 1-in-10 experts assigned at least 25% probability to extremely bad outcomes like extinction; 1% assigned 75% or higher.
  • 70% of respondents believe AI safety research should be prioritized more than it currently is.
  • Top concerns include AI-enabled disinformation/deepfakes, authoritarian empowerment, and biosecurity risks from AI-assisted weapons development.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| The Case For AI Existential Risk | Argument | 66.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 8 KB
# Survey: Median AI expert says 5% chance of human extinction from AI

BERKELEY, CALIFORNIA: In a new survey of 2,778 AI experts, respondents gave a median 5% chance that AI would cause human extinction.

In the survey conducted by AI Impacts, a Berkeley-based think tank, in collaboration with researchers at the University of Bonn and University of Oxford, experts offered a median response of 5% for the chance of future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species. Mean responses indicated an even higher risk, suggesting a nearly one-in-six (16%) chance of catastrophic outcomes — the same odds as dying in a game of Russian roulette.
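The gap between the median (5%) and mean (16%) reflects a right-skewed distribution of responses: a minority of respondents assigning very high probabilities pulls the mean well above the median. A minimal sketch with invented, hypothetical response data (not the survey's actual responses) illustrates the effect:

```python
import statistics

# Hypothetical extinction-probability responses (as fractions), invented
# for illustration only. A few high-end answers skew the distribution.
responses = [0.0, 0.01, 0.02, 0.05, 0.05, 0.05, 0.10, 0.30, 0.50, 0.75]

median = statistics.median(responses)  # middle value, robust to the tail
mean = statistics.mean(responses)      # pulled upward by high-end answers

print(f"median = {median:.2f}, mean = {mean:.2f}")
# → median = 0.05, mean = 0.18
```

With skewed data like this, the median answers "what does the typical respondent think?" while the mean weights the pessimistic tail heavily, which is why the survey reports both.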

All respondents had recently published a paper in one of six top peer-reviewed AI venues, demonstrating that this is a prevailing view among experts about the potential risks of an emerging technology, the inner workings of which are poorly understood. One in ten respondents put at least a 25% chance on extremely bad outcomes like human extinction, with 1% of respondents thinking there was at least a 75% chance of such outcomes.

Experts also expressed particular concern over the spread of disinformation (including deepfakes), the potential of AI to exacerbate authoritarianism, and its ability to assist dangerous groups in making powerful tools like engineered viruses.

The survey also found that powerful AI may arrive much sooner than many people expect. Experts now see a 50% chance of AI models outperforming humans in every task by 2047 — just over 20 years from now — assuming no major disruption to scientific activity. Notably, this estimated date is 13 years sooner than the date arrived at by a similar survey in 2022, in which respondents estimated such AI would not arrive until 2060.

When asked a similar question — when all occupations would be fully automatable — experts had a much longer time horizon, saying there was a 50% chance of this happening by 2116, 92 years from now. Even so, this represents an advance of 48 years on their 2022 prediction: in 2022, experts thought this wouldn't happen until 2164.

Experts expect systems will be able to do many tasks long before then, though. They thought there was a 50% chance that AI models would be able to produce a new song indistinguishable from a Taylor Swift song by 2027, and a 50% chance AI could write NYT best-selling fiction by 2031. They also predicted a 50% chance that AI models would be able to perform as well as the best humans in the prestigious Putnam math competition within eight years.

In response to the rapid advance of potentially dangerous AI, 70% of experts thought that AI safety should be prioritised more than it currently is, with 36% saying it should be prioritised “more” and a further 34% saying it should be prioritised “much more.”

Commenting on the results, Katja Grace, lead researcher at AI Impacts, said: “These results show t

... (truncated, 8 KB total)
Resource ID: 3b5912fe113394f3 | Stable ID: N2I3MTY4ZT