Longterm Wiki

Thousands of AI Authors on the Future of AI (2023 Expert Survey)

report

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: AI Impacts

This is one of the largest and most cited surveys of AI researcher opinions on safety and timelines; a key empirical reference for understanding expert consensus and disagreement on transformative AI risks as of 2022-2023.

Metadata

Importance: 82/100 · organizational report · primary source

Summary

A large-scale survey of 2,778 AI researchers, published by AI Impacts in 2023, examines expert predictions on AI milestone timelines, transformative AI risks, and potential societal impacts. Respondents expressed significant concern about catastrophic and existential risks from advanced AI, with many assigning a non-trivial probability to very bad outcomes. The survey updates and expands on prior AI Impacts forecasting work.

Key Points

  • Survey of 2,778 ML researchers finds a median estimate of roughly a 50% chance of high-level machine intelligence within the coming decades, with wide uncertainty across respondents.
  • A substantial fraction of respondents (over 30%) assigned a probability of 10% or more to AI causing outcomes that are "catastrophic or worse" for humanity.
  • Researchers expressed concern about AI-enabled disinformation, autonomous weapons, and loss of human control as near-term risks.
  • Many researchers believe the AI safety field is underfunded and that alignment research deserves significantly more attention.
  • The survey provides a rare large-sample empirical baseline for expert opinion on AI timelines and risks, useful for calibrating forecasts.

Review

This survey provides a large-sample view of AI researchers' perspectives on technological progress and potential risks. The study collected predictions across 39 AI task milestones, most of which respondents expected to become feasible within the next decade, and revealed striking uncertainty about AI's long-term implications. Aggregate forecasts assigned roughly a 10% chance to human-level AI capabilities arriving by 2027 and a 50% chance by 2047, a notable shift toward earlier expectations compared with previous survey rounds. The survey's key strength lies in its comprehensive approach: it sampled authors from top AI conferences and probed complex questions about technological progress, societal impacts, and existential risks. Notably, between 38% and 51% of respondents assigned at least a 10% probability to extinction-level risks from advanced AI. Respondents broadly agreed that AI safety research should be prioritized more, while disagreeing deeply about the precise nature and timeline of potential AI developments.