Thousands of AI Authors on the Future of AI (2023 Expert Survey)
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: AI Impacts
This is one of the largest and most-cited surveys of AI researcher opinion on safety and timelines, and it serves as a key empirical reference for understanding expert consensus and disagreement on transformative AI risks as of 2022-2023.
Summary
A large-scale survey of 2,778 AI researchers published by AI Impacts in 2023 examines expert predictions on AI milestone timelines, transformative AI risks, and potential societal impacts. Respondents expressed significant concern about catastrophic and existential risks from advanced AI, with many believing there is a non-trivial probability of very bad outcomes. The survey updates and expands on prior AI Impacts forecasting work.
Key Points
- A survey of 2,778 ML researchers finds a median estimate of roughly a 50% chance of high-level machine intelligence arriving within decades, with wide uncertainty.
- A substantial fraction of respondents (>30%) assigned 10% or more probability to AI causing outcomes "catastrophic or worse" for humanity.
- Researchers expressed concern about AI-enabled disinformation, autonomous weapons, and loss of human control as near-term risks.
- Many researchers believe the AI safety field is underfunded and that alignment research deserves significantly more attention.
- The survey provides a rare large-sample empirical baseline for expert opinion on AI timelines and risks, useful for calibrating forecasts.