2023 Expert Survey on AI Risk
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: AI Impacts
This is a landmark empirical survey of AI researcher opinions on risk and timelines, frequently cited in AI safety discussions as evidence of expert consensus (or lack thereof) on existential risk from AI.
Summary
AI Impacts surveyed 2,778 AI researchers in 2023 about their expectations for AI timelines, risks, and the probability of catastrophic outcomes. The survey found significant concern among researchers about existential risk from AI, with median estimates for transformative AI arriving sooner than in prior surveys. It represents one of the largest and most cited expert elicitation efforts on AI risk.
Key Points
- Median estimate for high-level machine intelligence (HLMI) moved substantially earlier compared to the 2022 survey, reflecting accelerating expectations.
- Roughly half of respondents assigned 10% or greater probability to AI causing human extinction or severe disempowerment.
- Many researchers expressed concern that AI safety is under-prioritized relative to capabilities research.
- The survey covered timelines, catastrophic risk probabilities, governance preferences, and views on the current pace of AI development.
- Results highlight significant disagreement among experts, with wide variance in both risk estimates and timeline predictions.