Longterm Wiki

AI Timeline Surveys: A Comparative Analysis (1972–2016)


Author

https://aiimpacts.org/author/katja/

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: AI Impacts

Produced by AI Impacts, this resource is frequently cited in discussions about AI forecasting reliability and is useful background for anyone evaluating current claims about AGI timelines or assessing expert consensus on transformative AI development.

Metadata

Importance: 62/100 · organizational report · analysis

Summary

A meta-analysis of twelve expert surveys on AI timelines spanning 1972 to 2016, examining predictions about when human-level AI might be achieved. The analysis highlights wide variation in median estimates (2020s to 2085) and significant methodological differences across surveys. This resource helps contextualize expert uncertainty and disagreement about transformative AI development timelines.

Key Points

  • Covers twelve distinct AI timeline surveys from 1972 to 2016, providing a longitudinal view of expert forecasting on human-level AI.
  • Median estimates for human-level AI vary dramatically across surveys, ranging from the 2020s to 2085, reflecting deep uncertainty.
  • Surveys differ substantially in methodology, respondent selection, and definitions of 'human-level AI', making direct comparisons across surveys difficult.
  • Highlights persistent overconfidence and short-termism in some predictions, as well as the challenge of defining meaningful AI milestones.
  • Useful reference for understanding the historical track record of AI forecasting and informing current predictions about transformative AI.

Review

This AI Impacts analysis provides a critical meta-analysis of expert predictions about the development of human-level artificial intelligence, synthesizing results from twelve surveys conducted between 1972 and 2016. It highlights significant methodological variation, including differences in participant backgrounds, survey framing, and definitions of 'human-level AI', all of which contribute to the wide range of predicted timelines.

Key methodological insights include potential bias from AGI researchers, who may be systematically optimistic; the impact of 'inside' versus 'outside' view estimation approaches; and the difficulty of defining human-level AI consistently. Respondents are predominantly AI researchers, conference attendees, and technical experts. Median estimates for a 10% chance of human-level AI cluster in the 2020s, while 50% chance estimates range between 2035 and 2050. The review underscores the uncertainty inherent in predicting technological breakthroughs and emphasizes the need for nuanced, multidisciplinary approaches to forecasting transformative AI capabilities.
Resource ID: cd463c82ab0cd4f8 | Stable ID: NGIxZDE0MW