Longterm Wiki

2023 AI researcher survey

paper

Authors

Katja Grace·Harlan Stewart·Julia Fabienne Sandkühler·Stephen Thomas·Ben Weinstein-Raun·Jan Brauner·Richard C. Korzekwa

Credibility Rating

3/5
Good (3)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

Large-scale survey of 2,778 AI researchers providing empirical forecasts on AI progress timelines and capabilities, offering valuable data on expert predictions regarding advanced AI system development and milestones relevant to AI safety planning.

Paper Details

Citations
99
4 influential
Year
2024
Methodology
survey

Metadata

arXiv preprint · primary source

Abstract

In the largest survey of its kind, 2,778 researchers who had published in top-tier artificial intelligence (AI) venues gave predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey). Most respondents expressed substantial uncertainty about the long-term value of AI progress: While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including misinformation, authoritarian control, and inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.

Summary

A survey of 2,778 AI researchers from top-tier venues reveals significant shifts in timelines for AI capabilities and widespread concern about advanced AI risks. Researchers assign at least a 50% probability to several AI milestones being reached by 2028 (including autonomously building a payment processing site and fine-tuning an LLM), and estimate a 50% chance of AI outperforming humans in all tasks by 2047, 13 years earlier than in the 2022 survey. While 68.3% believe good outcomes from superhuman AI are more likely than bad, a substantial fraction of respondents (between 38% and 51%) give at least a 10% chance to outcomes as bad as human extinction, and more than half express substantial or extreme concern about scenarios such as misinformation, authoritarian control, and inequality. Notably, despite disagreement over whether faster or slower progress is preferable, researchers broadly agree that research on minimizing AI risks should be prioritized more.

Cited by 2 pages

Page | Type | Quality
AGI Development | — | 52.0
AI Timelines | Concept | 95.0

Cached Content Preview

HTTP 200 · Fetched Feb 26, 2026 · 50 KB
[2401.02843] Thousands of AI Authors on the Future of AI
Computer Science > Computers and Society — arXiv:2401.02843 (cs)
Submitted on 5 Jan 2024 (v1); last revised 8 Oct 2025 (this version, v3)
Authors: Katja Grace, Harlan Stewart, Julia Fabienne Sandkühler, Stephen Thomas, Ben Weinstein-Raun, Jan Brauner, Richard C. Korzekwa
Abstract: as reproduced above.
Comments: The asterisk indicates the corresponding author. The dagger indicates equal contribution.
Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2401.02843 [cs.CY] (or arXiv:2401.02843v3 [cs.CY] for this version)
DOI: https://doi.org/10.48550/arXiv.2401.02843
Journal reference: Journal of Artificial Intelligence Research 84:9 (2025)
Related DOI: https://doi.org/10.1613/jair.1.19087

... (truncated, 50 KB total)
Resource ID: 420c48ee4c61fe6c | Stable ID: M2UwOWE2Yz