
[2502.14870] Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts

paper

Author

Severin Field

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: arXiv

A survey examining disagreement among AI experts about existential risk and the probability of AI doom, and why research on catastrophic AI risk faces skepticism despite endorsement from leading AI labs and scientists.

Paper Details

Citations: 3 (1 influential)
Year: 2025
Methodology: expert survey
Categories: AI and Ethics

Metadata

arxiv preprint · analysis

Abstract

The development of artificial general intelligence (AGI) is likely to be one of humanity's most consequential technological advancements. Leading AI labs and scientists have called for the global prioritization of AI safety, citing existential risks comparable to nuclear war. However, research on catastrophic risks and AI alignment is often met with skepticism, even by experts. Furthermore, online debate over the existential risk of AI has begun to turn tribal (e.g., name-calling such as "doomer" or "accelerationist"). Until now, no systematic study has explored the patterns of belief and the levels of familiarity with AI safety concepts among experts. I surveyed 111 AI experts on their familiarity with AI safety concepts, key objections to AI safety, and reactions to safety arguments. My findings reveal that AI experts cluster into two viewpoints -- an "AI as controllable tool" and an "AI as uncontrollable agent" perspective -- diverging in beliefs toward the importance of AI safety. While most experts (78%) agreed or strongly agreed that "technical AI researchers should be concerned about catastrophic risks", many were unfamiliar with specific AI safety concepts. For example, only 21% of surveyed experts had heard of "instrumental convergence," a fundamental concept in AI safety predicting that advanced AI systems will tend to pursue common sub-goals (such as self-preservation). The least concerned participants were the least familiar with concepts like this, suggesting that effective communication of AI safety should begin with establishing clear conceptual foundations in the field.

Summary

This survey of 111 AI experts examines beliefs about AI safety and familiarity with key safety concepts. The study reveals that experts cluster into two opposing viewpoints—those viewing AI as a controllable tool versus an uncontrollable agent—with significant divergence in their assessment of AI safety importance. While 78% of experts agreed that researchers should be concerned about catastrophic risks, many demonstrated low familiarity with fundamental AI safety concepts like instrumental convergence. The findings suggest that low concern about AI safety correlates with unfamiliarity with core safety concepts, indicating that improved communication and conceptual clarity could enhance expert engagement with AI safety research.
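
The excerpt does not state which statistical method produced the two-viewpoint split or the familiarity-concern relationship. As a rough, hypothetical sketch (synthetic data and illustrative variable names, not the paper's actual pipeline), Likert-scale responses could be partitioned with k-means and the familiarity-concern link checked with a rank correlation:

```python
# Hypothetical sketch: clustering Likert-scale survey responses into two
# viewpoints and correlating concept familiarity with risk concern.
# The paper's analysis method is not shown in this excerpt; the data
# below is synthetic and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Stand-in for 111 experts answering 10 Likert items (1-5), e.g. agreement
# with "technical AI researchers should be concerned about catastrophic risks".
responses = rng.integers(1, 6, size=(111, 10))

# Partition respondents into two clusters, mirroring the paper's
# "AI as controllable tool" vs. "AI as uncontrollable agent" split.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
viewpoint = kmeans.fit_predict(responses)

# Correlate familiarity with safety concepts (e.g. how many concepts such as
# "instrumental convergence" a respondent recognizes) against overall concern;
# the paper reports the least concerned experts were also the least familiar.
familiarity = rng.integers(0, 8, size=111)  # concepts recognized, 0-7
concern = responses.mean(axis=1)            # mean agreement as a concern proxy
rho, p_value = spearmanr(familiarity, concern)
print(f"cluster sizes: {np.bincount(viewpoint)}, "
      f"Spearman rho={rho:.2f} (p={p_value:.3f})")
```

With real survey data, one would inspect the cluster centroids to check whether the two groups actually correspond to the "controllable tool" and "uncontrollable agent" framings rather than some other axis of disagreement.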

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| The Case Against AI Existential Risk | Argument | 58.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 57 KB
# Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts

Severin Field
[severin.field@louisville.edu](mailto:severin.field@louisville.edu) · Cambridge ERA:AI Fellowship, Cambridge, UK

###### Abstract

The development of artificial general intelligence (AGI)¹ is likely to be one of humanity’s most consequential technological advancements. Leading AI labs and scientists have called for the global prioritization of AI safety\[ [1](https://ar5iv.labs.arxiv.org/html/2502.14870#bib.bib1 "")\], citing existential risks comparable to nuclear war. However, research on catastrophic risks and AI alignment is often met with skepticism, even by experts. Furthermore, online debate over the existential risk of AI has begun to turn tribal (e.g., name-calling such as “doomer” or “accelerationist”). Until now, no systematic study has explored the patterns of belief and the levels of familiarity with AI safety concepts among experts. I surveyed 111 AI experts on their familiarity with AI safety concepts, key objections to AI safety, and reactions to safety arguments. My findings reveal that AI experts cluster into two viewpoints – an “AI as controllable tool” and an “AI as uncontrollable agent” perspective – diverging in beliefs toward the importance of AI safety. While most experts (78%) agreed or strongly agreed that “technical AI researchers should be concerned about catastrophic risks”, many were unfamiliar with specific AI safety concepts. For example, only 21% of surveyed experts had heard of “instrumental convergence,” a fundamental concept in AI safety predicting that advanced AI systems will tend to pursue common sub-goals (such as self-preservation). The least concerned participants were the least familiar with concepts like this, suggesting that effective communication of AI safety should begin with establishing clear conceptual foundations in the field.

¹ The survey defines AGI as “AI systems that are better at STEM research than the best human scientists, in addition to potentially having other advanced capabilities.”

###### keywords:

AI Safety, Surveying Experts, p(doom), Existential Risk

## 1 Introduction

Since the foundation of modern computer science, scientists such as Turing\[ [2](https://ar5iv.labs.arxiv.org/html/2502.14870#bib.bib2 "")\] have explored the possibility of achieving human-like intelligence. Over the past few decades, researchers have built a substantial body of work examining the risks posed by AI systems, an area of study termed “AI safety.”

Today, many prominent AI researchers, including Nobel Laureate Geoffrey Hinton and Turing Laureate Yoshua Bengio, argue that intelligent machines could endanger human civilization\[ [3](https://ar5iv.labs.arxiv.org/html/2502.14870#bib.bib3 "")\]. In May 2023, many of the most notable scientists and figures in AI signed a statement stating, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

... (truncated, 57 KB total)