Longterm Wiki

Author

Rob Bensinger

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Alignment Forum

A rare empirical snapshot of expert opinion on AI existential risk probabilities circa 2021, useful for tracking community beliefs over time but limited by small sample size and self-selection bias.

Metadata

Importance: 45/100 · blog post · dataset

Summary

Rob Bensinger surveyed ~117 AI safety researchers on two questions: the existential risk from humanity not doing enough technical AI safety research, and the risk from AI systems not doing what the people deploying them intended. With 44 respondents (~38% response rate), the post shares raw probability estimates without analysis, notes respondents' individual caveats, and cautions against drawing strong conclusions from the aggregate numbers.

Key Points

  • Survey asked two questions: risk from insufficient technical AI safety research, and risk from AI systems not doing what deployers intended.
  • 44 of ~117 AI safety researchers responded (~38% rate); results represent individual views, not organizational positions.
  • Raw anonymized data shared publicly without analysis, inviting community interpretation.
  • Respondents provided many caveats not captured by aggregate numbers, urging caution in interpretation.
  • A companion post allowed readers to make predictions before seeing results, enabling calibration exercises.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Worldview-Intervention Mapping | Analysis | 62.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 28 KB
["Existential risk from AI" survey results](https://www.alignmentforum.org/posts/QvwSr5LsxyDeaPK5s/ai-alignment-2018-2019-review#)

13 min read

•

[Methods](https://www.alignmentforum.org/posts/QvwSr5LsxyDeaPK5s/ai-alignment-2018-2019-review#Methods)

•

[Results](https://www.alignmentforum.org/posts/QvwSr5LsxyDeaPK5s/ai-alignment-2018-2019-review#Results)

•

[Background and predictions](https://www.alignmentforum.org/posts/QvwSr5LsxyDeaPK5s/ai-alignment-2018-2019-review#Background_and_predictions)

•

[Respondents' comments](https://www.alignmentforum.org/posts/QvwSr5LsxyDeaPK5s/ai-alignment-2018-2019-review#Respondents__comments)

•

[Footnotes](https://www.alignmentforum.org/posts/QvwSr5LsxyDeaPK5s/ai-alignment-2018-2019-review#Footnotes)

[Surveys](https://www.alignmentforum.org/w/surveys) · [AI](https://www.alignmentforum.org/w/ai)
Frontpage


# ["Existential risk from AI" surveyresults](https://www.alignmentforum.org/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results)

by [Rob Bensinger](https://www.alignmentforum.org/users/robbbb?from=post_header)

1st Jun 2021

13 min read

[7 comments](https://www.alignmentforum.org/posts/QvwSr5LsxyDeaPK5s/ai-alignment-2018-2019-review#comments)


I sent a two-question survey to ~117 people working on long-term AI risk, asking about the level of existential risk from "humanity not doing enough technical AI safety research" and from "AI systems not doing/optimizing what the people deploying them wanted/intended".

44 people responded (~38% response rate). In all cases, these represent the views of specific individuals, not an official view of any organization. Since some people's views may have made them more/less likely to respond, I suggest caution in drawing strong conclusions from the results below. Another reason for caution is that respondents added a lot of caveats to their responses (see the [anonymized spreadsheet](https://docs.google.com/spreadsheets/d/1grw0kFSTsZzB93stOpWaOj4CgoTftl9jO50FeHnmlgM/edit)),[1] which the aggregate numbers don't capture.

I don’t plan to do any analysis on this data, just share it; anyone who wants to analyze it is of course welcome to.
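For anyone who does want to explore the numbers, here is a minimal sketch of loading the shared spreadsheet and computing simple summary statistics. The column names are hypothetical placeholders (the sheet's actual headers aren't shown in this preview), and the sketch assumes the sheet permits CSV export:

```python
# Minimal sketch: pull the anonymized sheet and summarize each question.
# NOTE: the column names below are hypothetical; check the real
# spreadsheet headers before running.
import pandas as pd

SHEET_ID = "1grw0kFSTsZzB93stOpWaOj4CgoTftl9jO50FeHnmlgM"
CSV_URL = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv"

df = pd.read_csv(CSV_URL)

for col in ["q1_probability", "q2_probability"]:  # hypothetical names
    # Coerce free-text answers (e.g. "~30%") to NaN, then drop them so the
    # summary uses only the cleanly numeric probability estimates.
    answers = pd.to_numeric(df[col], errors="coerce").dropna()
    print(
        f"{col}: n={len(answers)}, median={answers.median():.2f}, "
        f"IQR=({answers.quantile(0.25):.2f}, {answers.quantile(0.75):.2f})"
    )
```

Medians and interquartile ranges are a reasonable choice here rather than means: with 44 self-selected respondents and heavily caveated answers, a few extreme estimates can dominate an average.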

If you'd like to make your own predictions before seeing the data, I made [a separate spoiler-free post for that](https://www.alignmentforum.org/posts/3LtDzDJc32TF6bJEY/predict-responses-to-the-existential-risk-from-ai-survey).

## Methods

You can find a copy of the survey [here](https://docs.google.com/forms/d/13u7164jFHxcE6qWDcsgsLUsMZr72VpEvpxGcJsRp--E/edit). The main questions (including clarifying notes) were:[2]

> 1\. How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of humanity not doing enough technical AI safety research?
>
> 2\. How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended?

... (truncated, 28 KB total)
Resource ID: 69b320e83d92f2a0 | Stable ID: MDg2YWM4Zj