"Existential risk from AI" survey results
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: EA Forum
A survey of researchers working on long-term AI risk, reported on the EA Forum, capturing probability estimates and attitudes toward AI existential risk, useful for understanding how the AI safety and EA communities collectively assess catastrophic AI scenarios.
Forum Post Details
Summary
This EA Forum post presents survey results on expert and EA community opinions regarding existential risk from artificial intelligence, including probability estimates and key concerns. It aggregates views on AI x-risk timelines and the relative importance of different risk factors. The data provides a snapshot of community beliefs about catastrophic AI outcomes.
Key Points
- Surveys ~117 people working on long-term AI risk on their probability estimates for an AI-caused existential catastrophe.
- Aggregates quantitative risk estimates, offering a structured look at how informed observers assess AI x-risk.
- Highlights variation in beliefs about timelines, risk magnitude, and which AI failure modes are most concerning.
- Serves as a useful reference point for tracking how AI safety community views on x-risk evolve over time.
- Results can inform prioritization decisions within the EA and AI safety communities.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Accident Risk Cruxes | Crux | 67.0 |
Cached Content Preview
"Existential risk from AI" survey results — EA Forum
by RobBensinger · Jun 1, 2021 · 13 min read
Tags: AI safety, Forecasting, AI alignment, AI forecasting, Estimation of existential risk, Surveys

I sent a two-question survey to ~117 people working on long-term AI risk, asking about the level of existential risk from "humanity not doing enough technical AI safety research" and from "AI systems not doing/optimizing what the people deploying them wanted/intended".
44 people responded (~38% response rate). In all cases, these represent the views of specific individuals, not an official view of any organization. Since some people's views may have made them more/less likely to respond, I suggest caution in drawing strong conclusions from the results below. Another reason for caution is that respondents added a lot of caveats to their responses (see the anonymized spreadsheet),[1] which the aggregate numbers don't capture.
I don’t plan to do any analysis on this data, just share it; anyone who wants to analyze it is of course welcome to.
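For readers who do want to dig in, here is a minimal sketch, assuming Python with pandas, of how the linked anonymized spreadsheet might be summarized once downloaded. The column names ("q1_risk", "q2_risk") and the values below are placeholders for illustration, not the survey's actual headers or data.

```python
import pandas as pd

# Placeholder illustration: column names and values are assumptions,
# not the anonymized spreadsheet's actual headers or responses.
responses = pd.DataFrame({
    "q1_risk": [0.05, 0.10, 0.20, 0.33, 0.50],
    "q2_risk": [0.03, 0.10, 0.15, 0.30, 0.40],
})

# Sanity-check the quoted response rate: 44 responses out of ~117 recipients.
print(f"Response rate: {44 / 117:.0%}")  # -> 38%

# Medians and quartiles are often more informative than means for
# skewed probability estimates like these.
print(responses.describe(percentiles=[0.25, 0.5, 0.75]))
```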
If you'd like to make your own predictions before seeing the data, I made a separate spoiler-free post for that.
Methods
You can find a copy of the survey here. The main questions (including clarifying notes) were:[2]
1. How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of humanity not doing enough technical AI safety research?
2. How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended?
_________________________________________
Note A: "Technical AI safety research" here means good-quality technical research aimed at figuring out how to get highly capable AI systems to produce long-term outcomes that are reliably beneficial.
Note B: The intent of question 1 is something like "How likely is it that our future will be drastically worse than the future of an (otherwise maximally similar) world where we put a huge civilizational effort into technical AI safety?" (For concreteness, we might imagine that human whole-brain emulation tech lets you gather ten thousand well-managed/coordinated top researchers to collaborate on technical AI safety for 200 subjective years well before the advent of AGI; and somehow this tech doesn't cause any other changes to the world.)
The intent of question 1 *isn't* "How likely is it that our future will be astronomically worse than the future of a world where God suddenly handed u
... (truncated, 60 KB total)