Scientific American, 2024
scientificamerican.com/article/ai-survey-exaggerates-apoc...
A skeptical counterpoint to AI doom narratives, useful for understanding critiques of existential risk framing and survey methodology; relevant for wiki sections on risk estimation and public discourse around AI safety.
Metadata
Importance: 45/100 · news article · commentary
Summary
This Scientific American article critiques a prominent AI safety survey that claimed high probabilities of catastrophic or existential risk from AI, arguing the survey's methodology and framing exaggerate apocalyptic scenarios. It raises concerns about how such surveys influence public perception and policy by presenting speculative tail risks as more empirically grounded than they are. The piece calls for more rigorous, balanced reporting on AI risk estimates.
Key Points
- Prominent AI risk surveys may overstate catastrophic and existential risk probabilities due to sampling bias toward safety-concerned researchers.
- Framing and question design in AI surveys can prime respondents toward extreme outcomes, inflating perceived risk levels.
- Media and policy communities often uncritically amplify high-risk survey figures, distorting public understanding of AI dangers.
- The article advocates for more methodologically rigorous approaches to eliciting and reporting expert opinion on AI risk.
- Skepticism of apocalyptic AI risk framing does not necessarily deny near-term harms or the importance of AI safety research.
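The sampling-bias concern above can be illustrated with a toy simulation. This is a hypothetical sketch, not an analysis of the actual survey data: the population proportions, probability ranges, and response rates below are invented purely to show the mechanism by which non-random response can inflate an aggregate risk estimate.

```python
import random

random.seed(0)

# Hypothetical population: suppose 20% of researchers are "safety-concerned"
# and give higher extinction-probability estimates than the other 80%.
# (All numbers here are illustrative assumptions, not survey data.)
concerned = [random.uniform(0.05, 0.30) for _ in range(200)]
others = [random.uniform(0.00, 0.05) for _ in range(800)]
population = concerned + others

def mean(xs):
    return sum(xs) / len(xs)

true_mean = mean(population)

# Now suppose safety-concerned researchers respond at three times the
# rate of everyone else (60% vs. 20%). The respondent pool then
# over-represents high estimates, and the sample mean drifts upward.
respondents = (
    [x for x in concerned if random.random() < 0.60]
    + [x for x in others if random.random() < 0.20]
)
biased_mean = mean(respondents)

print(f"population mean estimate:  {true_mean:.3f}")
print(f"respondent mean estimate:  {biased_mean:.3f}")  # noticeably higher
```

Running this shows the respondent mean exceeding the population mean even though no individual estimate changed — only who answered. This is the structural worry the article raises about polls recruited from safety-adjacent networks.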
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| The Case Against AI Existential Risk | Argument | 58.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 15, 2026 · 10 KB
AI Survey Exaggerates Apocalyptic Risks | Scientific American
January 26, 2024
5 min read
A speculative survey about AI’s future may have been biased toward an alarmist perspective
By Chris Stokel-Walker edited by Allison Parshall
(Image credit: KTSDESIGN/Science Photo Library/Getty Images)
The headlines in early January didn’t mince words, and all were variations on one theme: researchers think there’s a 5 percent chance artificial intelligence could wipe out humanity.
That was the sobering finding of a paper posted on the preprint server arXiv.org. In it, the authors reported the results of a survey of 2,778 researchers who had presented and published work at high-profile AI research conferences and journals—the biggest such poll to date in a once-obscure field that has suddenly found itself navigating core issues of humanity’s future. “People are interested in what AI researchers think about these things,” says Katja Grace, co-lead author of the paper and lead researcher at AI Impacts, the organization that conducted the survey. “They have an important role in the conversation about what happens with AI.”
But some AI researchers say they’re concerned the survey results were biased toward an alarmist perspective. AI Impacts has been partially funded by several organizations, such as Open Philanthropy, that promote effective altruism—an emerging philosophical movement that is popular in Silicon Valley and known for its doom-laden outlook on AI’s future interactions with humanity. These funding links, along with the framing of questions within the survey, have led some AI researchers to speak up about the limitations of using speculative poll results to evaluate AI’s true threat.
Effective altruism, or EA, is presented by its backers as an “intellectual project” aimed at using resources for the greatest possible benefit to human lives. The movement has increasingly focused on AI as one of humanity’s existential threats, on par with nuclear weapons. But critics say this preoccupation with speculative future scenarios distracts society from the discussion, research and regulation of the risks AI already poses today—including those involving discrimination, privacy and labor rights, among other pressing problems.
The recent survey, AI Impacts’ third such poll of the field since 2016, asked researchers to estimate the probability of AI causing the
... (truncated, 10 KB total)
Resource ID: 0f93fdc32b08ffd7 | Stable ID: Nzk5NGIyNT