What Are Reasonable AI Fears?
Author
Arjun Panickssery
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: EA Forum
A linkpost on the EA Forum to Robin Hanson's 2023 essay offering a skeptical, contrarian take on AI risk narratives; useful for understanding critiques of mainstream AI safety concerns and stress-testing common assumptions in the field.
Forum Post Details
Karma
41
Comments
3
Forum
eaforum
Forum Tags
AI safety, Existential risk, Forecasting, AI risk skepticism, AI alignment, AI forecasting, Robin Hanson, Criticism of longtermism and existential risk studies, Public communication on AI safety, AI governance
Metadata
Importance: 42/100 · opinion piece · commentary
Summary
Robin Hanson examines which concerns about AI are well-founded versus overblown, distinguishing between near-term harms and speculative long-term catastrophes. He applies skeptical scrutiny to common AI safety narratives, arguing that many fears are exaggerated or poorly reasoned while some more mundane risks deserve more attention.
Key Points
- Hanson distinguishes between near-term, tractable AI risks and speculative long-term existential scenarios, questioning the prioritization of the latter.
- Argues that many popular AI fears reflect social and psychological biases rather than careful probabilistic reasoning.
- Suggests that concerns about AI misalignment and takeover scenarios may be less grounded than proponents believe.
- Advocates for examining whether the concern and resources directed at particular AI risks are proportionate to their actual magnitude.
- Provides a contrarian perspective that challenges mainstream EA and AI safety community assumptions about risk prioritization.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| AI Timelines | Concept | 95.0 |
| Robin Hanson | Person | 53.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 26, 2026 · 237 KB
[linkpost] "What Are Reasonable AI Fears?" by Robin Hanson, 2023-04-23 — EA Forum
Posted by Arjun Panickssery, Apr 14 2023 · 4 min read · 3 comments · 41 karma · Frontpage

This is a linkpost for https://quillette.com/2023/04/14/what-are-reasonable-ai-fears/

Selected quotes (all emphasis mine):

"Why are we so willing to “other” AIs? Part of it is probably prejudice: some recoil from the very idea of a metal mind. We have, after all, long speculated about possible future conflicts with robots. But part of it is simply fear of change, inflamed by our ignorance of what future AIs might be like. Our fears expand to fill the vacuum left by our lack of knowledge and understanding. The result is that AI doomers entertain many different fears, and addressing them requires discussing a great many different scenarios. Many of these fears, however, are either unfounded or overblown. I will start with the fears I take to be the most reasonable, and end with the most overwrought horror stories, wherein AI threatens to destroy humanity.

As an economics professor, I naturally build my analyses on economics, treating AIs as comparable to both laborers and machines, depending on context. You might think this is mistaken since AIs are unprecedentedly different, but economics is rather robust. Even though it offers great insights into familiar human behaviors, most economic theory is actually based on the abstract agents of game theory, who always make exactly the best possible move. Most AI fears seem understandable in economic terms; we fear losing to them at familiar games of economic and political power."
He separates a few concerns:

"Doomers worry about AIs developing “misaligned” values. But in this scenario, the “values” implicit in AI actions are roughly chosen by the organisations who make them and by the customers who use them. Such value choices are constantly revealed in typical AI behaviors, and tested by trying them in unusual situations."

"Some fear that, in this scenario, many disliked conditions of our world—environmental destruction, income inequality, and othering of humans—might continue and even increase. Militaries and police might integrate AIs into their surveillance and weapons. It is true that AI may not solve these problems, and may even empower those who exacerbate them. On the other hand, AI may also empower those seeking solutions. AI just doesn’t seem to be the fundamental problem here."

"A related fear is that allowing technical and social change to continue indefinitely might eventually take civilization to places that we don’t want to be. Looking backward, we have benefit
... (truncated, 237 KB total)
Resource ID: 0993959ce3bdb812 | Stable ID: NjM4YzFkYz