AI Safety for Everyone review
Authors: Balint Gyevnar, Atoosa Kasirzadeh
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
A 2025 paper from the University of Edinburgh and Carnegie Mellon University that challenges x-risk-centric narratives in AI safety discourse, relevant for understanding definitional debates and inclusivity issues within the AI safety research community.
Paper Details
Metadata
Abstract
Recent discussions and research in AI safety have increasingly emphasized the deep connection between AI safety and existential risk from advanced AI systems, suggesting that work on AI safety necessarily entails serious consideration of potential existential threats. However, this framing has three potential drawbacks: it may exclude researchers and practitioners who are committed to AI safety but approach the field from different angles; it could lead the public to mistakenly view AI safety as focused solely on existential scenarios rather than addressing a wide spectrum of safety challenges; and it risks creating resistance to safety measures among those who disagree with predictions of existential AI risks. Through a systematic literature review of primarily peer-reviewed research, we find a vast array of concrete safety work that addresses immediate and practical concerns with current AI systems. This includes crucial areas like adversarial robustness and interpretability, highlighting how AI safety research naturally extends existing technological and systems safety concerns and practices. Our findings suggest the need for an epistemically inclusive and pluralistic conception of AI safety that can accommodate the full range of safety considerations, motivations, and perspectives that currently shape the field.
Summary
This paper argues against framing AI safety primarily through existential risk. Through a systematic literature review, the authors show that the field encompasses diverse practical work on the vulnerabilities of current systems. They contend that existential-risk-centric framing excludes researchers, misleads the public, and creates resistance to safety measures, and they advocate instead for an epistemically inclusive and pluralistic conception of AI safety.
Key Points
- Overemphasizing existential risk framing may exclude safety researchers with different motivations and create public misconceptions about AI safety's scope.
- A systematic literature review reveals that AI safety encompasses adversarial robustness, interpretability, and other practical work on current systems.
- AI safety research naturally extends traditional technological and systems safety practices rather than being solely a futurist concern.
- The existential risk framing has ties to specific normative movements (rationalism, effective altruism, longtermism), which may alienate mainstream researchers.
- The authors advocate for a pluralistic conception of AI safety accommodating the full range of motivations, perspectives, and safety challenges.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Corrigibility Failure | Risk | 62.0 |
Cached Content Preview
[License: CC BY 4.0](https://info.arxiv.org/help/license/index.html#licenses-available)
arXiv:2502.09288v2 [cs.CY] 14 Feb 2025
# AI Safety for Everyone

Balint Gyevnar¹ ([balint.gyevnar@ed.ac.uk](mailto:balint.gyevnar@ed.ac.uk)) and Atoosa Kasirzadeh² ([atoosa.kasirzadeh@gmail.com](mailto:atoosa.kasirzadeh@gmail.com))

These authors contributed equally to this work.

¹ School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, United Kingdom

² Departments of Philosophy & Software and Societal Systems, Carnegie Mellon University, Baker Hall 161, 5000 Forbes Avenue, Pittsburgh 15213, United States
###### Abstract

Recent discussions and research in AI safety have increasingly emphasized the deep connection between AI safety and existential risk from advanced AI systems, suggesting that work on AI safety necessarily entails serious consideration of potential existential threats. However, this framing has three potential drawbacks: it may exclude researchers and practitioners who are committed to AI safety but approach the field from different angles; it could lead the public to mistakenly view AI safety as focused solely on existential scenarios rather than addressing a wide spectrum of safety challenges; and it risks creating resistance to safety measures among those who disagree with predictions of existential AI risks. Through a systematic literature review of primarily peer-reviewed research, we find a vast array of concrete safety work that addresses immediate and practical concerns with current AI systems. This includes crucial areas like adversarial robustness and interpretability, highlighting how AI safety research naturally extends existing technological and systems safety concerns and practices. Our findings suggest the need for an epistemically inclusive and pluralistic conception of AI safety that can accommodate the full range of safety considerations, motivations, and perspectives that currently shape the field.
... (truncated, 93 KB total)