Longterm Wiki

What mistakes has the AI safety movement made?

web

Author

EuanMcLean

Credibility Rating

3/5 — Good

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

A useful critical perspective on the AI safety field's strategic and cultural shortcomings, based on semi-structured interviews; valuable for understanding internal debates about movement effectiveness and direction.

Forum Post Details

Karma
66
Comments
3
Forum
eaforum
Forum Tags
AI safety · Cause prioritization · Existential risk · Collections and resources · Criticism of effective altruism · Opinion
Part of sequence: Big Picture AI Safety

Metadata

Importance: 62/100 · blog post · analysis

Summary

A qualitative synthesis of interviews with 17 AI safety experts identifying systemic mistakes in the AI safety movement, including overreliance on abstract reasoning, insularity, counterproductive messaging, and neglect of policy pathways. The post provides rare critical self-reflection from within the community about strategic and epistemic failures. Some interviewees questioned whether the movement's overall track record has been net positive.

Key Points

  • Overreliance on abstract theoretical arguments over empirical approaches has limited the movement's credibility and effectiveness.
  • Insularity and groupthink within AI safety circles have reduced independent thinking and suppressed dissenting perspectives.
  • Off-putting or alarmist messaging has alienated mainstream researchers, policymakers, and potential allies.
  • Close relationships with leading AGI labs may compromise the movement's independence and ability to advocate for safety.
  • Insufficient attention to policy and public outreach has left important safety levers underutilized.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Frontier Model Forum | Organization | 58.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 31 KB

[Big Picture AI Safety](https://forum.effectivealtruism.org/s/Y5fSxqGRAqJvba8sv)

# [What mistakes has the AI safety movement made?](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made)

by [EuanMcLean](https://forum.effectivealtruism.org/users/euanmclean?from=post_header)

May 23, 2024 · 14 min read · 3 comments


[AI safety](https://forum.effectivealtruism.org/topics/ai-safety)[Cause prioritization](https://forum.effectivealtruism.org/topics/cause-prioritization)[Existential risk](https://forum.effectivealtruism.org/topics/existential-risk)[Collections and resources](https://forum.effectivealtruism.org/topics/collections-and-resources)[Criticism of effective altruism](https://forum.effectivealtruism.org/topics/criticism-of-effective-altruism)[Opinion](https://forum.effectivealtruism.org/topics/opinion) [Frontpage](https://forum.effectivealtruism.org/about#Finding_content)


[What mistakes has the AI safety movement made?](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#)

[How to read this post](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#How_to_read_this_post)

[Too many galaxy-brained arguments & not enough empiricism](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#Too_many_galaxy_brained_arguments___not_enough_empiricism)

[Problems with research](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#Problems_with_research)

[Too insular](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#Too_insular)

[Bad messaging](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#Bad_messaging)

[AI safety’s relationship with the leading AGI companies](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#AI_safety_s_relationship_with_the_leading_AGI_companies)

[The bandwagon](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#The_bandwagon)

[Pausing is bad](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#Pausing_is_bad)

[Discounting public outreach & governance as a route to safety](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#Discounting_public_outreach___governance_as_a_route_to_safety)

[Conclusion](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#Conclusion)

[3 comments](https://forum.effectivealtruism.org/posts/tEmQrfMs9qdBPrGKh/what-mistakes-has-the-ai-safety-movement-made#comments)

This is the third of three posts summarizing what I learned when I 

... (truncated, 31 KB total)
Resource ID: 8aec760a9e7e4c93 | Stable ID: YmY4NjYwOT