Longterm Wiki

AI Safety and Neighboring Communities: A Quick-Start Guide - Alignment Forum

blog

Author

Sam Bowman

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Alignment Forum

A community-written onboarding resource for AI safety newcomers; useful for understanding the broader ecosystem of AI safety-adjacent fields, though it may reflect the perspectives and blind spots of the Alignment Forum community at the time of writing.

Metadata

Importance: 55/100 · blog post · educational

Summary

A practical orientation guide for newcomers to AI safety that maps the landscape of related communities, research areas, and ideological perspectives. It explains core AI safety concepts like misalignment, instrumental convergence, and deceptive alignment, then surveys adjacent fields including ML safety, AI ethics, and AI governance to help readers understand how different groups approach AI risk.

Key Points

  • Explains foundational AI safety concerns: misaligned superhuman AI, instrumental convergence, and deceptive alignment in accessible terms for newcomers.
  • Maps neighboring communities including ML safety, AI ethics, AI governance, and various philosophical/ideological perspectives and their relationships to core AI safety.
  • Acknowledges inherent biases in the guide's framing and notes its time-limited relevance given the fast-moving field.
  • Serves as a practical entry point for those new to AI safety, helping them navigate the diverse landscape of related research and advocacy communities.
  • Highlights overlaps and tensions between AI safety and adjacent fields, helping newcomers understand different threat models and priorities.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 17 KB
[AI Safety and Neighboring Communities: A Quick-Start Guide, as of Summer 2022](https://www.alignmentforum.org/posts/EFpQcBmfm2bFfM4zM/ai-safety-and-neighboring-communities-a-quick-start-guide-as#)

Contents:

- [AI Safety/AI Alignment/AGI Safety/AI Existential Safety/AI X-Risk](https://www.alignmentforum.org/posts/EFpQcBmfm2bFfM4zM/ai-safety-and-neighboring-communities-a-quick-start-guide-as#AI_Safety_AI_Alignment_AGI_Safety_AI_Existential_Safety_AI_X_Risk)
- [Effective Altruism/EA](https://www.alignmentforum.org/posts/EFpQcBmfm2bFfM4zM/ai-safety-and-neighboring-communities-a-quick-start-guide-as#Effective_Altruism_EA)
- [Longtermism](https://www.alignmentforum.org/posts/EFpQcBmfm2bFfM4zM/ai-safety-and-neighboring-communities-a-quick-start-guide-as#Longtermism)
- [The Rationalist Subculture/The LessWrong Crowd/Berkeley-Style Rationalism/The Rats](https://www.alignmentforum.org/posts/EFpQcBmfm2bFfM4zM/ai-safety-and-neighboring-communities-a-quick-start-guide-as#The_Rationalist_Subculture_The_LessWrong_Crowd_Berkeley_Style_Rationalism_The_Rats)
- [AGI Optimism](https://www.alignmentforum.org/posts/EFpQcBmfm2bFfM4zM/ai-safety-and-neighboring-communities-a-quick-start-guide-as#AGI_Optimism)
- [AI Ethics/Responsible AI/The FAccT Community](https://www.alignmentforum.org/posts/EFpQcBmfm2bFfM4zM/ai-safety-and-neighboring-communities-a-quick-start-guide-as#AI_Ethics_Responsible_AI_The_FAccT_Community)
- [(Long-Term) AI Governance](https://www.alignmentforum.org/posts/EFpQcBmfm2bFfM4zM/ai-safety-and-neighboring-communities-a-quick-start-guide-as#_Long_Term__AI_Governance)
- [Acknowledgments](https://www.alignmentforum.org/posts/EFpQcBmfm2bFfM4zM/ai-safety-and-neighboring-communities-a-quick-start-guide-as#Acknowledgments)

[AI](https://www.alignmentforum.org/w/ai) · [Community](https://www.alignmentforum.org/w/community) · Frontpage


# [AI Safety and Neighboring Communities: A Quick-Start Guide, as of Summer 2022](https://www.alignmentforum.org/posts/EFpQcBmfm2bFfM4zM/ai-safety-and-neighboring-communities-a-quick-start-guide-as)

by [Sam Bowman](https://www.alignmentforum.org/users/sbowman?from=post_header)

1st Sep 2022

9 min read

[2 comments](https://www.alignmentforum.org/posts/EFpQcBmfm2bFfM4zM/ai-safety-and-neighboring-communities-a-quick-start-guide-as#comments)


_Getting into AI safety involves working with a mix of communities, subcultures, goals, and ideologies that you may not have encountered in the context of mainstream AI technical research. This document attempts to briefly map these out for newcomers._

_This is inevitably going to be biased by what sides of these communities I (Sam) have encountered, and it will quickly become dated. I expect it will still be a useful resource for some people anyhow, at least in the short term._

## **AI Safety/AI Alignment/AGI Safety/AI Existential Safety/AI X-Risk**

_The research project of ensuring that future AI progress doesn’t yield civilization-endingly catastrophic results._

- Goo

... (truncated, 17 KB total)
Resource ID: 82a461d0b3d02ca3 | Stable ID: N2VkY2VmZG