Longterm Wiki

Relationship between EA Community and AI safety

web

Author

Tom Barnes🔸

Credibility Rating

Good (3/5)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

A 2023 EA Forum discussion piece reflecting community debate about whether AI safety should remain closely tied to EA or develop as a standalone field; useful for understanding the sociological dynamics of the AI safety ecosystem.

Forum Post Details

Karma
157
Comments
15
Forum
eaforum
Forum Tags
Community · AI safety · Building effective altruism · Building the field of AI safety

Metadata

Importance: 35/100 · blog post · commentary

Summary

Tom Barnes examines the growing entanglement between the Effective Altruism community and AI safety, questioning whether continued convergence is desirable. He argues the ideal outcome would be for AI safety to mature as an independent field within AI/ML communities, while EA refocuses on its broader mission and other cause areas.

Key Points

  • AI safety has grown substantially within EA, but the two communities may benefit from diverging rather than further converging.
  • Barnes proposes AI safety should develop independently within mainstream AI/ML communities, analogous to how EA relates to global health or animal welfare.
  • EA's identity and effectiveness may be diluted if it becomes too closely identified with a single cause area like AI safety.
  • The post raises open questions about institutional and community design, inviting discussion on EA's future strategic direction.
  • Separation could help AI safety gain broader legitimacy and reach beyond the EA ecosystem.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 18 KB
Relationship between EA Community and AI safety — EA Forum 
 

 by Tom Barnes🔸 · Sep 18 2023 · 1 min read · 15 comments · 157 karma

 Community · AI safety · Building effective altruism · Building the field of AI safety · Frontpage

 Personal opinion only. Inspired by filling out the Meta coordination forum survey.

 Epistemic status: Very uncertain, rough speculation. I’d be keen to see more public discussion on this question 

 One open question about the EA community is its relationship to AI safety (see e.g. MacAskill). I think the relationship between EA and AI safety (+ GHD & animal welfare) previously looked something like this (up until 2022ish): [1]

 With the growth of AI safety, I think the field now looks something like this:

 
It's an open question whether the EA Community should further grow the AI safety field, or whether the EA Community should become a distinct field from AI safety. I think my preferred approach is something like: EA and AI safety grow into new fields rather than into each other:

 AI safety grows in AI/ML communities
 EA grows in other specific causes, as well as an “EA-qua-EA” movement.
 As an ideal state, I could imagine the EA community being in a similar position with respect to AI safety to the one it currently occupies with respect to animal welfare or global health and development.

 
However I’m very uncertain about this, and curious to hear what other people’s takes are.

 ^ I’ve omitted non-AI longtermism, along with other fields, for simplicity. I strongly encourage not interpreting these diagrams too literally

  

 
Mentioned in: New Spanish-language book on "classical EA" · Why experienced professionals fail to land high-impact roles (FBB #5) · Effective Altruism Infrastructure Fund: March 2024 recommendations · Blueprints for AI Safety conferences (FBB #9) · Four Goals for EA Community Building, After Running out of Obvious Cause Areas

Comments (15)

 MichaelPlant · Sep 19 2023: Thanks for this and great diagrams! To think about what the relationship between EA and AI safety should be, it might help to consider what EA is for in general. I see a/the purpose of EA as helping people figure out how they can do the most good - to learn about the different paths, the options, and the landscape. In that sense, EA is a bit like a university, or a market, or maybe even just a signpost: once you've learnt what you needed, or found what you want and where to go, you don't necessarily stick around: maybe you need to 'go out' in the world to do what calls you.

 This explains your venn diagram: GHD and animal welfare are causes that exist prior to, and independently of, EA. They, rather than EA, are where the action is if you prioritise those things. AI safety grew

... (truncated, 18 KB total)
Resource ID: e0c9049daf6bcd60 | Stable ID: NDk3Zjc2M2