Longterm Wiki

EA and Longtermism: not a crux for saving the world

web

Author

ClaireZabel

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: EA Forum

Written by an Open Phil Program Officer based on internal experience; relevant to debates about EA meta-strategy, community building, and how to frame AI safety and x-risk work to attract diverse talent beyond the EA ecosystem.

Forum Post Details

Karma
214
Comments
19
Forum
eaforum
Forum Tags
Building effective altruism, Effective altruism messaging, Longtermism

Metadata

Importance: 52/100
blog post, commentary

Summary

Claire Zabel argues that x-risk reduction work should be framed around concrete concerns about existential risks and transformative AI rather than EA or longtermist philosophy, as the latter framing may exclude talented contributors who care about x-risk but don't identify with EA. She contends that 'existential risk' and 'most important century' framings can attract capable people regardless of their broader philosophical commitments.

Key Points

  • EA/longtermism framing is no longer a prerequisite for recruiting people to do x-risk work; direct concern about existential risks can suffice.
  • Many valuable potential contributors care about x-risk but are put off or excluded by the EA/longtermist label and its associated baggage.
  • Outreach should emphasize transformative AI, biosecurity, and the 'most important century' hypothesis over broader EA philosophical frameworks.
  • The 'EA-first' recruitment pipeline may be suboptimal compared to directly appealing to people motivated by existential risk concerns.
  • Acknowledges a counterargument: as of 2022, few full-time x-risk workers had been recruited via the existential risk frame alone.

Cited by 1 page

Cached Content Preview

HTTP 200 | Fetched Mar 15, 2026 | 38 KB
EA and Longtermism: not a crux for saving the world — EA Forum 
 

by ClaireZabel · Jun 2, 2023 · 12 min read · 19 comments · 214 karma

Tags: Building effective altruism · Effective altruism messaging · Longtermism · Frontpage

Contents: Things have changed · An alternative frame · Who cares? · I think readers of this post are, in expectation, overvaluing the EA and longtermism frames · Concrete things I think are good · Concrete things I’m uncertain about · Counterarguments

This is partly based on my experiences working as a Program Officer leading Open Phil’s Longtermist EA Community Growth team, but it’s a hypothesis I have about how some longtermists could have more of an impact by their lights, not an official Open Phil position.

Context: I originally wrote this in July 2022 as a memo for folks attending a retreat I was going to. I find that I refer to it pretty frequently, and it seems relevant to ongoing discussions about how much of the meta effort done by EAs should focus on engaging more EAs vs. non-EA people. I am publishing it with light-ish editing, and some parts are outdated, though for the most part I hold most of the conclusions more strongly than I did when I originally wrote it.

Tl;dr: I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction (“we” or “us” in this post, though I know it won’t apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the ‘most important century’ hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach.

A lot of EAs who prioritize existential risk reduction are making increasingly awkward and convoluted rhetorical maneuvers to use “EAs” or “longtermists” as the main label for people we see as aligned with our goals and priorities. I suspect this is suboptimal and, in the long term, infeasible. In particular, I’m concerned that this is a reason we’re failing to attract and effectively welcome some people who could add a lot of value. The strongest counterargument I can think of right now is that I know of relatively few people doing full-time work on existential risk reduction in AI and biosecurity who have been drawn in by just the “existential risk reduction” frame [this seemed more true in 2022 than 2023].

This is in the vein of Neel Nanda’s “Simplify EA Pitches to ‘Holy Shit, X-Risk’” and Scott Alexander’s “Long-termism vs. Existential Risk”, but I want to focus more on the hope of attracting people to do priority work even if their motivations are neither longtermist nor neartermist EA, but instead mostly driven by reasons unrelated to EA.

 
 EA and longtermism: not a crux for doing the most importan

... (truncated, 38 KB total)
Resource ID: 6810c25ba4e28f29 | Stable ID: NTAzNDM3Mz