EA Forum - Overview of AI Safety Outreach Grassroots Orgs
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: EA Forum
Useful for individuals looking to get involved in AI safety advocacy; provides a comparative snapshot of grassroots orgs as of the post's publication, though organizational details may change over time.
Forum Post Details
Metadata
Summary
This EA Forum post surveys five grassroots organizations focused on AI safety outreach and activism—ControlAI, EncodeAI, PauseAI, PauseAI US, and StopAI—describing their approaches, accomplishments, and volunteer opportunities. The author assesses each organization's theory of change and capacity to absorb new volunteers, offering practical guidance for people wanting to engage in AI safety advocacy. PauseAI is highlighted as most accessible for new volunteers, while ControlAI is noted for having the most compelling theory of change.
Key Points
- Five grassroots AI safety orgs are profiled: ControlAI, EncodeAI, PauseAI, PauseAI US, and StopAI, each with distinct approaches to outreach and activism.
- PauseAI is identified as best positioned to absorb new volunteers due to existing infrastructure and onboarding capacity.
- ControlAI is credited with the most convincing theory of change but is still developing its volunteer infrastructure.
- The post serves as a practical resource guide for EA-aligned individuals seeking to contribute to AI safety through direct advocacy and grassroots organizing.
- The overview reflects growing interest within the EA community in translating AI safety concerns into public-facing activism and policy pressure.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| ControlAI | Organization | 63.0 |
Cached Content Preview
Overview: AI Safety Outreach Grassroots Orgs — EA Forum
by Severin · May 12, 2025 · 3 min read
We’ve been looking for joinable endeavors in AI safety outreach over the past weeks and would like to share our findings with you. Let us know if we missed any and we’ll add them to the list.
For comprehensive directories of AI safety communities spanning general interest, technical focus, and local chapters, check out https://www.aisafety.com/communities and https://www.aisafety.com/map. If you're uncertain where to start, https://aisafety.quest/ offers personalized guidance.
ControlAI
ControlAI started out as a think tank. In recent months, they developed a theory of change for how to prevent ASI development (the “Direct Institutional Plan”). As a pilot campaign, they cold-emailed British MPs and Lords to talk with them about AI risk. So far, they have spoken with 70 representatives, 31 of whom agreed to publicly stand against ASI development.
ControlAI is also supporting grassroots activism: on https://controlai.com/take-action, you can find templates to send to your representatives yourself, as well as guides on how to constructively inform people about AI risk. They are also reaching out to influencers and supporting content creation.
While they are the org on this list whose theory of change and actions we found most convincing, they are still at the start of building the infrastructure needed to take in considerable numbers of volunteers. We nonetheless expect them to respond positively if you reach out with requests for talks, training, or similar. You can join the ControlAI Discord here.
ControlAI is currently hiring!
EncodeAI
EncodeAI is an organization of high school and college students that addresses all kinds of AI risks. Their past endeavors and successes include a bipartisan event advocating for anti-deepfake laws, and co-sponsoring SB 1047, California’s landmark AI safety bill, which, had it become law, would have been a tremendous contribution to AI existential safety.
You can find an overview of their past activities here, and join one of their local chapters or start a new one here.
PauseAI
PauseAI is a community-focused organization dedicated to AI safety activism. Their primary aim is to normalize discussions about AI existential risk and advocate for a pause in advanced AI development. They contact policymakers, influencers, and experts, organize protests, hand out leaflets, do tabling, and anything else that seems
... (truncated, 5 KB total)