A Berkeley nonprofit founded in 2012 that teaches applied rationality through workshops ($3,900 for 4.5 days). It has trained 1,300+ alumni, who report 9.2/10 satisfaction and a 0.17σ increase in life satisfaction at 1-year follow-up. The organization received $3.5M+ from Open Philanthropy and $5M from FTX (later clawed back), and faced major organizational challenges in the early 2020s.
Timelines Wiki (Project, Quality 45/100): A specialized MediaWiki project documenting chronological histories of AI safety and EA organizations, created by Issa Rice with funding from Vipul Naik in 2017. While useful as a...
AI Watch (Project, Quality 23/100): A tracking database by Issa Rice that monitors AI safety organizations, people, funding, and publications as part of his broader knowledge infrastructure ecosystem. The article provides...
Other
Nuño Sempere (Person, Quality 50/100): A Spanish superforecaster who co-founded the highly successful Samotsvety forecasting group and now runs Sentinel for global catastrophe early warning, while being known for skeptic...
Jaan Tallinn (Person, Quality 53/100): Profile of Jaan Tallinn documenting $150M+ lifetime AI safety giving (86% of $51M in 2024), primarily through SFF ($34.33M distributed in a 2025 grant round). Co-founded CSER (2012) and FLI (2014),...
Gwern Branwen (Person, Quality 52/100): Comprehensive biographical profile of pseudonymous researcher Gwern Branwen, documenting his early advocacy of AI scaling laws (predicting AGI by 2030), extensive self-experimentation work, and inf...
Vidur Kapur (Person, Quality 38/100): A superforecaster and AI policy researcher involved in multiple forecasting organizations and the Sentinel early warning system, contributing to AI risk assessment and EA Forum discu...
Andrew Critch (Person): CEO and Cofounder of Encultured AI. Board Member of QURI. Previously a research fellow at MIRI, algorithmic trader at Jane Street, and cofounder of CFAR and BERI. Founded the Berkeley Existential R...
Nate Soares (Person): President of the Machine Intelligence Research Institute (MIRI) since June 2023 (previously Executive Director 2015-2023), focused on mathematical foundations of AI alignment.
Organizations
Survival and Flourishing Fund (Organization, Quality 59/100): SFF has distributed $141M since 2019 (primarily from Jaan Tallinn's ~$900M fortune), with the 2025 round totaling $34.33M (86% to AI safety). Uses unique S-process mechanism where 6-12 recommenders exp...
Coefficient Giving (Organization, Quality 55/100): Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, w...
Machine Intelligence Research Institute (Organization, Quality 50/100): Comprehensive organizational history documenting MIRI's trajectory from pioneering AI safety research (2000-2020) to policy advocacy after acknowledging research failure, with detailed financial da...
Lighthaven (Event Venue) (Organization, Quality 40/100): A Berkeley conference venue operated by Lightcone Infrastructure that serves as physical infrastructure for AI safety, rationality, and EA communities. While well-documented as a faci...
Center for Human-Compatible AI (Organization, Quality 37/100): CHAI is UC Berkeley's AI safety research center founded by Stuart Russell in 2016, pioneering cooperative inverse reinforcement learning and human-compatible AI frameworks. The center has trained 3...
William and Flora Hewlett Foundation (Organization, Quality 55/100): A $14.8 billion philanthropic organization that focuses primarily on AI cybersecurity rather than AI alignment or existential risk, distinguishing it from AI safety-focuse...
Concepts
Community Building Overview (Quality 35/100): A brief overview of community-building organizations (CEA, LessWrong, CFAR, etc.) that support the EA/rationality ecosystem underpinning AI safety, noting Berkeley concentration and post-FTX EA-saf...
Diagram Naming Research (Quality 31/100): Internal research report comparing terminology for factor diagrams across 8 frameworks (influence diagrams, causal loop diagrams, crux maps, etc.), recommending 'Crux Map' as most appropriate for t...
EA Longtermist Wins Losses (Quality 53/100): A comprehensive impact ledger of EA/longtermism's track record organized by year and topic, covering verified wins (GiveWell's $1B+ directed, ~100,000 lives saved through AMF, 10K GWWC pledges) and...