UK-based AI safety advocacy organization focused on preventing artificial superintelligence development through policy campaigns and grassroots outreach to lawmakers.
Approach
- Grassroots AI Safety and Democracy Activism: Bottom-up movements mobilizing public concern about AI into organized political action. Includes protest movements (PauseAI, with groups in 30+ cities) and youth-led advocacy (Encode Justice, 600 members...)
Policy
- EU AI Act: Comprehensive overview of the EU AI Act's risk-based regulatory framework, particularly its two-tier approach to foundation models that distinguishes between standard and systemic-risk AI systems... (Quality: 55/100)
Risks
- Scheming: Scheming (strategic AI deception during training) has transitioned from theoretical concern to observed behavior across all major frontier models (o1: 37% alignment faking; Claude: 14% harmful compliance...) (Quality: 74/100)
- Deepfakes: Comprehensive overview of deepfake risks documenting $60M+ in fraud losses, 90%+ non-consensual imagery prevalence, and declining detection effectiveness (65% best accuracy). Reviews technical capabilities... (Quality: 50/100)
Organizations
- Anthropic: Comprehensive reference page on Anthropic covering financials ($380B valuation, $14B ARR at Series G growing to $19B by March 2026) and safety research (Constitutional AI, mechanistic interpretability...) (Quality: 74/100)
- OpenAI: Comprehensive organizational profile of OpenAI documenting its evolution from 2015 non-profit to Public Benefit Corporation, with detailed analysis of the governance crisis and 2024-2025 ownership restructuring... (Quality: 62/100)
- Coefficient Giving: Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024... (Quality: 55/100)
- LessWrong: LessWrong is a rationality-focused community blog founded in 2009 that has influenced AI safety discourse, receiving $5M+ in funding and serving as the origin point for ~31% of EA survey respondents... (Quality: 44/100)
- Georgetown CSET: CSET is a $100M+ Georgetown center with 50+ staff conducting data-driven AI policy research, particularly on U.S.-China competition and export controls. The center conducts hundreds of annual government... (Quality: 43/100)
- Sentinel (Catastrophic Risk Foresight): Sentinel is a catastrophic risk foresight organization co-founded by Nuño Sempere and Rai Sur, operating informally from around 2023 and incorporated as a US 501(c)(3) nonprofit in or after late 20... (Quality: 39/100)
Other
- Will MacAskill (Person): Comprehensive biographical reference on Will MacAskill covering his founding of EA organizations, academic work on moral uncertainty and longtermism, AGI preparedness advocacy, and controversies in... (Quality: 60/100)
- Eli Lifland (Person): Biographical profile of Eli Lifland, a top-ranked forecaster and AI safety researcher who co-authored the AI 2027 scenario forecast and co-founded the AI Futures Project. The page documents his forecasting... (Quality: 58/100)
- Holden Karnofsky (Person): Holden Karnofsky directed $300M+ in AI safety funding through Coefficient Giving (formerly Open Philanthropy), growing the field from ~20 to 400+ FTE researchers and developing influential frameworks... (Quality: 40/100)
- Vidur Kapur (Person): Vidur Kapur is a superforecaster and AI policy researcher involved in multiple forecasting organizations and the Sentinel early warning system, contributing to AI risk assessment and EA Forum discussions... (Quality: 38/100)
Concepts
- Safety Orgs Overview: A well-organized reference overview of ~20 AI safety organizations categorized by function (alignment research, policy, field-building), with a comparative budget/headcount table showing estimated... (Quality: 48/100)
Key Debates
- AI Alignment Research Agendas (Crux): Comprehensive comparison of major AI safety research agendas ($100M+ Anthropic, $50M+ DeepMind, $5-10M nonprofits) with detailed funding, team sizes, and failure-mode coverage (25-65% per agenda)... (Quality: 69/100)