Coefficient Giving (formerly Open Philanthropy) has directed over $4B in grants since 2014, including $336M to AI safety, roughly 60% of all external AI safety funding. The organization spent about $50M on AI safety in 2024, with 68% going to evaluations and benchmarking, and in 2025 launched a $40M Technical AI Safety request for proposals (RFP) covering eight research areas.
Approach
- AI Safety Training Programs (Approach): Comprehensive guide to AI safety training programs including MATS (78% alumni in alignment work, 100+ scholars annually), Anthropic Fellows ($2,100/week stipend, 40%+ hired full-time), LASR Labs (5... [Quality: 70/100]
Analysis
- Is EA Biosecurity Work Limited to Restricting LLM Biological Use? (Analysis): An analysis of the full EA/x-risk biosecurity portfolio, examining whether the community's work consists primarily of AI capability restrictions or encompasses a broader set of interventions includ... [Quality: 55/100]
- XPT (Existential Risk Persuasion Tournament) (Project): A 2022 forecasting tournament with 169 participants found superforecasters severely underestimated AI progress (2.3% probability for IMO gold vs actual 2025 achievement) and gave 8x lower AI extinc... [Quality: 54/100]
- Donations List Website (Project): Comprehensive documentation of an open-source database tracking $72.8B in philanthropic donations (1969-2023) across 75+ donors, with particular coverage of EA/AI safety funding. The page thoroughl... [Quality: 52/100]
Organizations
- Centre for Long-Term Resilience (Organization): The Centre for Long-Term Resilience is a UK-based think tank that has demonstrated concrete policy influence on AI and biosecurity risks, including contributing to the UK's AI Strategy and Biologic... [Quality: 63/100]
- FTX Future Fund (Organization): The FTX Future Fund was a major longtermist philanthropic initiative that distributed $132M in grants (including ~$32M to AI safety) before collapsing with FTX's November 2022 bankruptcy, exp... [Quality: 60/100]
- Johns Hopkins Center for Health Security (Organization): The Johns Hopkins Center for Health Security is a well-established biosecurity organization that has significantly influenced US policy on pandemic preparedness and biological threats, with recent ... [Quality: 63/100]
- Long-Term Future Fund (LTFF) (Organization): LTFF is a regranting program that has distributed $20M since 2017 (approximately $10M to AI safety) with median grants of $25K, filling a critical niche between personal savings and institutional f... [Quality: 56/100]
Other
- Nick Beckstead (Person): Nick Beckstead is a philosopher and EA/longtermism figure whose 2013 dissertation formalized longtermist ethics; this article covers his career arc from academic philosopher to Coefficient Giving (... [Quality: 60/100]
- AI Evaluations (Research Area): Evaluations and red-teaming reduce detectable dangerous capabilities by 30-50x when combined with training interventions (o3 covert actions: 13% → 0.4%), but face fundamental limitations against so... [Quality: 72/100]
- Scalable Oversight (Research Area): Process supervision achieves 78.2% accuracy on MATH benchmarks (vs 72.4% outcome-based) and is deployed in OpenAI's o1 models, while debate shows 60-80% accuracy on factual questions with +4% impro... [Quality: 68/100]
- Recoding America (Resource): Pahlka's 2023 book argues government digital failures stem from institutional culture separating policy from implementation, creating a 'cascade of rigidity' that threatens effective AI governance.... [Quality: 60/100]
Key Debates
- Technical AI Safety Research (Crux): Technical AI safety research encompasses six major agendas (mechanistic interpretability, scalable oversight, AI control, evaluations, agent foundations, and robustness) with 500+ researchers and $... [Quality: 66/100]
- AI Accident Risk Cruxes (Crux): Comprehensive survey of AI safety researcher disagreements on accident risks, quantifying probability ranges for mesa-optimization (15-55%), deceptive alignment (15-50%), and P(doom) (5-35% median ... [Quality: 67/100]
- The Case For AI Existential Risk (Argument): Comprehensive formal argument that AI poses 5-14% median extinction risk by 2100 (per a survey of 2,788 researchers), structured around four premises: capabilities will advance, alignment is hard (with d... [Quality: 66/100]
Concepts
- AI Timelines (Concept): Forecasts and debates about when transformative AI capabilities will be developed. [Quality: 95/100]
- EA Epistemic Failures in the FTX Era (Concept): This page synthesizes post-FTX critiques of EA's epistemic and governance failures, identifying interlocking problems including donor hero-worship, funding concentration in volatile crypto assets, ... [Quality: 84/100]
Risks
- Bioweapons Risk (Risk): Comprehensive synthesis of AI-bioweapons evidence through early 2026, including the FRI expert survey finding a 5x risk increase from AI capabilities (0.3% → 1.5% annual epidemic probability), Anthro... [Quality: 91/100]
Historical
- Deep Learning Revolution Era (Historical): Comprehensive timeline documenting 2012-2020 AI capability breakthroughs (AlexNet, AlphaGo, GPT-3) and parallel safety field development, with quantified metrics showing capabilities funding outpac... [Quality: 44/100]
- The MIRI Era (Historical): Comprehensive chronological account of AI safety's institutional emergence (2000-2015), from MIRI's founding through Bostrom's Superintelligence to mainstream recognition. Covers key organizations,... [Quality: 31/100]