80,000 Hours is the largest EA career organization, reaching 10M+ readers and reporting 3,000+ significant career plan changes, with over 80% of its $10M+ funding coming from Coefficient Giving. Since 2016 it has prioritized AI safety, shifting to an explicit AGI focus in 2025, and it provides career guidance through its website, podcast, and job board.
Approach
- AI Safety Training Programs: Comprehensive guide to AI safety training programs including MATS (78% alumni in alignment work, 100+ scholars annually), Anthropic Fellows ($2,100/week stipend, 40%+ hired full-time), LASR Labs (5... (Quality: 70/100)
Analysis
- AI Safety Research Allocation Model: Analysis finds AI safety research suffers 30-50% efficiency losses from industry dominance (60-70% of ~$700M annually), with critical areas like multi-agent dynamics and corrigibility receiving 3-5... (Quality: 65/100)
- AI Safety Researcher Gap Model: Quantifies the AI safety talent shortage: currently 300-800 unfilled positions (a 30-50% gap), with training pipelines producing only 220-450 researchers annually when 500-1,500 are needed. Projects gaps co... (Quality: 67/100)
Concepts
- EA Epistemic Failures in the FTX Era: This page synthesizes post-FTX critiques of EA's epistemic and governance failures, identifying interlocking problems including donor hero-worship, funding concentration in volatile crypto assets, ... (Quality: 84/100)
- FTX Collapse: Lessons for EA Funding Resilience: The November 2022 collapse of FTX resulted in approximately $160M in committed EA grants that were not disbursed, organizational restructuring across the ecosystem, and revealed structural vulnerab... (Quality: 78/100)
- AI Welfare and Digital Minds: AI welfare represents an emerging field examining whether AI systems deserve moral consideration based on consciousness, sentience, or agency, with growing institutional support from organizations ... (Quality: 63/100)
- Earning to Give: Earning to Give is an EA career strategy emphasizing high-income jobs to fund effective charities, which peaked around 2012–2015 before being de-emphasized by major EA organizations; the FTX collap... (Quality: 63/100)
Organizations
- FTX: FTX was a major crypto exchange that collapsed in November 2022 due to fraud, with its AI safety relevance stemming from FTX Future Fund grants to organizations like Anthropic and its ties to the e... (Quality: 74/100)
- Giving What We Can: Comprehensive reference page on Giving What We Can covering its history, pledge structure, research approach, and criticisms; notes 10,000+ pledgers, $340M+ donated historically, $80M in 2024, and ... (Quality: 62/100)
- Rethink Priorities: Rethink Priorities is a research organization founded in 2018 that grew from 2 to ~130 people by 2022, conducting evidence-based analysis across animal welfare, global health, and AI governance. Th... (Quality: 60/100)
- Coefficient Giving: Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, w... (Quality: 55/100)
- NTI | bio (Nuclear Threat Initiative - Biological Program): The biosecurity division of the Nuclear Threat Initiative, NTI | bio works to reduce global catastrophic biological risks through DNA synthesis screening and BWC strengthening. (Quality: 60/100)
- Survival and Flourishing Fund: SFF has distributed $141M since 2019 (primarily from Jaan Tallinn's ~$900M fortune), with the 2025 round totaling $34.33M (86% to AI safety). Uses a unique S-process mechanism where 6-12 recommenders exp... (Quality: 59/100)
Key Debates
- Technical AI Safety Research (Crux): Technical AI safety research encompasses six major agendas (mechanistic interpretability, scalable oversight, AI control, evaluations, agent foundations, and robustness) with 500+ researchers and $... (Quality: 66/100)
- AI Governance and Policy (Crux): Comprehensive analysis of AI governance mechanisms estimating a 30-50% probability of meaningful regulation by 2027 and 5-25% x-risk reduction potential through coordinated international approaches. ... (Quality: 66/100)
Risks
- Power-Seeking AI: Formal proofs demonstrate that optimal policies seek power in MDPs (Turner et al. 2021), now empirically validated: OpenAI o3 sabotaged shutdown in 79% of tests (Palisade 2025), and Claude 3 Opus showed... (Quality: 67/100)
Other
- Sam Bankman-Fried (Person): Sam Bankman-Fried, convicted of fraud and sentenced to 25 years for misappropriating FTX customer funds, damaged both cryptocurrency markets and the effective altruism community, raising substantiv... (Quality: 55/100)
- Ajeya Cotra (Person): Ajeya Cotra is a member of technical staff at METR and former senior advisor at Coefficient Giving (formerly Open Philanthropy), where she led technical AI safety grantmaking including a $25M agent... (Quality: 55/100)
Historical
- The MIRI Era: Comprehensive chronological account of AI safety's institutional emergence (2000-2015), from MIRI's founding through Bostrom's Superintelligence to mainstream recognition. Covers key organizations,... (Quality: 31/100)