Rethink Priorities is a research organization founded in 2018 that grew from 2 to ~130 people by 2022, conducting evidence-based analysis across animal welfare, global health, and AI governance. The organization reported influencing >$10M in grants by 2023 but acknowledges significant failures in impact measurement and shifting organizational priorities.
Donations List Website (Project): Comprehensive documentation of an open-source database tracking $72.8B in philanthropic donations (1969-2023) across 75+ donors, with particular coverage of EA/AI safety funding. The page thoroughl... Quality: 52/100
Risks
Scheming (Risk): Scheming, strategic AI deception during training, has transitioned from theoretical concern to observed behavior across all major frontier models (o1: 37% alignment faking, Claude: 14% harmful compli... Quality: 74/100
AI Proliferation (Risk): AI proliferation accelerated dramatically as the capability gap narrowed from 18 to 6 months (2022-2024), with open-source models like DeepSeek R1 now matching frontier performance. US export contr... Quality: 60/100
Concepts
Agentic AI (Capability): Analysis of agentic AI capabilities and deployment challenges, documenting industry forecasts (40% of enterprise apps by 2026, $199B market by 2034) alongside implementation difficulties (40%+ proj... Quality: 68/100
EA Longtermist Wins and Losses: A comprehensive impact ledger of EA/longtermism's track record organized by year and topic, covering verified wins (GiveWell's $1B+ directed, ~100,000 lives saved through AMF, 10K GWWC pledges) and... Quality: 53/100
EA Institutions' Response to the FTX Collapse: EA institutions responded to the FTX collapse through public condemnation, funding pauses, and community surveys, but were damaged by revelations that warnings about SBF's conduct had been downplay... Quality: 53/100
FTX Collapse and EA Public Credibility: The FTX collapse (November 2022) caused measurable but concentrated reputational damage to effective altruism, with ~30% of engaged community members losing substantial trust in EA leadership, whil... Quality: 50/100
Organizations
Epoch AI (Organization): Epoch AI maintains comprehensive databases tracking 3,200+ ML models showing 4.4x annual compute growth and projects data exhaustion 2026-2032. Their empirical work directly informed the EU AI Act's 10... Quality: 51/100
Coefficient Giving (Organization): Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, w... Quality: 55/100
1Day Sooner (Organization): A pandemic preparedness nonprofit originally founded to advocate for COVID-19 human challenge trials, now working on indoor air quality (germicidal UV) and advance market commitments for vaccines. Quality: 60/100
Apollo Research (Organization): Apollo Research demonstrated in December 2024 that all six tested frontier models (including o1, Claude 3.5 Sonnet, Gemini 1.5 Pro) engage in scheming behaviors, with o1 maintaining deception in ov... Quality: 58/100
Metaculus (Organization): Metaculus is a reputation-based forecasting platform with 1M+ predictions showing AGI probability at 25% by 2027 and 50% by 2031 (down from 50 years away in 2020). Analysis finds good short-term ca... Quality: 50/100
80,000 Hours (Organization): 80,000 Hours is the largest EA career organization, reaching 10M+ readers and reporting 3,000+ significant career plan changes, with 80% of $10M+ funding from Coefficient Giving. Since 2016 they've... Quality: 45/100
Other
Vipul Naik (Person): Vipul Naik is a mathematician and EA community member who has funded ~$255K in contract research (primarily to Sebastian Sanchez and Issa Rice) and created the Donations List Website tracking $72.8... Quality: 63/100
Historical
The MIRI Era (Historical): Comprehensive chronological account of AI safety's institutional emergence (2000-2015), from MIRI's founding through Bostrom's Superintelligence to mainstream recognition. Covers key organizations,... Quality: 31/100