SFF has distributed $141M since 2019, drawn primarily from Jaan Tallinn's ~$900M fortune; the 2025 round totaled $34.33M, with 86% going to AI safety. SFF uses a unique "S-process" mechanism in which 6-12 recommenders express utility functions over funding levels and an algorithm allocates grants, favoring projects with enthusiastic champions, though the mechanism can produce volatile grant distributions year to year.
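The allocation idea can be illustrated with a toy sketch. This is a hypothetical simplification of the S-process, not its actual implementation: assume each recommender supplies a concave utility function mapping dollars granted to an organization onto perceived value, and the budget is spent in small increments on whichever organization currently has the highest marginal utility. Taking the max across recommenders (rather than the mean) is what favors a project with one enthusiastic champion over one with lukewarm consensus.

```python
# Toy sketch of an S-process-style allocator (hypothetical simplification,
# not the real SFF algorithm). All names and functions here are illustrative.
import math

def allocate(budget, orgs, utilities, step=10_000):
    """utilities is a list of dicts, one per recommender;
    utilities[i][org] is a callable: dollars -> utility."""
    grants = {org: 0 for org in orgs}
    for _ in range(int(budget // step)):
        def marginal(org):
            # Marginal utility of the next $step, taken as the max over
            # recommenders: one enthusiastic champion is enough.
            return max(
                u[org](grants[org] + step) - u[org](grants[org])
                for u in utilities
            )
        best = max(orgs, key=marginal)
        grants[best] += step
    return grants

# Example: two recommenders, two orgs. Recommender 2 is an enthusiastic
# champion of org B (utility scaled 3x), so B attracts most of the budget
# even though opinions on A and B are otherwise identical.
utils = [
    {"A": math.sqrt, "B": math.sqrt},
    {"A": math.sqrt, "B": lambda x: 3 * math.sqrt(x)},
]
grants = allocate(1_000_000, ["A", "B"], utils)
```

Because marginal utilities shrink as an organization's grant grows (concavity), the greedy loop naturally spreads money across projects rather than giving everything to one; the volatility noted above arises because a single changed utility function can shift many increments at once.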
Approaches
- Dangerous Capability Evaluations: Comprehensive synthesis showing dangerous capability evaluations are now standard practice (95%+ of frontier models) but face critical limitations: AI capabilities double every 7 months while external... (Quality: 64/100)
Analysis
- Anthropic IPO: Anthropic is actively preparing for a potential 2026 IPO with concrete steps like hiring Wilson Sonsini and conducting bank discussions, though timeline uncertainty remains with prediction markets ... (Quality: 65/100)
- Anthropic (Funder): Comprehensive model of EA-aligned philanthropic capital at Anthropic. At $380B valuation (Series G, Feb 2026, $30B raised): $27-76B risk-adjusted EA capital expected. Total funding raised exceeds $... (Quality: 65/100)
- Donations List Website (Project): Comprehensive documentation of an open-source database tracking $72.8B in philanthropic donations (1969-2023) across 75+ donors, with particular coverage of EA/AI safety funding. The page thoroughl... (Quality: 52/100)
Organizations
- FAR AI: FAR AI is an AI safety research nonprofit founded in July 2022 by Adam Gleave (CEO) and Karl Berzins (Co-founder & President). Based in Berkeley, California, the organization conducts technical res... (Quality: 76/100)
- OpenAI Foundation: The OpenAI Foundation holds 26% equity (~$130B) in OpenAI Group PBC with governance control, but detailed analysis of board member incentives reveals strong bias toward capital preservation over ph... (Quality: 87/100)
- Long-Term Future Fund (LTFF): LTFF is a regranting program that has distributed $20M since 2017 (approximately $10M to AI safety) with median grants of $25K, filling a critical niche between personal savings and institutional f... (Quality: 56/100)
- Redwood Research: A nonprofit AI safety and security research organization founded in 2021, known for pioneering AI Control research, developing causal scrubbing interpretability methods, and conducting landmark ali... (Quality: 78/100)
- Coefficient Giving: Coefficient Giving (formerly Open Philanthropy) has directed $4B+ in grants since 2014, including $336M to AI safety (~60% of external funding). The organization spent ~$50M on AI safety in 2024, w... (Quality: 55/100)
- METR: METR conducts pre-deployment dangerous capability evaluations for frontier AI labs (OpenAI, Anthropic, Google DeepMind), testing autonomous replication, cybersecurity, CBRN, and manipulation capabi... (Quality: 66/100)
Other
- Max Tegmark (Person): Comprehensive biographical profile of Max Tegmark covering his transition from cosmology to AI safety advocacy, his role founding the Future of Life Institute, and his controversial Mathematical Un... (Quality: 63/100)
- Vipul Naik (Person): Vipul Naik is a mathematician and EA community member who has funded ~$255K in contract research (primarily to Sebastian Sanchez and Issa Rice) and created the Donations List Website tracking $72.8... (Quality: 63/100)
Concepts
- EA Shareholder Diversification from Anthropic: The EA ecosystem faces extreme portfolio concentration risk with $27-76B in risk-adjusted capital at the $380B Series G, scaling to $42-119B at March 2026 secondary market pricing (~$595B implied)... (Quality: 60/100)
- Funders Overview: Overview of major funders supporting AI safety, existential risk reduction, and longtermist causes. These organizations and individuals collectively provide hundreds of millions of dollars annually... (Quality: 3/100)