Seldon Lab is a San Francisco-based AI safety accelerator, founded in early 2025, that combines research publication with startup investment. It claims early success, with portfolio companies raising $10M+ and selling to major AI companies. The article documents a new organizational model that pairs safety research with commercial acceleration.
Anthropic Core Views (Safety Agenda): Anthropic allocates 15-25% of R&D (~$100-200M annually) to safety research, including the world's largest interpretability team (40-60 researchers), while maintaining $5B+ revenue by 2025. Their RSP... Quality: 62/100
Organizations
Survival and Flourishing Fund (Organization): SFF has distributed $141M since 2019 (primarily from Jaan Tallinn's ~$900M fortune), with the 2025 round totaling $34.33M (86% to AI safety). Uses a unique S-process mechanism where 6-12 recommenders exp... Quality: 59/100
Manifund (Organization): Manifund is a $2M+ annual charitable regranting platform (founded 2022) that provides fast grants (<1 week) to AI safety projects through expert regrantors ($50K-400K budgets), fiscal sponsorship, ... Quality: 50/100
Lionheart Ventures (Organization): Lionheart Ventures is a small venture capital firm ($25M inaugural fund) focused on AI safety and mental health investments, notable for its investment in Anthropic and integration with the EA comm... Quality: 50/100
Schmidt Futures (Organization): Schmidt Futures is a major philanthropic initiative founded by Eric Schmidt that has committed substantial funding to AI safety research ($135M across the AI2050 and AI Safety Science programs) while a... Quality: 60/100
FutureSearch (Organization): FutureSearch is an AI forecasting startup founded by former Metaculus leaders that combines LLM research agents with human judgment, demonstrating some prediction accuracy but facing uncertain comm... Quality: 50/100
Concepts
Safety Orgs Overview: A well-organized reference overview of ~20 AI safety organizations categorized by function (alignment research, policy, field-building), with a comparative budget/headcount table showing estimated ... Quality: 48/100
Self-Improvement and Recursive Enhancement (Capability): Comprehensive analysis of AI self-improvement, from current AutoML systems (23% training speedups via AlphaEvolve) to theoretical intelligence-explosion scenarios, with expert consensus at ~50% prob... Quality: 69/100
Other
Buck Shlegeris (Person): CEO of Redwood Research, focused on AI safety research and alignment techniques.
Esben Kran (Person): Founder of Apart Research, an AI safety research organization that accelerates safety research through hackathons, fellowships, and collaborative sprints. Founded Apart Research in 2022 at age 22 a...
Jeremie Harris (Person): Co-founder and CEO of Gladstone AI, an AI safety advisory firm that provides briefings, training, and risk assessments to government and enterprise clients. Y Combinator alumnus.