The Long-Term Future Fund (LTFF) is a regranting program that has distributed $20M since 2017 (approximately $10M to AI safety), with a median grant of $25K. It fills a niche in the AI safety funding ecosystem between personal savings and institutional funders such as Coefficient Giving (median grant $257K): in 2023, LTFF granted $6.67M with a 19.3% acceptance rate.
Projects
- AI Watch: A tracking database by Issa Rice that monitors AI safety organizations, people, funding, and publications as part of his broader knowledge infrastructure ecosystem.
Key Debates
- AI Alignment Research Agendas (Crux): Comprehensive comparison of major AI safety research agendas ($100M+ Anthropic, $50M+ DeepMind, $5-10M nonprofits) with detailed funding, team sizes, and failure-mode coverage (25-65% per agenda).
- Technical AI Safety Research (Crux): Technical AI safety research encompasses six major agendas (mechanistic interpretability, scalable oversight, AI control, evaluations, agent foundations, and robustness) with 500+ researchers.
Other
- Nick Beckstead (Person): Philosopher and EA/longtermism figure whose 2013 dissertation formalized longtermist ethics; the article covers his career arc from academic philosopher to Coefficient Giving.
- Eli Lifland (Person): Top-ranked forecaster and AI safety researcher who co-authored the AI 2027 scenario forecast and co-founded the AI Futures Project.
- Helen Toner (Person): Biographical profile documenting her career from EA Melbourne founder to CSET Interim Executive Director, with a detailed timeline of the November 2023 OpenAI board crisis.
- Paul Christiano (Person): Biography documenting his technical contributions (IDA, debate, scalable oversight) and risk assessment (~10-20% P(doom), AGI 2030s-2040s).
- Oliver Habryka (Person): Co-founder and CEO of Lightcone Infrastructure (operators of LessWrong); a key figure in rationalist and AI safety community infrastructure.
- David Krueger (Person): Assistant professor at the University of Cambridge working on AI alignment and safety; his research focuses on understanding and mitigating risks from advanced AI.
Organizations
- Manifold (Prediction Market): A play-money prediction market with millions of predictions and ~2,000 peak daily users, showing AGI by 2030 at ~60% vs Metaculus ~45%; the platform scored a Brier score of 0.0342 on the 2024 election.
- Manifund: A $2M+ annual charitable regranting platform (founded 2022) that provides fast grants (<1 week) to AI safety projects through expert regrantors ($50K-400K budgets) and fiscal sponsorship.
- Rethink Priorities: A research organization founded in 2018 that grew from 2 to ~130 people by 2022, conducting evidence-based analysis across animal welfare, global health, and AI governance.
Concepts
- Funders Overview: Overview of major funders supporting AI safety, existential risk reduction, and longtermist causes; these organizations and individuals collectively provide hundreds of millions of dollars annually.
- EA Funding Absorption Capacity: The EA ecosystem's ability to absorb large capital inflows is limited by talent pipelines, management capacity, and the challenge of maintaining quality at scale.