Manifund (founded 2022) is a charitable regranting platform that moves $2M+ per year to AI safety projects. It offers fast grant decisions (under one week) via expert regrantors with individual budgets of $50K-$400K, plus fiscal sponsorship and experimental impact certificates. The platform distributed $2.06M in 2023 (~40% to AI safety research), with a growing focus on AI safety evaluations and field-building.
- Prediction Markets (AI Forecasting) (Approach): Prediction markets achieve Brier scores of 0.16-0.24 (15-25% better than polls) by aggregating dispersed information through financial incentives, with platforms handling $1-3B annually. For AI saf... (Quality: 56/100)
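For context on the Brier scores cited in the entry above: a Brier score is the mean squared error between probabilistic forecasts and binary outcomes, so 0.0 is perfect and always forecasting an uninformative 0.5 scores 0.25. A minimal sketch of the computation follows; the forecast and outcome values are hypothetical illustrations, not data from any platform listed here.

```python
# Minimal sketch of a Brier score computation for binary questions.
# The numbers below are made-up illustration values.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary
    outcomes (0 = did not happen, 1 = happened).
    0.0 is a perfect score; always guessing 0.5 scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical market prices read as probabilities, with resolutions:
forecasts = [0.90, 0.20, 0.65, 0.10]
outcomes  = [1,    0,    1,    0]
print(brier_score(forecasts, outcomes))  # 0.045625
```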
Analysis
- Stampy / AISafety.info (Project): Detailed reference page for AISafety.info/Stampy, covering technical architecture, team structure, funding status (2024 emergency campaign seeking $40K), and community programs like the paused Dist... (Quality: 45/100)
Other
- David Dalrymple (Person): A comprehensive biographical profile of David Dalrymple covering his role directing ARIA's £59M Safeguarded AI programme, his technical approach to formal verification for safety-critical AI, and h... (Quality: 61/100)
- Leopold Aschenbrenner (Person): Comprehensive biographical profile of Leopold Aschenbrenner, covering his trajectory from Columbia valedictorian to OpenAI researcher to $1.5B hedge fund founder, with detailed documentation of his... (Quality: 61/100)
- Nuño Sempere (Person): Nuño Sempere is a Spanish superforecaster who co-founded the highly successful Samotsvety forecasting group and now runs Sentinel for global catastrophe early warning, while being known for skeptic... (Quality: 50/100)
- Robin Hanson (Person): Comprehensive biographical entry on Robin Hanson covering his contributions to prediction markets, futarchy governance, and skeptical AI safety positions. The page provides valuable context on a si... (Quality: 53/100)
- Evan Hubinger (Person): Comprehensive biography of Evan Hubinger documenting his influential theoretical work on mesa-optimization/deceptive alignment (2019, 205+ citations) and empirical demonstrations at Anthropic showi... (Quality: 43/100)
- Eliezer Yudkowsky (Person): Comprehensive biographical profile of Eliezer Yudkowsky covering his foundational contributions to AI safety (CEV, early problem formulation, agent foundations) and notably pessimistic views on AI ... (Quality: 35/100)
Organizations
- MATS ML Alignment Theory Scholars program (Organization): MATS is a well-documented 12-week fellowship program that has successfully trained 213 AI safety researchers with strong career outcomes (80% in alignment work) and research impact (160+ publicatio... (Quality: 60/100)
- Seldon Lab (Organization): Seldon Lab is a San Francisco-based AI safety accelerator founded in early 2025 that combines research publication with startup investment, claiming early success with portfolio companies raising $... (Quality: 45/100)
- Lighthaven (Event Venue) (Organization): Lighthaven is a Berkeley conference venue operated by Lightcone Infrastructure that serves as physical infrastructure for AI safety, rationality, and EA communities. While well-documented as a faci... (Quality: 40/100)
- Sentinel (Catastrophic Risk Foresight) (Organization): Sentinel is a catastrophic risk foresight organization co-founded by Nuño Sempere and Rai Sur, operating informally from around 2023 and incorporated as a US 501(c)(3) nonprofit in or after late 20... (Quality: 39/100)
- Polymarket (Organization): This is a comprehensive overview of Polymarket as a prediction market platform, covering its history, mechanics, and accuracy, but has minimal relevance to AI safety beyond brief mentions in the EA... (Quality: 33/100)
- Gratified (Organization): Gratified is an early-stage coffee and art community organization in San Francisco that hosts events at EA-adjacent venues, including a 24-hour Winter Solstice event at Mox SF in December 2025. It ... (Quality: 25/100)
Concepts
- Funders Overview: Overview of major funders supporting AI safety, existential risk reduction, and longtermist causes. These organizations and individuals collectively provide hundreds of millions of dollars annually... (Quality: 3/100)