Overview
Holden Karnofsky was co-CEO of Coefficient Giving (formerly Open Philanthropy), the most influential grantmaker in AI safety and existential risk. Through Coefficient, he directed over $300 million toward AI safety research and governance, fundamentally transforming it from a fringe academic interest into a well-funded field with hundreds of researchers. In 2025, he joined Anthropic.
His strategic thinking has shaped how the effective altruism community prioritizes AI risk through frameworks like the "Most Important Century" thesis, which argues that we may live in the century that determines humanity's entire future trajectory due to transformative AI development.

| Funding Achievement | Amount | Impact |
|---|---|---|
| Total AI safety grants | $300M+ | Enabled field growth from ~dozens to hundreds of researchers |
| Anthropic investment | $580M+ | Created major safety-focused AI lab |
| Field building grants | $50M+ | Established academic programs and research infrastructure |
Risk Assessment

| Risk Category | Karnofsky's Assessment | Evidence | Timeline |
|---|---|---|---|
| Transformative AI | ~15% by 2036, ~50% by 2060 | Bio anchors framework (see the sketch below) | This century |
| Existential importance | "Most important century" | AI could permanently shape humanity's trajectory | 2021-2100 |
| Tractability | High enough for top priority | Open Phil's largest focus area allocation | Current |
| Funding adequacy | Severely underfunded | Still seeking to grow field substantially | Ongoing |
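The timeline row compresses a full probability distribution into two anchor points. As a rough illustration of how two quantiles pin down a smooth forecast curve, the sketch below fits a normal CDF over arrival year to the table's anchors. The choice of a normal distribution (and the use of scipy) is an assumption made here for illustration; it is not the bio anchors methodology, and the resulting tail probabilities should not be read as Karnofsky's own estimates.

```python
from scipy.stats import norm

# Illustrative sketch only: interpolate a smooth timeline from the two
# anchor estimates in the table (~15% by 2036, ~50% by 2060). The normal
# CDF is an assumed functional form, not the bio anchors model.
p_2036 = 0.15
p_2060 = 0.50

mu = 2060.0                              # the 50% quantile pins the median
sigma = (2036 - mu) / norm.ppf(p_2036)   # solve P(T <= 2036) = 0.15 for sigma

for year in (2036, 2050, 2060):
    print(f"P(transformative AI by {year}) ~= {norm.cdf(year, mu, sigma):.0%}")
# -> 15% by 2036, ~33% by 2050, 50% by 2060
```

Any two-parameter distribution pinned to the same quantiles would reproduce the table's anchors exactly; the shape between and beyond them is where a full timeline model like bio anchors does the real work.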