Holden Karnofsky
holden-karnofsky (E156)
Path: /knowledge-base/people/holden-karnofsky/
Page Metadata
{
"id": "holden-karnofsky",
"numericId": null,
"path": "/knowledge-base/people/holden-karnofsky/",
"filePath": "knowledge-base/people/holden-karnofsky.mdx",
"title": "Holden Karnofsky",
"quality": 40,
"importance": 25,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2026-01-29",
"llmSummary": "Holden Karnofsky directed $300M+ in AI safety funding through Open Philanthropy, growing the field from ~20 to 400+ FTE researchers and developing influential frameworks like the 'Most Important Century' thesis (15% transformative AI by 2036, 50% by 2060). His funding decisions include a $580M Anthropic investment and establishment of 15+ university AI safety programs.",
"structuredSummary": null,
"description": "Former co-CEO of Coefficient Giving (formerly Open Philanthropy) who directed $300M+ toward AI safety, shaped EA prioritization, and developed influential frameworks like the \"Most Important Century\" thesis. Now at Anthropic.",
"ratings": {
"novelty": 2,
"rigor": 4.5,
"actionability": 2,
"completeness": 6
},
"category": "people",
"subcategory": null,
"clusters": [
"community",
"ai-safety",
"governance"
],
"metrics": {
"wordCount": 1662,
"tableCount": 14,
"diagramCount": 0,
"internalLinks": 53,
"externalLinks": 0,
"footnoteCount": 0,
"bulletRatio": 0.3,
"sectionCount": 34,
"hasOverview": true,
"structuralScore": 10
},
"suggestedQuality": 67,
"updateFrequency": 45,
"evergreen": true,
"wordCount": 1662,
"unconvertedLinks": [],
"unconvertedLinkCount": 0,
"convertedLinkCount": 24,
"backlinkCount": 1,
"redundancy": {
"maxSimilarity": 17,
"similarPages": [
{
"id": "dario-amodei",
"title": "Dario Amodei",
"path": "/knowledge-base/people/dario-amodei/",
"similarity": 17
},
{
"id": "toby-ord",
"title": "Toby Ord",
"path": "/knowledge-base/people/toby-ord/",
"similarity": 15
},
{
"id": "chai",
"title": "CHAI (Center for Human-Compatible AI)",
"path": "/knowledge-base/organizations/chai/",
"similarity": 14
},
{
"id": "coefficient-giving",
"title": "Coefficient Giving",
"path": "/knowledge-base/organizations/coefficient-giving/",
"similarity": 14
},
{
"id": "conjecture",
"title": "Conjecture",
"path": "/knowledge-base/organizations/conjecture/",
"similarity": 14
}
]
}
}
Entity Data
{
"id": "holden-karnofsky",
"type": "person",
"title": "Holden Karnofsky",
"description": "Holden Karnofsky is the former Co-CEO of Coefficient Giving (formerly Open Philanthropy), one of the largest funders of AI safety research and related work. Through Coefficient Giving, he directed hundreds of millions of dollars toward reducing existential risks from AI, making him one of the most influential figures in shaping the field's growth and direction. In 2025, he joined Anthropic.\n\nKarnofsky's intellectual contributions have been equally significant. His \"Most Important Century\" series of blog posts on Cold Takes presents a detailed argument that the 21st century could be the most pivotal in human history due to transformative AI. He has developed frameworks for thinking about AI timelines, the potential for a \"galaxy-brained\" AI to manipulate humans, and how philanthropic funding should be allocated given deep uncertainty about AI trajectories.\n\nBefore focusing on AI risk, Karnofsky co-founded GiveWell, a charity evaluator that became the intellectual foundation for effective altruism. His transition to prioritizing AI safety reflects a broader shift in the EA movement. Through Coefficient Giving's grants to organizations like Anthropic, MIRI, Redwood Research, and many others, Karnofsky helped build the institutional infrastructure of AI safety as a field.\n",
"tags": [
"effective-altruism",
"ai-safety-funding",
"ai-timelines",
"transformative-ai",
"x-risk",
"most-important-century",
"grantmaking"
],
"relatedEntries": [
{
"id": "anthropic",
"type": "lab"
},
{
"id": "toby-ord",
"type": "researcher"
}
],
"sources": [
{
"title": "Coefficient Giving",
"url": "https://coefficientgiving.org"
},
{
"title": "Cold Takes Blog",
"url": "https://www.cold-takes.com/"
},
{
"title": "Most Important Century Series",
"url": "https://www.cold-takes.com/most-important-century/"
},
{
"title": "AI Timelines Post",
"url": "https://www.cold-takes.com/where-ai-forecasting-stands-today/"
}
],
"lastUpdated": "2025-12",
"website": "https://www.openphilanthropy.org",
"customFields": []
}
Canonical Facts (0)
No facts for this entity
External Links
{
"eaForum": "https://forum.effectivealtruism.org/topics/holden-karnofsky"
}
Backlinks (1)
| id | title | type | relationship |
|---|---|---|---|
| toby-ord | Toby Ord | researcher | — |
Frontmatter
{
"title": "Holden Karnofsky",
"description": "Former co-CEO of Coefficient Giving (formerly Open Philanthropy) who directed $300M+ toward AI safety, shaped EA prioritization, and developed influential frameworks like the \"Most Important Century\" thesis. Now at Anthropic.",
"sidebar": {
"order": 10
},
"quality": 40,
"llmSummary": "Holden Karnofsky directed $300M+ in AI safety funding through Open Philanthropy, growing the field from ~20 to 400+ FTE researchers and developing influential frameworks like the 'Most Important Century' thesis (15% transformative AI by 2036, 50% by 2060). His funding decisions include a $580M Anthropic investment and establishment of 15+ university AI safety programs.",
"lastEdited": "2026-01-29",
"importance": 25,
"update_frequency": 45,
"ratings": {
"novelty": 2,
"rigor": 4.5,
"actionability": 2,
"completeness": 6
},
"clusters": [
"community",
"ai-safety",
"governance"
],
"entityType": "person"
}
Raw MDX Source
---
title: "Holden Karnofsky"
description: "Former co-CEO of Coefficient Giving (formerly Open Philanthropy) who directed $300M+ toward AI safety, shaped EA prioritization, and developed influential frameworks like the \"Most Important Century\" thesis. Now at Anthropic."
sidebar:
order: 10
quality: 40
llmSummary: "Holden Karnofsky directed $300M+ in AI safety funding through Open Philanthropy, growing the field from ~20 to 400+ FTE researchers and developing influential frameworks like the 'Most Important Century' thesis (15% transformative AI by 2036, 50% by 2060). His funding decisions include a $580M Anthropic investment and establishment of 15+ university AI safety programs."
lastEdited: "2026-01-29"
importance: 25
update_frequency: 45
ratings:
novelty: 2
rigor: 4.5
actionability: 2
completeness: 6
clusters: ["community", "ai-safety", "governance"]
entityType: person
---
import {DataInfoBox, R, EntityLink, DataExternalLinks} from '@components/wiki';
<DataExternalLinks pageId="holden-karnofsky" />
<DataInfoBox entityId="E156" />
## Overview
Holden Karnofsky was co-CEO of <R id="dd0cf0ff290cc68e"><EntityLink id="E521">Coefficient Giving</EntityLink></R> (formerly <EntityLink id="E552">Open Philanthropy</EntityLink>), one of the most influential grantmakers in AI safety and existential risk. Through Coefficient, he directed over \$300 million toward AI safety research and governance, helping transform the field from a fringe academic interest into a well-funded discipline with hundreds of researchers. In 2025, he joined Anthropic.
His strategic thinking has shaped how the effective altruism community prioritizes AI risk, most notably through the <R id="1a20dfc897a0933a">"Most Important Century"</R> thesis, which argues that we may live in the century that determines humanity's entire future trajectory because of transformative AI.
| Funding Achievement | Amount | Impact |
|-------------------|---------|---------|
| Total AI safety grants | \$300M+ | Enabled field growth from a few dozen to hundreds of researchers |
| Anthropic investment | \$580M+ | Created major safety-focused AI lab |
| Field building grants | \$50M+ | Established academic programs and research infrastructure |
## Risk Assessment
| Risk Category | Karnofsky's Assessment | Evidence | Timeline |
|---------------|----------------------|----------|----------|
| Transformative AI | ~15% by 2036, ~50% by 2060 | <R id="62d29d310d596d2a">Bio anchors framework</R> | This century |
| Existential importance | "Most important century" | AI could permanently shape humanity's trajectory | 2021-2100 |
| Tractability | High enough for top priority | Open Phil's largest focus area allocation | Current |
| Funding adequacy | Severely underfunded | Still seeking to grow field substantially | Ongoing |
## Career Evolution and Major Achievements
### Early Career (2007-2014): Building Effective Altruism
| Period | Role | Key Achievements |
|--------|------|------------------|
| 2007-2011 | Co-founder, <R id="9315689a12534405">GiveWell</R> | Pioneered rigorous charity evaluation methodology |
| 2011-2014 | Launched GiveWell Labs (later Open Philanthropy, now Coefficient Giving) | Expanded beyond global health to cause prioritization |
| 2012-2014 | EA movement building | Helped establish effective altruism as a global movement |
### Transition to AI Focus (2014-2018)
**Initial AI engagement:**
- 2014: First significant AI safety grants through Coefficient (then Open Philanthropy)
- 2016: Major funding to <EntityLink id="E57">Center for Human-Compatible AI (CHAI)</EntityLink>
- 2017: Early <EntityLink id="E218">OpenAI</EntityLink> funding (before pivot to for-profit)
- 2018: Growing conviction led to AI risk becoming a top priority
### AI Safety Leadership (2018-Present)
**Major funding decisions:**
- 2021: <R id="27ce8f3b89dcdaa1">\$580M investment in Anthropic</R> to create safety-focused lab
- 2022: Establishment of <R id="5714a008527a379a">AI safety university programs</R>
- 2023: Expanded governance funding addressing AI regulation
## Strategic Frameworks and Intellectual Contributions
### The "Most Important Century" Thesis
**Core argument structure:**
| Component | Claim | Implication |
|-----------|-------|-------------|
| Technology potential | Transformative AI possible this century | Could exceed agricultural/industrial revolution impacts |
| Speed differential | AI transition faster than historical precedents | Less time to adapt and coordinate |
| Leverage moment | Our actions now shape outcomes | Unlike past revolutions where individuals had little influence |
| Conclusion | This century uniquely important | Justifies enormous current investment |
**Supporting evidence:**
- <R id="62d29d310d596d2a">Biological anchors methodology</R> for AI timelines
- Historical analysis of technological transitions
- Economic modeling of AI impact potential
### Bio Anchors Framework
Developed by <R id="63739057bf3d421b">Ajeya Cotra</R> at Open Philanthropy and championed by Karnofsky, this framework estimates AI development timelines by comparing the computation required for transformative AI to biological benchmarks:
| Anchor Type | Computation Estimate | Timeline Implication |
|-------------|---------------------|----------------------|
| Human brain | ≈10^15 FLOP/s | Medium-term (2030s-2040s) |
| Human lifetime | ≈10^24 FLOP | Longer-term (2040s-2050s) |
| Evolution | ≈10^41 FLOP | Much longer-term if needed |
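To make the anchor logic concrete, the sketch below shows the shape of the calculation: pick a training-compute requirement implied by an anchor, assume a growth rate for the largest affordable training run, and solve for the year the two cross. The specific numbers are illustrative placeholders, not estimates from the bio anchors report.

```python
import math

def crossover_year(required_flop: float,
                   flop_available_now: float,
                   annual_growth: float,
                   current_year: int = 2024) -> float:
    """Year when affordable training compute first reaches the anchor's requirement."""
    if flop_available_now >= required_flop:
        return float(current_year)
    years_needed = math.log(required_flop / flop_available_now) / math.log(annual_growth)
    return current_year + years_needed

# Illustrative inputs (assumptions for this sketch, not Cotra's figures):
#   required_flop      = 1e30  -- hypothetical anchor-implied training compute
#   flop_available_now = 1e26  -- hypothetical largest affordable run today
#   annual_growth      = 2.5   -- hypothetical yearly growth in affordable compute
print(round(crossover_year(1e30, 1e26, 2.5)))  # -> 2034 under these assumptions
```

The report itself places probability distributions over these inputs rather than point estimates, which is how it produces probability-by-year forecasts rather than a single crossover year.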
## Coefficient Giving Funding Strategy
### Portfolio Approach
| Research Area | Funding Focus | Key Recipients | Rationale |
|---------------|---------------|----------------|-----------|
| Technical alignment | \$100M+ | <EntityLink id="E22">Anthropic</EntityLink>, <EntityLink id="E557">Redwood Research</EntityLink> | Direct work on making AI systems safer |
| <EntityLink id="E608">AI governance</EntityLink> | \$80M+ | <R id="f0d95954b449240a"><EntityLink id="E524">Center for Security and Emerging Technology</EntityLink></R>, policy fellowships | Institutional responses to AI development |
| Field building | \$50M+ | University programs, individual researchers | Growing research community |
| Compute governance | \$20M+ | <EntityLink id="E464">Compute monitoring research</EntityLink> | Oversight of AI development resources |
### Grantmaking Philosophy
**Key principles:**
- **Hits-based giving**: Expect most grants to have limited impact and a few to be transformative
- **Long time horizons**: Patient capital for 5-10 year research projects
- **Active partnership**: Strategic guidance beyond just funding
- **Portfolio diversification**: Multiple approaches given uncertainty
**Notable funding decisions:**
- <R id="27ce8f3b89dcdaa1">Anthropic investment</R>: \$580M to create safety-focused competitor to OpenAI
- <EntityLink id="E202">MIRI funding</EntityLink>: Early support for foundational <EntityLink id="E439">AI alignment</EntityLink> research
- Policy fellowships: Placing AI safety researchers in government positions
## Current Views and Assessment
### Karnofsky's AI Risk Timeline
Based on public statements and Coefficient Giving priorities from 2023-2024, Karnofsky's views reflect a combination of timeline estimates derived from technical forecasting and strategic assessments about field readiness and policy urgency:
| Assessment (year) | Estimate | Reasoning |
|---------------|----------|-----------|
| Transformative AI (2022) | 15% by 2036, 50% by 2060 | Derived from the bio anchors framework developed by Ajeya Cotra, which estimates AI development timelines by comparing required computation to biological systems. This central estimate suggests transformative AI is more likely than not within this century, though substantial uncertainty remains around both shorter and longer timelines. |
| Field adequacy (2024) | Still severely underfunded | Despite directing over \$300M toward AI safety and growing the field from approximately 20 to 400+ FTE researchers, Coefficient Giving continues aggressive hiring and grantmaking. This assessment reflects the belief that the scale of the challenge—ensuring safe development of transformative AI—far exceeds current resources and talent devoted to it. |
| Policy urgency (2024) | High priority | Coefficient has significantly increased governance focus, funding policy research, placing fellows in government positions, and supporting regulatory frameworks. This shift recognizes that technical alignment work alone is insufficient—institutional and policy responses are critical to managing AI development trajectories and preventing <EntityLink id="E239">racing dynamics</EntityLink>. |
### Evolution of Views (2020-2024)
| Year | Key Update | Reasoning |
|------|------------|-----------|
| 2021 | "Most Important Century" series | Crystallized long-term strategic thinking |
| 2022 | Increased policy focus | Recognition of need for governance alongside technical work |
| 2023 | Anthropic model success | Validation of safety-focused lab approach |
| 2024 | Accelerated timelines concern | <R id="b80187df7777104d">Shorter timelines than bio anchors suggested</R> |
## Influence on AI Safety Field
### Field Growth Metrics
| Metric | 2015 | 2024 | Growth Factor |
|--------|------|------|---------------|
| FTE researchers | ≈20 | ≈400 | 20x |
| Annual funding | <\$5M | >\$200M | 40x |
| University programs | 0 | 15+ | New category |
| Major organizations | 2-3 | 20+ | 7x |
### Institutional Impact
**Academic legitimacy:**
- Funding enabled <R id="a9815b0be81c47c0">AI safety courses</R> at major universities
- Supported tenure-track positions focused on alignment research
- Created pathway for traditional CS researchers to enter field
**Policy influence:**
- Funded experts now advising <EntityLink id="E365">US AI Safety Institute</EntityLink>
- Supported research informing <R id="1ad6dc89cded8b0c"><EntityLink id="E127">EU AI Act</EntityLink></R>
- Built relationships between AI safety community and policymakers
## Key Uncertainties and Strategic Cruxes
### Open Questions in Karnofsky's Framework
| Uncertainty | Stakes | Current Evidence |
|-------------|--------|------------------|
| AI timeline accuracy | Entire strategy timing | Mixed signals from recent capabilities |
| Technical tractability | Funding allocation efficiency | Early positive results but limited validation |
| Governance effectiveness | Policy investment value | Unclear institutional responsiveness |
| Anthropic success | Large investment justification | Strong early results but long-term unknown |
### Strategic Disagreements
**Within EA community:**
- Some argue for <EntityLink id="E505">longtermist focus beyond AI</EntityLink>
- Others prefer <R id="9315689a12534405">global health and development</R> emphasis
- Debate over concentration vs. diversification of funding
**With AI safety researchers:**
- Tension between <EntityLink id="E373">technical alignment focus</EntityLink> and governance approaches
- Disagreement over <EntityLink id="E217">open vs. closed development</EntityLink> funding
- Questions about how much weight to give the claimed safety benefits of capabilities research
## Public Communication and Influence
### Cold Takes Blog Impact
**Most influential posts:**
- <R id="1a20dfc897a0933a">"The Most Important Century"</R> series (>100k views)
- <R id="b80187df7777104d">"AI Timelines: Where the Arguments Stand"</R> (policy reference)
- <R id="62d29d310d596d2a">"Bio Anchors" explanation</R> (research methodology)
**Communication approach:**
- Transparent reasoning and uncertainty acknowledgment
- Accessible explanations of complex topics
- Regular updates as views evolve
- Direct engagement with critics and alternative viewpoints
### Media and Policy Engagement
| Platform | Reach | Impact |
|----------|-------|--------|
| Congressional testimony | Direct policy influence | Informed <EntityLink id="E248">AI regulation debate</EntityLink> |
| Academic conferences | Research community | Shaped university AI safety programs |
| <EntityLink id="E525">EA Global</EntityLink> talks | Movement direction | Influenced thousands of career decisions |
| Podcast interviews | Public understanding | Mainstream exposure for AI safety ideas |
## Current Priorities and Future Direction
### 2024-2026 Strategic Focus
**Immediate priorities:**
1. **Anthropic scaling**: Supporting responsible development of powerful systems
2. **Governance acceleration**: Policy research and implementation support
3. **Technical diversification**: Funding multiple alignment research approaches
4. **<EntityLink id="E171">International coordination</EntityLink>**: Supporting global AI safety cooperation
**Emerging areas:**
- Compute governance infrastructure
- <EntityLink id="E447">AI evaluation</EntityLink> methodologies
- <EntityLink id="E462">Corporate AI safety</EntityLink> practices
- <EntityLink id="E228">Prediction market</EntityLink> applications
### Long-term Vision
**Field development goals:**
- Self-sustaining research ecosystem independent of Coefficient Giving
- Government funding matching or exceeding philanthropic support
- Integration of safety research into mainstream AI development
- <EntityLink id="E470">International coordination mechanisms</EntityLink> for AI governance
## Critiques and Responses
### Common Criticisms
| Criticism | Karnofsky's Response | Counter-evidence |
|-----------|---------------------|------------------|
| Over-<EntityLink id="E68">concentration of power</EntityLink> | Funding diversification, transparency | Multiple other major funders emerging |
| Field capture risk | Portfolio approach, external evaluation | Continued criticism tolerated and addressed |
| Timeline overconfidence | Explicit uncertainty, range estimates | Regular updating based on new evidence |
| Governance skepticism | Measured expectations, multiple approaches | Early policy wins demonstrate tractability |
### Ongoing Debates
**Resource allocation:**
- Should Coefficient Giving fund more basic research vs. applied safety work?
- Optimal balance between technical and governance approaches?
- Geographic distribution of funding (US-centric concerns)
**Strategic approach:**
- Speed vs. care in scaling funding
- Competition vs. cooperation with AI labs
- Public advocacy vs. behind-the-scenes influence
## Sources & Resources
### Primary Sources
| Type | Source | Description |
|------|--------|-------------|
| Blog | <R id="859ff786a553505f">Cold Takes</R> | Karnofsky's strategic thinking and analysis |
| Organization | <R id="dd0cf0ff290cc68e">Coefficient Giving</R> | Grant database and reasoning |
| Research | <R id="f43f63419dbf2e6e">Bio Anchors Report</R> | Technical forecasting methodology |
| Testimony | <R id="52748445fab0e8cc">Congressional Hearing</R> | Policy positions and recommendations |
### Secondary Analysis
| Type | Source | Focus |
|------|--------|-------|
| Academic | <R id="bff2f5843023e85e">EA Research</R> | Critical analysis of funding decisions |
| Journalistic | <R id="21a4a585cdbf7dd3">MIT Technology Review</R> | External perspective on influence |
| Policy | <R id="0a17f30e99091ebf">RAND Corporation</R> | Government research on philanthropic AI funding |
### Related Profiles
- <EntityLink id="E91">Dario Amodei</EntityLink> - CEO of Anthropic, major funding recipient
- <EntityLink id="E220">Paul Christiano</EntityLink> - Technical alignment researcher, influenced Karnofsky's views
- <EntityLink id="E215">Nick Bostrom</EntityLink> - Author of "Superintelligence," early influence on Coefficient AI focus
- <EntityLink id="E114">Eliezer Yudkowsky</EntityLink> - MIRI founder, recipient of early Coefficient AI safety grants