Holden Karnofsky
Holden Karnofsky directed $300M+ in AI safety funding through Coefficient Giving (formerly Open Philanthropy), growing the field from ~20 to 400+ FTE researchers and developing influential frameworks such as the "Most Important Century" thesis (~15% chance of transformative AI by 2036, ~50% by 2060). His funding decisions include a $580M investment in Anthropic and the establishment of 15+ university AI safety programs.
Overview
Holden Karnofsky was co-CEO of Coefficient Giving (formerly Open Philanthropy), the most influential grantmaker in AI safety and existential risk. Through Coefficient, he directed over $300 million toward AI safety research and governance, transforming the field from a fringe academic interest into a well-funded discipline with hundreds of researchers. In 2025, he joined Anthropic.
His strategic thinking has shaped how the effective altruism community prioritizes AI risk, most notably through the "Most Important Century" thesis, which argues that we may live in the century that determines humanity's entire future trajectory because of transformative AI development.
| Funding Achievement | Amount | Impact |
|---|---|---|
| Total AI safety grants | $300M+ | Enabled field growth from ~20 to 400+ FTE researchers |
| Anthropic investment | $580M+ | Created major safety-focused AI lab |
| Field building grants | $50M+ | Established academic programs and research infrastructure |
Risk Assessment
| Risk Category | Karnofsky's Assessment | Evidence | Timeline |
|---|---|---|---|
| Transformative AI | ~15% by 2036, ~50% by 2060 (see the sketch after this table) | Bio anchors framework | This century |
| Existential importance | "Most important century" | AI could permanently shape humanity's trajectory | 2021-2100 |
| Tractability | High enough for top priority | Open Phil's largest focus area allocation | Current |
| Funding adequacy | Severely underfunded | Still seeking to grow field substantially | Ongoing |
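The two headline numbers pin down an implied distribution over arrival dates. A minimal sketch, assuming (purely for illustration; this is not a method Karnofsky or Coefficient publishes) that the 15%-by-2036 and 50%-by-2060 points are quantiles of a normal distribution over the arrival year:

```python
# Fit a normal distribution over the arrival year of transformative AI to two
# published quantiles: P(arrival <= 2036) = 0.15 and P(arrival <= 2060) = 0.50.
# Toy illustration only; a normal even assigns mass to years already past.
from statistics import NormalDist

median_year = 2060                    # 50th percentile
z_15 = NormalDist().inv_cdf(0.15)     # z-score of the 15th percentile (~ -1.04)
sigma = (2036 - median_year) / z_15   # spread that matches the 15% point

arrival = NormalDist(median_year, sigma)
for year in (2030, 2040, 2050, 2100):
    print(year, f"{arrival.cdf(year):.0%}")
```

This toy fit implies roughly 10% by 2030 and 96% by 2100; any serious treatment would use an asymmetric distribution and explicit model uncertainty, as the bio anchors report itself does.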
Career Evolution and Major Achievements
Early Career (2007-2014): Building Effective Altruism
| Period | Role | Key Achievements |
|---|---|---|
| 2007-2011 | Co-founder, GiveWell | Pioneered rigorous charity evaluation methodology |
| 2011-2014 | Launched GiveWell Labs (later Open Philanthropy, now Coefficient Giving) | Expanded beyond global health to cause prioritization |
| 2012-2014 | EA movement building | Helped establish effective altruism as global movement |
Transition to AI Focus (2014-2018)
Initial AI engagement:
- 2014: First significant AI safety grants through Open Philanthropy (now Coefficient Giving)
- 2016: Major funding to Center for Human-Compatible AI (CHAI)
- 2017: Early OpenAI funding (before pivot to for-profit)
- 2018: Increased conviction leading to AI as top priority
AI Safety Leadership (2018-2025)
Major funding decisions:
- 2021: $580M investment in Anthropic to create a safety-focused lab
- 2022: Establishment of AI safety university programs
- 2023: Expanded governance funding addressing AI regulation
Departure from Coefficient Giving and Impact (2023-2025)
Karnofsky's departure from Coefficient Giving in 2023 had significant ripple effects on the organization's AI safety work. Ajeya Cotra, who worked closely with Karnofsky for nine years, described losing "an incredibly engaged partner... someone who would read 30 pages of analysis and give you deep feedback." His departure removed a key source of intellectual partnership that had driven Coefficient's AI strategy, including the Bio Anchors framework and the organization's approach to technical AI safety grantmaking. Karnofsky subsequently joined Anthropic, where he continues working on AI safety from within a frontier lab.
Strategic Frameworks and Intellectual Contributions
The "Most Important Century" Thesis
Core argument structure:
| Component | Claim | Implication |
|---|---|---|
| Technology potential | Transformative AI possible this century | Could exceed agricultural/industrial revolution impacts |
| Speed differential | AI transition faster than historical precedents | Less time to adapt and coordinate |
| Leverage moment | Our actions now shape outcomes | Unlike past revolutions where individuals had little influence |
| Conclusion | This century uniquely important | Justifies enormous current investment |
Supporting evidence:
- Biological anchors methodology for AI timelines
- Historical analysis of technological transitions
- Economic modeling of AI impact potential
Bio Anchors Framework
Developed with Ajeya Cotra, this framework estimates AI development timelines by comparing the computation required for training to that performed by biological systems (a toy version of the affordability calculation follows the table):
| Anchor Type | Computation Estimate | Timeline Implication |
|---|---|---|
| Human brain | ≈10^15 FLOP/s | Medium-term (2030s-2040s) |
| Human lifetime | ≈10^24 FLOP | Longer-term (2040s-2050s) |
| Evolution | ≈10^41 FLOP | Much longer-term if needed |
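The framework's core move is mechanical: pick a compute requirement (one of the anchors above), then project when falling hardware prices and rising training budgets make that requirement affordable. A minimal sketch of that calculation, with placeholder numbers that are mine rather than Cotra's:

```python
# Illustrative bio-anchors-style affordability calculation. Every number here
# is a placeholder chosen for readability, not a value from Cotra's report:
# the report integrates over wide uncertainty in each of these parameters.

def first_affordable_year(
    required_flop: float = 1e30,         # hypothetical training requirement
    flop_per_dollar_2020: float = 1e17,  # assumed 2020 hardware price-performance
    price_halving_years: float = 2.5,    # compute cost halves this often
    max_spend_2020: float = 1e9,         # assumed largest 2020 training budget ($)
    spend_doubling_years: float = 3.0,   # budgets grow as stakes rise
) -> int:
    """Return the first year the training run fits within the budget."""
    for year in range(2020, 2101):
        t = year - 2020
        flop_per_dollar = flop_per_dollar_2020 * 2 ** (t / price_halving_years)
        budget = max_spend_2020 * 2 ** (t / spend_doubling_years)
        if flop_per_dollar * budget >= required_flop:
            return year
    return 2101  # not affordable this century under these assumptions

print(first_affordable_year())  # -> 2039 with the placeholder inputs
```

The report's actual conclusions come from integrating over uncertainty in every one of these parameters, not from any single point estimate like the one above.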
Coefficient Giving Funding Strategy
Portfolio Approach
| Research Area | Funding Focus | Key Recipients | Rationale |
|---|---|---|---|
| Technical alignment | $100M+ | Anthropic, Redwood Research | Direct work on making AI systems safer |
| AI governance | $80M+ | Center for Security and Emerging Technology (CSET), policy fellowships | Institutional responses to AI development |
| Field building | $50M+ | University programs, individual researchers | Growing research community |
| Compute governance | $20M+ | Compute monitoring research | Oversight of AI development resources |
Grantmaking Philosophy
Key principles:
- Hits-based giving: Expect most grants to have limited impact and a few to be transformative (see the sketch after this list)
- Long time horizons: Patient capital for 5-10 year research projects
- Active partnership: Strategic guidance beyond just funding
- Portfolio diversification: Multiple approaches given uncertainty
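A toy Monte Carlo sketch of the first principle (my construction under assumed numbers, not an Open Philanthropy model): if 95% of grants have modest impact and 5% occasionally land very large, the portfolio's value is dominated by the hits:

```python
# Hits-based giving as a heavy-tailed portfolio: most grants return little,
# a few return enormously. All distribution parameters are invented.
import random

random.seed(0)

def grant_impact() -> float:
    """Hypothetical impact-per-dollar multiple for one grant."""
    if random.random() < 0.05:
        return random.lognormvariate(4.0, 1.0)  # rare transformative hit
    return random.uniform(0.0, 0.5)             # modest or negligible impact

portfolio = [grant_impact() for _ in range(1_000)]
hits = [x for x in portfolio if x > 10]
print(f"mean impact multiple: {sum(portfolio) / len(portfolio):.1f}")
print(f"{len(hits)} hits supply {sum(hits) / sum(portfolio):.0%} of total impact")
```

In runs like this a few dozen grants supply nearly all of the portfolio's impact, which is why a hits-based funder judges the portfolio as a whole rather than expecting every grant to succeed.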
Notable funding decisions:
- Anthropic investment: $580M to create a safety-focused competitor to OpenAI
- MIRI funding: Early support for foundational AI alignment research
- Policy fellowships: Placing AI safety researchers in government positions
Current Views and Assessment
Karnofsky's AI Risk Timeline
Karnofsky's public statements and Coefficient Giving's priorities from 2023-2024 suggest views that combine timeline estimates derived from technical forecasting with strategic assessments of field readiness and policy urgency:
| Assessment | Estimate | Reasoning |
|---|---|---|
| Transformative AI (2022) | 15% by 2036, 50% by 2060 | Derived from the bio anchors framework developed with Ajeya Cotra, which estimates AI development timelines by comparing required computation to biological systems. This central estimate suggests transformative AI is more likely than not within this century, though substantial uncertainty remains around both shorter and longer timelines. |
| Field adequacy (2024) | Still severely underfunded | Despite directing over $300M toward AI safety and growing the field from approximately 20 to 400+ FTE researchers, Coefficient Giving continues aggressive hiring and grantmaking. This assessment reflects the belief that the scale of the challenge, ensuring the safe development of transformative AI, far exceeds the resources and talent currently devoted to it. |
| Policy urgency (2024) | High priority | Coefficient has significantly increased governance focus, funding policy research, placing fellows in government positions, and supporting regulatory frameworks. This shift recognizes that technical alignment work alone is insufficient—institutional and policy responses are critical to managing AI development trajectories and preventing racing dynamics. |
Evolution of Views (2020-2024)
| Year | Key Update | Reasoning |
|---|---|---|
| 2021 | "Most Important Century" series | Crystallized long-term strategic thinking |
| 2022 | Increased policy focus | Recognition of need for governance alongside technical work |
| 2023 | Anthropic model success | Validation of safety-focused lab approach |
| 2024 | Accelerated timelines concern | Shorter timelines than bio anchors suggested |
Influence on AI Safety Field
Field Growth Metrics
| Metric | 2015 | 2024 | Growth Factor |
|---|---|---|---|
| FTE researchers | ≈20 | ≈400 | 20x |
| Annual funding | <$5M | >$200M | 40x |
| University programs | 0 | 15+ | New category |
| Major organizations | 2-3 | 20+ | 7x |
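As a quick arithmetic check on the growth factors above (no external data, just the table's own numbers), the implied annualized growth rates are:

```python
# Annualized growth implied by the table: factor ** (1 / years) - 1.
years = 2024 - 2015
for label, factor in [("FTE researchers", 20), ("annual funding", 40)]:
    cagr = factor ** (1 / years) - 1
    print(f"{label}: {factor}x over {years} years ≈ {cagr:.0%} per year")
```

Roughly 40% and 51% per year respectively, sustained for nearly a decade.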
Institutional Impact
Academic legitimacy:
- Funding enabled AI safety courses at major universities
- Supported tenure-track positions focused on alignment research
- Created pathway for traditional CS researchers to enter field
Policy influence:
- Funded experts now advising US AI Safety Institute
- Supported research informing the EU AI Act
- Built relationships between AI safety community and policymakers
Key Uncertainties and Strategic Cruxes
Open Questions in Karnofsky's Framework
| Uncertainty | Stakes | Current Evidence |
|---|---|---|
| AI timeline accuracy | Entire strategy timing | Mixed signals from recent capabilities |
| Technical tractability | Funding allocation efficiency | Early positive results but limited validation |
| Governance effectiveness | Policy investment value | Unclear institutional responsiveness |
| Anthropic success | Large investment justification | Strong early results but long-term unknown |
Strategic Disagreements
Within EA community:
- Some argue for longtermist focus beyond AI
- Others prefer an emphasis on global health and development
- Debate over concentration vs. diversification of funding
With AI safety researchers:
- Tension between technical alignment focus and governance approaches
- Disagreement over open vs. closed development funding
- Questions about the claimed safety benefits of funding capabilities research
Public Communication and Influence
Cold Takes Blog Impact
Most influential posts:
- "The Most Important Century"↗🔗 web★★★☆☆Cold Takes"Most Important Century"A widely read series by Holden Karnofsky (Open Philanthropy) that helped mainstream longtermist and transformative AI risk arguments within the effective altruism and AI safety communities; available as blog posts, podcast, and PDF.Holden Karnofsky's 'Most Important Century' series argues that 21st-century AI development could trigger a productivity explosion leading to a galaxy-wide civilization far soone...ai-safetyexistential-riskai-timelineseffective-altruism+5Source ↗ series (>100k views)
- "AI Timelines: Where the Arguments Stand"↗🔗 web★★★☆☆Cold TakesShorter timelines than bio anchors suggestedWritten by Holden Karnofsky (Open Philanthropy co-CEO) as part of his 'Most Important Century' series; relevant for understanding how EA-adjacent funders and researchers reason about AI development pace and prioritization.Holden Karnofsky's Cold Takes post synthesizes arguments for why transformative AI may arrive sooner than the Biological Anchors framework suggested, reviewing both expert surve...ai-timelinescapabilitiesexistential-riskai-safety+3Source ↗ (policy reference)
- "Bio Anchors" explanation↗🔗 web★★★☆☆Cold TakesBio anchors frameworkWritten by Holden Karnofsky of Open Philanthropy, this post summarizes Ajeya Cotra's influential 'Biological Anchors' report, which has been widely cited in AI safety discussions about timelines and urgency of alignment work.A layperson-friendly summary of Ajeya Cotra's 'Biological Anchors' framework for forecasting when transformative AI (specifically, AI that can automate all human activities driv...ai-timelinescapabilitiescomputeexistential-risk+3Source ↗ (research methodology)
Communication approach:
- Transparent reasoning and uncertainty acknowledgment
- Accessible explanations of complex topics
- Regular updates as views evolve
- Direct engagement with critics and alternative viewpoints
Media and Policy Engagement
| Platform | Reach | Impact |
|---|---|---|
| Congressional testimony | Direct policy influence | Informed AI regulation debate |
| Academic conferences | Research community | Shaped university AI safety programs |
| EA Global talks | Movement direction | Influenced thousands of career decisions |
| Podcast interviews | Public understanding | Mainstream exposure for AI safety ideas |
Current Priorities and Future Direction
2024-2026 Strategic Focus
Immediate priorities:
- Anthropic scaling: Supporting responsible development of powerful systems
- Governance acceleration: Policy research and implementation support
- Technical diversification: Funding multiple alignment research approaches
- International coordination: Supporting global AI safety cooperation
Emerging areas:
- Compute governance infrastructure
- AI evaluation methodologies
- Corporate AI safety practices
- Prediction market applications
Long-term Vision
Field development goals:
- Self-sustaining research ecosystem independent of Coefficient Giving
- Government funding matching or exceeding philanthropic support
- Integration of safety research into mainstream AI development
- International coordination mechanisms for AI governance
Critiques and Responses
Common Criticisms
| Criticism | Karnofsky's Response | Counter-evidence |
|---|---|---|
| Over-concentration of power | Funding diversification, transparency | Multiple other major funders emerging |
| Field capture risk | Portfolio approach, external evaluation | Continued criticism tolerated and addressed |
| Timeline overconfidence | Explicit uncertainty, range estimates | Regular updating based on new evidence |
| Governance skepticism | Measured expectations, multiple approaches | Early policy wins demonstrate tractability |
Ongoing Debates
Resource allocation:
- Should Coefficient Giving fund more basic research vs. applied safety work?
- Optimal balance between technical and governance approaches?
- Geographic distribution of funding (US-centric concerns)
Strategic approach:
- Speed vs. care in scaling funding
- Competition vs. cooperation with AI labs
- Public advocacy vs. behind-the-scenes influence
Sources & Resources
Primary Sources
| Type | Source | Description |
|---|---|---|
| Blog | Cold Takes | Karnofsky's strategic thinking and analysis |
| Organization | Coefficient Giving grants database | Grant database and reasoning |
| Research | Bio Anchors report | Technical forecasting methodology |
| Testimony | Congressional hearings (Congress.gov) | Policy positions and recommendations |
Secondary Analysis
| Type | Source | Focus |
|---|---|---|
| Academic | EA Forum | Critical analysis of funding decisions |
| Journalistic | MIT Technology Review | External perspective on influence |
| Policy | RAND Corporation | Government research on philanthropic AI funding |
Related Profiles
- Dario Amodei - CEO of Anthropic, major funding recipient
- Paul Christiano - Technical alignment researcher, influenced Karnofsky's views
- Nick Bostrom - Author of "Superintelligence," an early influence on Coefficient's AI focus
- Eliezer Yudkowsky - MIRI founder, whose organization received early Coefficient AI safety grants