## Overview
Toby Ord is a moral philosopher at Oxford University whose 2020 book *The Precipice* fundamentally shaped how the world thinks about existential risk. His quantitative estimates—a 10% chance of AI-caused existential catastrophe this century and a 1-in-6 total existential risk—became foundational anchors for AI risk discourse and resource-allocation decisions.
Ord's work bridges rigorous philosophical analysis with accessible public communication, making existential risk concepts mainstream while providing the intellectual foundation for the effective altruism movement. His framework for evaluating humanity's long-term potential continues to influence policy, research priorities, and AI safety governance.
## Risk Assessment & Influence
| Risk Category | Ord's Estimate | Impact on Field | Key Insight |
|---|---|---|---|
| AI Extinction | 10% this century | Became standard anchor | Largest single risk |
| Total X-Risk | 1-in-6 this century | Galvanized movement | Unprecedented danger |
| Natural Risks | <0.01% combined | Shifted focus | Technology dominates |
| Nuclear War | 0.1% extinction | Policy discussions | Civilization threat |
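To see how the per-category figures above relate to the 1-in-6 total, one can combine them under a simplifying independence assumption. This is an illustrative sketch only, not Ord's actual aggregation method (his total also reflects unlisted risks such as engineered pandemics, plus expert judgment):

```python
# Illustrative only: combine per-category risk estimates assuming the
# events are independent. Not Ord's method; figures are from the table.
risks = {
    "ai": 0.10,        # 10% this century
    "nuclear": 0.001,  # 0.1% extinction risk
    "natural": 0.0001, # <0.01% combined
}

def combined_risk(probs):
    """P(at least one catastrophe) under independence: 1 - prod(1 - p)."""
    survival = 1.0
    for p in probs:
        survival *= 1.0 - p
    return 1.0 - survival

total = combined_risk(risks.values())
print(f"Combined listed risks: {total:.3f}")  # ~0.101
```

The listed categories alone sum to roughly 10%, noticeably below the 1-in-6 (~16.7%) total—consistent with Ord attributing substantial residual risk to other anthropogenic threats not shown in this table.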
Field Impact: Ord's estimates influenced $10+ billion in philanthropic commitments (e.g., Open Philanthropy's grantmaking on global catastrophic risks, including AI safety and biosecurity) and shaped government AI policies across multiple countries (e.g., the Bletchley Declaration, signed at the UK-hosted AI Safety Summit in 2023).