Updated 2026-01-29

Toby Ord

Person

Comprehensive biographical profile of Toby Ord documenting his 10% estimate of existential catastrophe from unaligned AI and his role in co-founding the effective altruism movement, with detailed tables on risk assessments, academic background, and influence metrics. While thorough on his contributions, the profile offers limited original analysis beyond summarizing publicly available information about his work and impact.

Role: Senior Research Fellow in Philosophy
Known For: The Precipice, existential risk quantification, effective altruism
Related People: Nick Bostrom

Overview

Toby Ord is a moral philosopher at Oxford University whose 2020 book "The Precipice" reshaped how the world thinks about existential risk. His quantitative estimates—a 10% chance of existential catastrophe from unaligned AI this century, and a 1-in-6 chance of existential catastrophe overall—became foundational anchors for AI risk discourse and resource-allocation decisions.

Ord's work bridges rigorous philosophical analysis with accessible public communication, making existential risk concepts mainstream while providing the intellectual foundation for the effective altruism movement. His framework for evaluating humanity's long-term potential continues to influence policy, research priorities, and AI safety governance.

Risk Assessment & Influence

| Risk Category | Ord's Estimate | Impact on Field | Key Insight |
|---|---|---|---|
| Unaligned AI | 10% this century | Became standard anchor | Largest single risk |
| Total x-risk | 1-in-6 this century | Galvanized movement | Unprecedented danger |
| Natural risks | <0.01% combined | Shifted focus | Technology dominates |
| Nuclear war | 0.1% extinction | Policy discussions | Civilization threat |

Field Impact: Ord's estimates influenced $10+ billion in philanthropic commitments and shaped government AI policies across multiple countries.

Academic Background & Credentials

| Institution | Role | Period | Achievement |
|---|---|---|---|
| Oxford University | Senior Research Fellow | 2009-present | Moral philosophy focus |
| Future of Humanity Institute | Research Fellow | 2009-2024 | X-risk specialization |
| Oxford | PhD Philosophy | 2001-2005 | Foundations in ethics |
| Giving What We Can | Co-founder | 2009 | EA movement launch |

Key Affiliations: Oxford Uehiro Centre, Centre for Effective Altruism, former Future of Humanity Institute

The Precipice: Landmark Contributions

Quantitative Risk Framework

In "The Precipice," Ord provided explicit probability estimates for various existential risks over the 21st century. These quantitative assessments became foundational anchors for the existential risk community, establishing a shared vocabulary for discussing comparative risk magnitudes. His estimates combined historical base rates, expert interviews, and philosophical reasoning about technological trajectory to arrive at what he explicitly frames as "rough and ready" estimates meant to guide prioritization rather than precise predictions.

Unaligned AI: 10% (1 in 10)

Ord identifies artificial intelligence as the single largest existential risk facing humanity this century. This estimate reflects the unprecedented potential for AI systems to exceed human capabilities across all domains, combined with fundamental difficulties in ensuring alignment between AI goals and human values. The probability is notably higher than other technological risks due to the rapid pace of AI development, the possibility of recursive self-improvement, and the one-shot nature of the control problem—once a sufficiently powerful misaligned AI is deployed, correction opportunities may be irreversibly lost.

Engineered Pandemics: 3.3% (1 in 30)

The second-largest risk stems from advances in biotechnology that could enable the deliberate creation of highly lethal and transmissible pathogens. Ord's estimate accounts for the dual-use nature of biological research, the diffusion of bioengineering knowledge and tools, and the potential for both state and non-state actors to develop bioweapons. Unlike natural pandemics, engineered pathogens could be designed specifically for lethality, contagiousness, and resistance to countermeasures, making them substantially more dangerous than naturally occurring diseases.

Nuclear War: 0.1% (1 in 1,000)

While nuclear conflict could cause civilization collapse and hundreds of millions of deaths, Ord assesses the probability of actual human extinction from nuclear war as relatively low. Nuclear winter effects, while catastrophic for civilization, would likely leave some surviving human populations. The estimate reflects both the continued existence of massive nuclear arsenals and the various near-miss incidents throughout the Cold War and after, balanced against the stabilizing effects of deterrence theory and the reduced tensions following the Soviet Union's collapse.

Natural Pandemics: 0.01% (1 in 10,000)

Based on historical precedent, naturally occurring pandemics pose minimal existential risk despite their potential for massive death tolls. No natural disease in human history has threatened complete extinction, and evolutionary pressures generally select against pathogens that kill all their hosts. While pandemics like COVID-19 demonstrate society's vulnerability to natural disease emergence, the historical base rate for extinction-level natural pandemics is extremely low compared to anthropogenic risks.

Climate Change: 0.1% (1 in 1,000)

Ord's climate change estimate reflects his assessment that while climate change represents a catastrophic risk to civilization with potential for hundreds of millions of deaths and massive ecological damage, the probability of it directly causing human extinction remains low. Humans are highly adaptable and geographically distributed, making complete extinction from climate effects unlikely even under worst-case warming scenarios. However, climate change could contribute to civilizational collapse or combine with other risks in dangerous ways.

Total, All Risks: 16.7% (1 in 6)

Ord's combined estimate aggregates all existential risks—both those listed explicitly and other potential threats—to arrive at approximately one-in-six odds that humanity faces an existential catastrophe this century. This aggregate figure accounts for potential interactions between risks and unknown threats not captured in individual categories. The estimate represents an unprecedented level of danger compared to any other century in human history, primarily driven by humanity's rapidly advancing technological capabilities outpacing our wisdom and coordination mechanisms for managing those technologies safely.
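Ord presents the 1-in-6 total as a holistic judgment rather than a mechanical combination of the listed figures. As a rough illustration only (not Ord's method), the per-risk estimates above can be combined two naive ways, a simple sum and an independence assumption:

```python
# Illustrative aggregation of the per-risk estimates from The Precipice (2020).
# Ord's own 1-in-6 total is a holistic judgment that also covers risks not
# listed here, so neither calculation below reproduces his reasoning.
risks = {
    "unaligned AI": 0.10,            # 1 in 10
    "engineered pandemics": 1 / 30,  # ~3.3%
    "nuclear war": 0.001,            # 1 in 1,000
    "natural pandemics": 0.0001,     # 1 in 10,000
    "climate change": 0.001,         # 1 in 1,000
}

# Naive upper bound: simple sum of the listed probabilities.
naive_sum = sum(risks.values())

# Assuming the risks are independent: 1 minus the chance of surviving all.
p_survive_all = 1.0
for p in risks.values():
    p_survive_all *= 1 - p
independent_total = 1 - p_survive_all

print(f"sum of listed risks:     {naive_sum:.3f}")         # ~0.135
print(f"independence assumption: {independent_total:.3f}")  # ~0.132
# Both fall short of Ord's ~0.167 (1 in 6), which also includes
# "other" and "unforeseen" anthropogenic risks.
```

Either way the listed risks alone come to roughly 13%, which makes visible how much of the 1-in-6 figure is carried by unlisted and unforeseen threats.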

Book Impact Metrics

| Metric | Achievement | Source |
|---|---|---|
| Sales | 50,000+ copies first year | Publisher data |
| Citations | 1,000+ academic papers | Google Scholar |
| Policy influence | Cited in 15+ government reports | Various government sources |
| Media coverage | 200+ interviews/articles | Media tracking |

AI Risk Analysis & Arguments

Why AI Poses Unique Existential Threat

| Risk Factor | Assessment | Evidence | Comparison to Other Risks |
|---|---|---|---|
| Power potential | Unprecedented | Could exceed human intelligence across all domains | Nuclear: limited scope |
| Development speed | Rapid acceleration | Recursive self-improvement possible | Climate: slow progression |
| Alignment difficulty | Extremely hard | Mesa-optimization, goal misgeneralization | Pandemics: natural selection |
| Irreversibility | One-shot problem | Hard to correct after deployment | Nuclear: recoverable |
| Control problem | Fundamental | No guaranteed off-switch | Bio: containable |

Key Arguments from The Precipice

The Intelligence Explosion Argument:

  • AI systems could rapidly improve their own intelligence
  • Human-level AI → Superhuman AI in short timeframe
  • Leaves little time for safety measures or course correction
  • Links to takeoff dynamics research

The Alignment Problem:

  • No guarantee AI goals align with human values
  • Instrumental convergence toward problematic behaviors
  • Technical alignment difficulty compounds over time

Philosophical Frameworks

Existential Risk Definition

Ord's three-part framework for existential catastrophes:

| Type | Definition | Examples | Prevention Priority |
|---|---|---|---|
| Extinction | Death of all humans | Asteroid impact, AI takeover | Highest |
| Unrecoverable collapse | Civilization permanently destroyed | Nuclear winter, climate collapse | High |
| Unrecoverable dystopia | Permanent lock-in of bad values | Totalitarian surveillance state | High |

Moral Case for Prioritization

Expected Value Framework:

  • Future contains potentially trillions of lives
  • Preventing extinction saves all future generations
  • Even small probability reductions have enormous expected value
  • Schematic justification: expected value ≈ (reduction in extinction probability) × (value of the future)
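The expected-value argument can be made concrete with placeholder numbers (the figures below are illustrative assumptions, not estimates Ord endorses):

```python
# Placeholder numbers for the expected-value argument; both inputs are
# illustrative assumptions, not figures from The Precipice.
future_lives = 1e12      # assumed number of potential future lives
risk_reduction = 1e-6    # assumed absolute cut in extinction probability

# Expected lives saved = (reduction in P(extinction)) x (lives at stake).
expected_lives_saved = risk_reduction * future_lives
print(f"{expected_lives_saved:,.0f}")  # 1,000,000
```

Even a one-in-a-million reduction in extinction risk saves a million lives in expectation under these assumptions, which is the arithmetic behind the claim that small probability reductions carry enormous expected value.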

Cross-Paradigm Agreement:

| Ethical Framework | Reason to Prioritize X-Risk | Strength |
|---|---|---|
| Consequentialism | Maximizes expected utility | Strong |
| Deontology | Duty to future generations | Moderate |
| Virtue ethics | Guardianship virtue | Moderate |
| Common sense | Save-lives principle | Strong |

Effective Altruism Foundations

Cause Prioritization Framework

Ord co-developed EA's core methodology:

| Criterion | Definition | AI Risk Assessment | Score (1-5) |
|---|---|---|---|
| Importance | Scale of problem | All of humanity's future | 5 |
| Tractability | Can we make progress? | Technical solutions possible | 3 |
| Neglectedness | Are others working on it? | Few researchers relative to stakes | 5 |
| Overall | Combined assessment | Top global priority | 4.3 |
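A minimal sketch of where the overall 4.3 figure comes from: it matches the simple mean of the three criterion scores. (The averaging rule is an assumption for illustration; EA cost-effectiveness work often multiplies the factors instead of averaging them.)

```python
# Reproducing the "Overall" figure above; the simple mean is an assumed
# scoring rule, not necessarily the exact method behind the table.
scores = {"importance": 5, "tractability": 3, "neglectedness": 5}
overall = sum(scores.values()) / len(scores)
print(round(overall, 1))  # 4.3
```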

Movement Building Impact

| Initiative | Role | Impact | Current Status |
|---|---|---|---|
| Giving What We Can | Co-founder (2009) | $200M+ pledged | Active |
| EA concepts | Intellectual foundation | 10,000+ career changes | Mainstream |
| X-risk prioritization | Philosophical justification | $1B+ funding shift | Growing |

Public Communication & Influence

Media & Outreach Strategy

High-Impact Platforms:

  • 80,000 Hours Podcast (1M+ downloads)
  • TED Talks and university lectures
  • New York Times, Guardian op-eds
  • Policy briefings for UK Parliament, UN

Communication Effectiveness

| Audience | Strategy | Success Metrics | Impact |
|---|---|---|---|
| General public | Accessible writing, analogies | Book sales, media coverage | High awareness |
| Academics | Rigorous arguments, citations | Academic adoption | Growing influence |
| Policymakers | Risk quantification, briefings | Policy mentions | Moderate uptake |
| Philanthropists | Expected value arguments | Funding redirected | Major success |

Policy & Governance Influence

Government Engagement

| Country | Engagement Type | Policy Impact | Status |
|---|---|---|---|
| United Kingdom | Parliamentary testimony | AI White Paper mentions | Ongoing |
| United States | Think tank briefings | NIST AI framework input | Active |
| European Union | Academic consultations | AI Act considerations | Limited |
| International | UN presentations | Global cooperation discussions | Early stage |

Key Policy Contributions

Risk Assessment Methodology:

  • Quantitative frameworks for government risk analysis
  • Long-term thinking in policy planning
  • Cross-generational ethical considerations

International Coordination:

  • Argues for global cooperation on AI governance
  • Emphasizes shared humanity stake in outcomes
  • Links to international governance discussions

Current Research & Focus Areas

Active Projects (2024-Present)

| Project | Description | Collaboration | Timeline |
|---|---|---|---|
| Long Reflection | Framework for humanity's values deliberation | Oxford philosophers | Ongoing |
| X-risk quantification | Refined probability estimates | GiveWell, researchers | 2024-2025 |
| Policy frameworks | Government risk assessment tools | RAND Corporation | Active |
| EA development | Next-generation prioritization | Coefficient Giving | Ongoing |

The Long Reflection Concept

Core Idea: Once humanity achieves existential security, we should take time to carefully determine our values and future direction.

Key Components:

  • Moral uncertainty and value learning
  • Democratic deliberation at global scale
  • Avoiding lock-in of current values
  • Ensuring transformative decisions are reversible

Intellectual Evolution & Timeline

| Period | Focus | Key Outputs | Impact |
|---|---|---|---|
| 2005-2009 | Global poverty | PhD thesis, early EA | Movement foundation |
| 2009-2015 | EA development | Giving What We Can, prioritization | Community building |
| 2015-2020 | X-risk research | Writing The Precipice | Risk quantification |
| 2020-present | Implementation | Policy work, refinement | Mainstream adoption |

Evolving Views on AI Risk

Early Position (2015): AI risk deserves serious attention alongside other x-risks

The Precipice (2020): AI risk is the single largest existential threat this century

Current (2024): Maintains 10% estimate while emphasizing governance solutions

Key Concepts & Contributions

Existential Security

Definition: State where humanity has reduced existential risks to negligible levels permanently.

Requirements:

  • Robust institutions
  • Widespread risk awareness
  • Technical safety solutions
  • International coordination

The Precipice Period

Definition: Current historical moment where humanity faces unprecedented risks from its own technology.

Characteristics:

  • First time extinction risk primarily human-caused
  • Technology development outpacing safety measures
  • Critical decisions about humanity's future

Value of the Future

Framework: Quantifying the moral importance of humanity's potential future.

Key Insights:

  • Billions of years of potential flourishing
  • Trillions of future lives at stake
  • Cosmic significance of Earth-originating intelligence

Criticisms & Limitations

Academic Reception

| Criticism | Source | Ord's Response | Resolution |
|---|---|---|---|
| Probability estimates | Some risk researchers | Acknowledges uncertainty, provides ranges | Ongoing debate |
| Pascal's mugging | Philosophy critics | Expected value still valid with bounds | Partial consensus |
| Tractability concerns | Policy experts | Emphasizes research value | Growing acceptance |
| Timeline precision | AI researchers | Focuses on order of magnitude | Reasonable approach |

Methodological Debates

Quantification Challenges:

  • Deep uncertainty about AI development
  • Model uncertainty in risk assessment
  • Potential for overconfidence in estimates

Response Strategy: Ord emphasizes these are "rough and ready" estimates meant to guide prioritization, not precise predictions.

Impact on AI Safety Field

Research Prioritization Influence

| Area | Before Ord | After Ord | Change |
|---|---|---|---|
| Funding | <$10M annually | $100M+ annually | 10x increase |
| Researchers | ≈50 full-time | 500+ full-time | 10x growth |
| Academic programs | Minimal | 15+ universities | New field |
| Policy attention | None | Multiple governments | Mainstream |

Conceptual Contributions

Risk Communication: Made abstract x-risks concrete and actionable through quantification.

Moral Urgency: Connected long-term thinking with immediate research priorities.

Resource Allocation: Provided framework for comparing AI safety to other cause areas.

Relationship to Key Debates

AGI Timeline Debates

Ord's Position: Timeline uncertainty doesn't reduce priority—risk × impact still enormous.

Scaling vs. Alternative Approaches

Ord's View: Focus on outcomes rather than methods—whatever reduces risk most effectively.

Open vs. Closed Development

Ord's Framework: Weigh democratization benefits against proliferation risks case-by-case.

Future Directions & Legacy

Ongoing Influence Areas

| Domain | Current Impact | Projected Growth | Key Mechanisms |
|---|---|---|---|
| Academic research | Growing citations | Continued expansion | University curricula |
| Policy development | Early adoption | Mainstream integration | Government frameworks |
| Philanthropic priorities | Major redirection | Sustained focus | EA movement |
| Public awareness | Significant increase | Broader recognition | Media coverage |

Long-term Legacy Potential

Conceptual Framework: The Precipice may become the defining text for 21st-century risk thinking.

Methodological Innovation: Quantitative x-risk assessment now standard practice.

Movement Building: Helped transform niche academic concern into global priority.

Sources & Resources

Primary Sources

| Source Type | Title | Access | Key Insights |
|---|---|---|---|
| Book | The Precipice: Existential Risk and the Future of Humanity | Public | Core arguments and estimates |
| Academic papers | Oxford research profile | Academic | Technical foundations |
| Interviews | 80,000 Hours podcasts | Free | Detailed explanations |

Key Organizations & Collaborations

| Organization | Relationship | Current Status | Focus Area |
|---|---|---|---|
| Future of Humanity Institute | Former Fellow | Closed 2024 | X-risk research |
| Centre for Effective Altruism | Advisor | Active | Movement coordination |
| Oxford Uehiro Centre | Fellow | Active | Practical ethics |
| Giving What We Can | Co-founder | Active | Effective giving |

Further Reading

| Category | Recommendations | Relevance |
|---|---|---|
| Follow-up books | Bostrom's Superintelligence, Russell's Human Compatible | Complementary AI risk analysis |
| Academic papers | Ord's published research on moral uncertainty | Technical foundations |
| Policy documents | Government reports citing Ord's work | Real-world applications |

References

1. RAND Corporation ★★★★☆
RAND Corporation is a nonprofit research organization providing objective analysis and policy recommendations across a wide range of topics including national security, technology, governance, and emerging risks. It produces influential studies on AI policy, cybersecurity, and global governance challenges. RAND's work is frequently cited by governments and policymakers worldwide.

2. NYT: The Information Wars (The New York Times) ★★★★☆
This is an archived snapshot of the New York Times homepage from March 2026, capturing live news coverage of geopolitical conflicts including U.S.-Iran military strikes, Middle East war, and domestic U.S. policy issues. It does not contain a specific article on "information wars" or AI safety topics.

3. Future of Humanity Institute ★★★★☆
The official website of the Future of Humanity Institute (FHI), an Oxford University research center that was foundational in establishing the fields of existential risk research and AI safety. FHI closed on 16 April 2024 after approximately two decades of influential work. The site now serves as an archived record of the institution's history, research agenda, and legacy.

4. Hachette Books
This is the homepage of Hachette Books, a major commercial book publisher and imprint of Hachette Book Group. It showcases a broad catalog of popular nonfiction and fiction titles. It has no direct relevance to AI safety, alignment, or related fields.

5. Government AI policies (UK Government) ★★★★☆
The Bletchley Declaration is a landmark multinational policy agreement signed at the AI Safety Summit 2023, committing participating nations to collaborative efforts on AI safety while enabling beneficial AI development. It represents one of the first major intergovernmental consensus documents explicitly addressing risks from frontier AI systems, including potential catastrophic and existential harms.

6. TED speaker profile: Toby Ord
This page returns a 404 error, indicating the TED speaker profile for Toby Ord (philosopher and author of "The Precipice") is no longer available at this URL. Toby Ord is a prominent figure in existential risk research and effective altruism, and his TED talks typically cover topics related to global catastrophic risks and humanity's long-term future.

7. Centre for Effective Altruism ★★★☆☆
The Centre for Effective Altruism (CEA) is the primary organizational hub for the effective altruism movement, supporting community growth through conferences, local group funding, online forums, grants, and communications. It operates key EA infrastructure including the EA Forum, EA Funds, and effectivealtruism.org. CEA's work is broadly relevant to AI safety as a significant portion of the EA community prioritizes existential and catastrophic AI risks.

8. 80,000 Hours Podcast (80,000 Hours) ★★★☆☆
A comprehensive podcast interview with philosopher Toby Ord discussing his book "The Precipice", covering quantitative estimates of existential risks from natural and anthropogenic sources including AI, bioweapons, nuclear war, and climate change. Ord argues humanity is at a uniquely dangerous "hinge of history" and outlines both the moral case for prioritizing existential risk reduction and practical policy recommendations.

9. Ord (2020): The Precipice (theprecipice.com)
Toby Ord's book argues that humanity faces unprecedented existential risks from nuclear weapons, engineered pandemics, and unaligned AI, and that reducing these risks is among the most pressing moral priorities of our time. It grounds longtermism in rigorous analysis of risk probabilities and makes the case that safeguarding humanity's long-run future is an urgent ethical imperative.

10. The Guardian ★★★☆☆
The Guardian is a major international news outlet. This URL points to its homepage, which at the time of capture was featuring breaking news about an Israeli-Iranian military conflict, including strikes on Iranian gas infrastructure and diplomatic tensions involving the US. It has no specific AI safety content.

11. GiveWell
GiveWell is a nonprofit charity evaluator that researches and recommends highly effective giving opportunities, focusing on evidence-based interventions with strong cost-effectiveness. It conducts in-depth analysis of charities to identify where donations can do the most good, primarily in global health and poverty. GiveWell exemplifies the effective altruism methodology of rigorous expected-value reasoning applied to philanthropic decisions.

12. Oxford Uehiro Centre (practicalethics.ox.ac.uk)
The Oxford Uehiro Centre for Practical Ethics applies rigorous philosophical analysis to ethical issues arising from science, technology, and public policy. It conducts research on topics including AI ethics, bioethics, existential risk, and moral philosophy. The Centre is a major academic hub connecting philosophical ethics to real-world policy and AI safety concerns.

13. United Nations ★★★★☆
The official homepage of the United Nations, the primary international intergovernmental organization focused on maintaining international peace, security, and cooperation. It serves as a hub for UN agencies, treaties, resolutions, and global governance initiatives across a wide range of issues including peace, human rights, and sustainable development.

14. UK Government Official Sources (gov.uk) ★★★★☆
The UK Government's official web portal (gov.uk) provides access to government policies, publications, and regulatory guidance across all departments. It serves as a central repository for official UK government positions, legislation, and policy documents relevant to AI governance and safety regulation.

15. Oxford research profile (practicalethics.ox.ac.uk)
This is the official Oxford Uehiro Centre for Practical Ethics profile page for Toby Ord, philosopher and author of "The Precipice." Ord is a leading figure in existential risk research and the effective altruism movement, with scholarly work focused on global catastrophic risks, longtermism, and humanity's long-term future.

16. UK AI regulatory framework (UK Government) ★★★★☆
The UK government's foundational AI regulatory framework document outlining a principles-based, pro-innovation approach to AI governance. It establishes five cross-sectoral principles—safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress—to guide responsible AI development and deployment without imposing rigid legislation initially.

17. Giving What We Can ★★★☆☆
Giving What We Can is an effective altruism organization that encourages individuals to pledge a portion of their income (typically 10%) to the most cost-effective charities. It provides resources, community support, and research to help donors maximize their positive impact. The organization is associated with the broader EA movement and encourages evidence-based giving.

18. Open Philanthropy: Global Catastrophic Risks ★★★★☆
This page describes Open Philanthropy's grantmaking focus on global catastrophic risks, including AI safety, biosecurity, and other threats capable of causing civilizational-scale harm. It outlines their strategic priorities, funding philosophy, and key grants in these areas. Open Philanthropy is one of the largest funders in the AI safety and existential risk space.

19. Open Philanthropy grants database ★★★★☆
Open Philanthropy is a major philanthropic organization that funds work across global health, AI safety, biosecurity, and other cause areas. Their grants database provides transparency into which organizations and research directions receive funding. They are one of the largest funders of AI safety and existential risk research.

20. UK Parliament
The official website of the United Kingdom Parliament, providing access to legislative proceedings, parliamentary debates, committee reports, and policy documents. It serves as the primary source for UK government legislation and parliamentary activity, including matters related to AI governance and technology regulation.

21. Google Scholar ★★★★☆
Google Scholar is a freely accessible academic search engine that indexes scholarly literature across disciplines, including AI safety, alignment, and related technical fields. It provides access to papers, citations, author profiles, and citation metrics. It serves as a primary discovery tool for finding peer-reviewed research relevant to AI safety.

Structured Data

  • Employed By: Future of Humanity Institute (as of Mar 2026)
  • Role / Title: Senior Research Fellow in Philosophy (as of Mar 2026)
  • Birth Year: 1979

All Facts

People

| Property | Value | As Of |
|---|---|---|
| Role / Title | Senior Research Fellow in Philosophy | Mar 2026 |
| Employed By | Future of Humanity Institute | Mar 2026 |

Biographical

| Property | Value | As Of |
|---|---|---|
| Notable For | The Precipice; existential risk quantification; effective altruism | Mar 2026 |
| Wikipedia | https://en.wikipedia.org/wiki/Toby_Ord | |
| Birth Year | 1979 | |
| Education | University of Melbourne | |

General

| Property | Value | As Of |
|---|---|---|
| Website | https://www.tobyord.com | |

Career History

| Organization | Title | Start | End |
|---|---|---|---|
| Future of Humanity Institute | Senior Research Fellow | 2014 | 2024-04 |
| jRQ6xPlVdzIk | Senior Research Fellow | 2024-04 | |

Related Wiki Pages

Top Related Pages

Other

Nick Bostrom · Holden Karnofsky · Nick Beckstead · Will MacAskill

Risks

AI Value Lock-in · Goal Misgeneralization · Instrumental Convergence · AI Proliferation · Mesa-Optimization

Concepts

Existential Risk from AI · Earning To Give · Ea Longtermist Wins Losses · Longtermism Credibility After Ftx

Analysis

Relative Longtermist Value Comparisons

Key Debates

AI Governance and Policy · The Case For AI Existential Risk · Open vs Closed Source AI · Is Scaling All You Need?

Organizations

Coefficient Giving · 80,000 Hours