Longterm Wiki

Toby Ord

toby-ord (E355)
Path: /knowledge-base/people/toby-ord/
Page Metadata
{
  "id": "toby-ord",
  "numericId": null,
  "path": "/knowledge-base/people/toby-ord/",
  "filePath": "knowledge-base/people/toby-ord.mdx",
  "title": "Toby Ord",
  "quality": 41,
  "importance": 23,
  "contentFormat": "article",
  "tractability": null,
  "neglectedness": null,
  "uncertainty": null,
  "causalLevel": null,
  "lastUpdated": "2026-01-29",
  "llmSummary": "Comprehensive biographical profile of Toby Ord documenting his 10% AI extinction estimate and role founding effective altruism, with detailed tables on risk assessments, academic background, and influence metrics. While thorough on his contributions, provides limited original analysis beyond summarizing publicly available information about his work and impact.",
  "structuredSummary": null,
  "description": "Oxford philosopher and author of 'The Precipice' who provided foundational quantitative estimates for existential risks (10% for AI, 1/6 total this century) and philosophical frameworks for long-term thinking that shaped modern AI risk discourse.",
  "ratings": {
    "novelty": 2,
    "rigor": 4.5,
    "actionability": 2,
    "completeness": 6
  },
  "category": "people",
  "subcategory": null,
  "clusters": [
    "community",
    "ai-safety",
    "governance"
  ],
  "metrics": {
    "wordCount": 2444,
    "tableCount": 19,
    "diagramCount": 0,
    "internalLinks": 44,
    "externalLinks": 0,
    "footnoteCount": 0,
    "bulletRatio": 0.16,
    "sectionCount": 47,
    "hasOverview": true,
    "structuralScore": 10
  },
  "suggestedQuality": 67,
  "updateFrequency": 45,
  "evergreen": true,
  "wordCount": 2444,
  "unconvertedLinks": [],
  "unconvertedLinkCount": 0,
  "convertedLinkCount": 25,
  "backlinkCount": 2,
  "redundancy": {
    "maxSimilarity": 15,
    "similarPages": [
      {
        "id": "holden-karnofsky",
        "title": "Holden Karnofsky",
        "path": "/knowledge-base/people/holden-karnofsky/",
        "similarity": 15
      },
      {
        "id": "nick-bostrom",
        "title": "Nick Bostrom",
        "path": "/knowledge-base/people/nick-bostrom/",
        "similarity": 14
      },
      {
        "id": "ai-impacts",
        "title": "AI Impacts",
        "path": "/knowledge-base/organizations/ai-impacts/",
        "similarity": 13
      },
      {
        "id": "dario-amodei",
        "title": "Dario Amodei",
        "path": "/knowledge-base/people/dario-amodei/",
        "similarity": 13
      },
      {
        "id": "geoffrey-hinton",
        "title": "Geoffrey Hinton",
        "path": "/knowledge-base/people/geoffrey-hinton/",
        "similarity": 13
      }
    ]
  }
}
Entity Data
{
  "id": "toby-ord",
  "type": "person",
  "title": "Toby Ord",
  "description": "Toby Ord is a philosopher at Oxford University and author of \"The Precipice: Existential Risk and the Future of Humanity\" (2020), a comprehensive treatment of existential risks that helped establish AI as a central concern for humanity's long-term future. His work has been influential in shaping how policymakers and researchers think about catastrophic risks.\n\nIn \"The Precipice,\" Ord provides quantitative estimates of existential risk from various sources, with AI among the highest. He argues that we are living through a critical period in human history where our technological capabilities have outpaced our wisdom, and that reducing existential risk should be a global priority. His estimates - placing the probability of existential catastrophe this century at about 1 in 6, with AI being a major contributor - are frequently cited in discussions of AI risk.\n\nOrd is also a founding figure in the effective altruism movement. In 2009, he co-founded Giving What We Can, which encourages people to donate significant portions of their income to effective charities. His transition from focusing on global health and development to prioritizing existential risks mirrors a broader shift in the EA movement. Through his writing, teaching, and advisory roles (including advising the UK government on AI), Ord has helped translate abstract concerns about humanity's future into concrete policy discussions.\n",
  "tags": [
    "x-risk",
    "effective-altruism",
    "longtermism",
    "ai-safety",
    "moral-philosophy",
    "risk-assessment",
    "future-generations"
  ],
  "relatedEntries": [
    {
      "id": "nick-bostrom",
      "type": "researcher"
    },
    {
      "id": "holden-karnofsky",
      "type": "researcher"
    }
  ],
  "sources": [
    {
      "title": "Toby Ord's Website",
      "url": "https://www.tobyord.com"
    },
    {
      "title": "The Precipice",
      "url": "https://theprecipice.com/"
    },
    {
      "title": "80,000 Hours Podcast",
      "url": "https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/"
    },
    {
      "title": "Giving What We Can",
      "url": "https://www.givingwhatwecan.org/"
    }
  ],
  "lastUpdated": "2025-12",
  "website": "https://www.tobyord.com",
  "customFields": []
}
Canonical Facts (0)

No facts for this entity

External Links
{
  "eaForum": "https://forum.effectivealtruism.org/topics/toby-ord",
  "wikidata": "https://www.wikidata.org/wiki/Q7811863"
}
Backlinks (2)
| id | title | type | relationship |
|----|-------|------|--------------|
| holden-karnofsky | Holden Karnofsky | researcher | |
| nick-bostrom | Nick Bostrom | researcher | |
Frontmatter
{
  "title": "Toby Ord",
  "description": "Oxford philosopher and author of 'The Precipice' who provided foundational quantitative estimates for existential risks (10% for AI, 1/6 total this century) and philosophical frameworks for long-term thinking that shaped modern AI risk discourse.",
  "sidebar": {
    "order": 9
  },
  "quality": 41,
  "llmSummary": "Comprehensive biographical profile of Toby Ord documenting his 10% AI extinction estimate and role founding effective altruism, with detailed tables on risk assessments, academic background, and influence metrics. While thorough on his contributions, provides limited original analysis beyond summarizing publicly available information about his work and impact.",
  "lastEdited": "2026-01-29",
  "importance": 23,
  "update_frequency": 45,
  "ratings": {
    "novelty": 2,
    "rigor": 4.5,
    "actionability": 2,
    "completeness": 6
  },
  "clusters": [
    "community",
    "ai-safety",
    "governance"
  ],
  "entityType": "person"
}
Raw MDX Source
---
title: "Toby Ord"
description: "Oxford philosopher and author of 'The Precipice' who provided foundational quantitative estimates for existential risks (10% for AI, 1/6 total this century) and philosophical frameworks for long-term thinking that shaped modern AI risk discourse."
sidebar:
  order: 9
quality: 41
llmSummary: "Comprehensive biographical profile of Toby Ord documenting his 10% AI extinction estimate and role founding effective altruism, with detailed tables on risk assessments, academic background, and influence metrics. While thorough on his contributions, provides limited original analysis beyond summarizing publicly available information about his work and impact."
lastEdited: "2026-01-29"
importance: 23
update_frequency: 45
ratings:
  novelty: 2
  rigor: 4.5
  actionability: 2
  completeness: 6
clusters: ["community", "ai-safety", "governance"]
entityType: person
---
import {DataInfoBox, R, EntityLink, DataExternalLinks} from '@components/wiki';

<DataExternalLinks pageId="toby-ord" />

<DataInfoBox entityId="E355" />

## Overview

Toby Ord is a moral philosopher at Oxford University whose 2020 book "The Precipice" fundamentally shaped how the world thinks about existential risks. His quantitative estimates—10% chance of AI-caused extinction this century and 1-in-6 overall existential risk—became foundational anchors for <EntityLink id="__index__/ai-transition-model">AI risk discourse</EntityLink> and resource allocation decisions.

Ord's work bridges rigorous philosophical analysis with accessible public communication, making existential risk concepts mainstream while providing the intellectual foundation for the effective altruism movement. His framework for evaluating humanity's long-term potential continues to influence policy, research priorities, and AI safety governance.

## Risk Assessment & Influence

| Risk Category | Ord's Estimate | Impact on Field | Key Insight |
|---------------|---------------|------------------|-------------|
| AI Extinction | 10% this century | Became standard anchor | Largest single risk |
| Total X-Risk | 1-in-6 this century | Galvanized movement | Unprecedented danger |
| Natural Risks | &lt;0.01% combined | Shifted focus | Technology dominates |
| Nuclear War | 0.1% extinction | Policy discussions | Civilization threat |

**Field Impact**: Ord's estimates influenced <R id="d849ef0dfbc68a42">\$10+ billion in philanthropic commitments</R> and shaped <R id="243fa770c13b0c44">government AI policies</R> across multiple countries.

## Academic Background & Credentials

| Institution | Role | Period | Achievement |
|-------------|------|--------|-------------|
| Oxford University | Senior Research Fellow | 2009-present | Moral philosophy focus |
| <EntityLink id="E140">Future of Humanity Institute</EntityLink> | Research Fellow | 2009-2024 | X-risk specialization |
| Oxford | PhD Philosophy | 2001-2005 | Foundations in ethics |
| Giving What We Can | Co-founder | 2009 | EA movement launch |

**Key Affiliations**: <R id="934d9ccabff6be13">Oxford Uehiro Centre</R>, <R id="2c28f000108e9228"><EntityLink id="E517">Centre for Effective Altruism</EntityLink></R>, and the <R id="1593095c92d34ed8">Future of Humanity Institute</R> (closed 2024)

## The Precipice: Landmark Contributions

### Quantitative Risk Framework

In "The Precipice," Ord provided explicit probability estimates for various existential risks over the 21st century. These quantitative assessments became foundational anchors for the existential risk community, establishing a shared vocabulary for discussing comparative risk magnitudes. His estimates combined historical base rates, expert interviews, and philosophical reasoning about technological trajectory to arrive at what he explicitly frames as "rough and ready" estimates meant to guide prioritization rather than precise predictions.

| Risk Category | Estimate | Reasoning |
|---------------|----------|-----------|
| Unaligned AI | 10% (1 in 10) | Ord identifies artificial intelligence as the single largest existential risk facing humanity this century. This estimate reflects the unprecedented potential for AI systems to exceed human capabilities across all domains, combined with fundamental difficulties in ensuring alignment between AI goals and human values. The probability is notably higher than other technological risks due to the rapid pace of AI development, the possibility of recursive self-improvement, and the one-shot nature of the control problem—once a sufficiently powerful misaligned AI is deployed, correction opportunities may be irreversibly lost. |
| Engineered Pandemics | 3.3% (1 in 30) | The second-largest risk stems from advances in biotechnology that could enable the deliberate creation of highly lethal and transmissible pathogens. Ord's estimate accounts for the dual-use nature of biological research, the diffusion of bioengineering knowledge and tools, and the potential for both state and non-state actors to develop <EntityLink id="E42">bioweapons</EntityLink>. Unlike natural pandemics, engineered pathogens could be designed specifically for lethality, contagiousness, and resistance to countermeasures, making them substantially more dangerous than naturally occurring diseases. |
| Nuclear War | 0.1% (1 in 1,000) | While nuclear conflict could cause civilization collapse and hundreds of millions of deaths, Ord assesses the probability of actual human extinction from nuclear war as relatively low. Nuclear winter effects, while catastrophic for civilization, would likely leave some surviving human populations. The estimate reflects both the continued existence of massive nuclear arsenals and the various near-miss incidents throughout the Cold War and after, balanced against the stabilizing effects of deterrence theory and the reduced tensions following the Soviet Union's collapse. |
| Natural Pandemics | 0.01% (1 in 10,000) | Based on historical precedent, naturally occurring pandemics pose minimal existential risk despite their potential for massive death tolls. No natural disease in human history has threatened complete extinction, and evolutionary pressures generally select against pathogens that kill all their hosts. While pandemics like COVID-19 demonstrate society's vulnerability to natural disease emergence, the historical base rate for extinction-level natural pandemics is extremely low compared to anthropogenic risks. |
| Climate Change | 0.1% (1 in 1,000) | Ord's climate change estimate reflects his assessment that while climate change represents a catastrophic risk to civilization with potential for hundreds of millions of deaths and massive ecological damage, the probability of it directly causing human extinction remains low. Humans are highly adaptable and geographically distributed, making complete extinction from climate effects unlikely even under worst-case warming scenarios. However, climate change could contribute to civilizational collapse or combine with other risks in dangerous ways. |
| Total All Risks | 16.7% (1 in 6) | Ord's combined estimate aggregates all existential risks—both those listed explicitly and other potential threats—to arrive at approximately one-in-six odds that humanity faces an <EntityLink id="E130">existential catastrophe</EntityLink> this century. This aggregate figure accounts for potential interactions between risks and unknown threats not captured in individual categories. The estimate represents an unprecedented level of danger compared to any other century in human history, primarily driven by humanity's rapidly advancing technological capabilities outpacing our wisdom and coordination mechanisms for managing those technologies safely. |
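
For readers who want to sanity-check the arithmetic, the sketch below combines the itemized estimates under a naive independence assumption and compares the result with the stated 1-in-6 total. This is not Ord's own aggregation method; his figure also covers unlisted risks and interactions between risks.

```python
# Naive sanity check of the itemized estimates above. This is NOT Ord's
# aggregation method: his 1-in-6 total also includes unlisted risks and
# interactions between risks, which is why it exceeds this figure.

listed_risks = {
    "unaligned_ai": 1 / 10,
    "engineered_pandemics": 1 / 30,
    "nuclear_war": 1 / 1_000,
    "natural_pandemics": 1 / 10_000,
    "climate_change": 1 / 1_000,
}

# Assuming independence: P(at least one catastrophe) = 1 - prod(1 - p_i)
p_no_catastrophe = 1.0
for p in listed_risks.values():
    p_no_catastrophe *= 1 - p
naive_total = 1 - p_no_catastrophe

print(f"Naive total from listed risks: {naive_total:.3f}")  # ~0.132
print(f"Stated aggregate (1 in 6):     {1 / 6:.3f}")        # ~0.167
```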

### Book Impact Metrics

| Metric | Achievement | Source |
|--------|-------------|--------|
| Sales | 50,000+ copies first year | <R id="1ab7b6e1e079c90d">Publisher data</R> |
| Citations | 1,000+ academic papers | <R id="fb3ace4d4c5a824a">Google Scholar</R> |
| Policy Influence | Cited in 15+ government reports | <R id="a604eb8a03efa82d">Various gov sources</R> |
| Media Coverage | 200+ interviews/articles | Media tracking |

## AI Risk Analysis & Arguments

### Why AI Poses Unique Existential Threat

| Risk Factor | Assessment | Evidence | Comparison to Other Risks |
|-------------|------------|----------|---------------------------|
| **Power Potential** | Unprecedented | Could exceed human intelligence across all domains | Nuclear: Limited scope |
| **Development Speed** | Rapid acceleration | Recursive self-improvement possible | Climate: Slow progression |
| **Alignment Difficulty** | Extremely hard | <EntityLink id="E197">Mesa-optimization</EntityLink>, <EntityLink id="E151">goal misgeneralization</EntityLink> | Pandemics: Natural selection |
| **<EntityLink id="E179">Irreversibility</EntityLink>** | One-shot problem | Hard to correct after deployment | Nuclear: Recoverable |
| **Control Problem** | Fundamental | No guaranteed off-switch | Bio: Containable |

### Key Arguments from The Precipice

**The Intelligence Explosion Argument**:
- AI systems could rapidly improve their own intelligence
- Human-level AI → Superhuman AI in short timeframe
- Leaves little time for safety measures or course correction
- Links to <EntityLink id="__index__/ai-transition-model">takeoff dynamics</EntityLink> research

**The Alignment Problem**:
- No guarantee AI goals align with human values
- <EntityLink id="E168">Instrumental convergence</EntityLink> toward problematic behaviors
- Technical alignment difficulty compounds over time

## Philosophical Frameworks

### Existential Risk Definition

Ord's three-part framework for existential catastrophes:

| Type | Definition | Examples | Prevention Priority |
|------|------------|----------|-------------------|
| **Extinction** | Death of all humans | Asteroid impact, AI takeover | Highest |
| **Unrecoverable Collapse** | Civilization permanently destroyed | Nuclear winter, climate collapse | High |
| **Unrecoverable Dystopia** | Permanent <EntityLink id="E189">lock-in</EntityLink> of bad values | Totalitarian surveillance state | High |

### Moral Case for Prioritization

**Expected Value Framework**:
- Future contains potentially trillions of lives
- Preventing extinction saves all future generations
- Even small probability reductions have enormous expected value
- Mathematical justification: priority ∝ (reduction in extinction probability) × (value of the future), as sketched below
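
A minimal worked example of this framework follows; the population and probability figures are illustrative placeholders, not numbers from The Precipice.

```python
# Illustrative expected-value calculation for x-risk reduction.
# The inputs are placeholder figures, not estimates from The Precipice.

future_lives = 1e15       # assumed number of potential future people
risk_reduction = 0.001    # an intervention cutting extinction risk by 0.1 pp

# Expected future lives preserved = reduction in extinction probability
# multiplied by the size of the future that would otherwise be lost.
expected_lives_saved = risk_reduction * future_lives
print(f"Expected future lives preserved: {expected_lives_saved:.2e}")
# => 1.00e+12 -- a tiny probability shift yields an enormous expected payoff,
# which is the core of the prioritization argument.
```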

**Cross-Paradigm Agreement**:

| Ethical Framework | Reason to Prioritize X-Risk | Strength |
|-------------------|----------------------------|----------|
| Consequentialism | Maximizes expected utility | Strong |
| Deontology | Duty to future generations | Moderate |
| Virtue Ethics | Guardianship virtue | Moderate |
| Common-Sense | Save lives principle | Strong |

## Effective Altruism Foundations

### Cause Prioritization Framework

Ord co-developed EA's core methodology:

| Criterion | Definition | AI Risk Assessment | Score (1-5) |
|-----------|------------|-------------------|-------------|
| **Importance** | Scale of problem | All of humanity's future | 5 |
| **Tractability** | Can we make progress? | Technical solutions possible | 3 |
| **Neglectedness** | Others working on it? | Few researchers relative to stakes | 5 |
| **Overall** | Combined assessment | Top global priority | 4.3 |
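
The overall figure in the table is consistent with an unweighted average of the three criteria; the sketch below makes that scoring step explicit. The aggregation rule is an assumption here, since the combination method is not specified.

```python
# Sketch of the combined score in the table above, assuming "Overall" is an
# unweighted arithmetic mean of the three criteria (an assumption -- the
# aggregation rule is not specified; EA practice also multiplies factors).

scores = {"importance": 5, "tractability": 3, "neglectedness": 5}

overall = sum(scores.values()) / len(scores)
print(f"Overall: {overall:.1f}")  # => 4.3, matching the table
```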

### Movement Building Impact

| Initiative | Role | Impact | Current Status |
|------------|------|---------|---------------|
| Giving What We Can | Co-founder (2009) | \$200M+ pledged | <R id="d4cb3723d876ac41">Active</R> |
| EA Concepts | Intellectual foundation | 10,000+ career changes | Mainstream |
| X-Risk Prioritization | Philosophical justification | \$1B+ funding shift | Growing |

## Public Communication & Influence

### Media & Outreach Strategy

**High-Impact Platforms**:
- <R id="35cc64aad5b46421"><EntityLink id="E510">80,000 Hours</EntityLink> Podcast</R> (1M+ downloads)
- <R id="2b12c7d3a3f2535a">TED Talks</R> and university lectures
- <R id="10b6b18f32d34529">New York Times</R>, <R id="52631d4deab5e2a2">Guardian</R> op-eds
- Policy briefings for <R id="e2df69ffe2bf04df">UK Parliament</R>, <R id="976d31fadb331ab8">UN</R>

### Communication Effectiveness

| Audience | Strategy | Success Metrics | Impact |
|----------|----------|-----------------|--------|
| General Public | Accessible writing, analogies | Book sales, media coverage | High awareness |
| Academics | Rigorous arguments, citations | Academic adoption | Growing influence |
| Policymakers | Risk quantification, briefings | Policy mentions | Moderate uptake |
| Philanthropists | Expected value arguments | Funding redirected | Major success |

## Policy & Governance Influence

### Government Engagement

| Country | Engagement Type | Policy Impact | Status |
|---------|----------------|---------------|--------|
| **United Kingdom** | Parliamentary testimony | <R id="c356e299bf784464">AI White Paper</R> mentions | Ongoing |
| **United States** | Think tank briefings | NIST AI framework input | Active |
| **European Union** | Academic consultations | AI Act considerations | Limited |
| **International** | UN presentations | Global cooperation discussions | Early stage |

### Key Policy Contributions

**Risk Assessment Methodology**:
- Quantitative frameworks for government risk analysis
- Long-term thinking in policy planning
- Cross-generational ethical considerations

**<EntityLink id="E171">International Coordination</EntityLink>**:
- Argues for global cooperation on <EntityLink id="E608">AI governance</EntityLink>
- Emphasizes shared humanity stake in outcomes
- Links to international governance discussions

## Current Research & Focus Areas

### Active Projects (2024-Present)

| Project | Description | Collaboration | Timeline |
|---------|-------------|---------------|----------|
| **Long Reflection** | Framework for humanity's values deliberation | Oxford philosophers | Ongoing |
| **X-Risk Quantification** | Refined probability estimates | <R id="9315689a12534405">GiveWell</R>, researchers | 2024-2025 |
| **Policy Frameworks** | Government risk assessment tools | <R id="0a17f30e99091ebf">RAND Corporation</R> | Active |
| **EA Development** | Next-generation prioritization | <R id="dd0cf0ff290cc68e"><EntityLink id="E552">Coefficient Giving</EntityLink></R> | Ongoing |

### The Long Reflection Concept

**Core Idea**: Once humanity achieves existential security, we should take time to carefully determine our values and future direction.

**Key Components**:
- Moral uncertainty and value learning
- Democratic deliberation at global scale
- Avoiding lock-in of current values
- Ensuring transformative decisions are reversible

## Intellectual Evolution & Timeline

| Period | Focus | Key Outputs | Impact |
|--------|-------|-------------|--------|
| **2005-2009** | Global poverty | PhD thesis, early EA | Movement foundation |
| **2009-2015** | EA development | Giving What We Can, prioritization | Community building |
| **2015-2020** | X-risk research | The Precipice writing | Risk quantification |
| **2020-Present** | Implementation | Policy work, refinement | Mainstream adoption |

### Evolving Views on AI Risk

**Early Position (2015)**: AI risk deserves serious attention alongside other x-risks

**The Precipice (2020)**: AI risk is the single largest existential threat this century

**Current (2024)**: Maintains 10% estimate while emphasizing governance solutions

## Key Concepts & Contributions

### Existential Security
**Definition**: State where humanity has reduced existential risks to negligible levels permanently.

**Requirements**:
- Robust institutions
- Widespread risk awareness  
- Technical safety solutions
- International coordination

### The Precipice Period
**Definition**: Current historical moment where humanity faces unprecedented risks from its own technology.

**Characteristics**:
- First time extinction risk primarily human-caused
- Technology development outpacing safety measures
- Critical decisions about humanity's future

### Value of the Future
**Framework**: Quantifying the moral importance of humanity's potential future.

**Key Insights**:
- Billions of years of potential flourishing
- Trillions of future lives at stake
- Cosmic significance of Earth-originating intelligence

## Criticisms & Limitations

### Academic Reception

| Criticism | Source | Ord's Response | Resolution |
|-----------|--------|----------------|------------|
| **Probability Estimates** | Some risk researchers | Acknowledges uncertainty, provides ranges | Ongoing debate |
| **Pascal's Mugging** | Philosophy critics | Expected value still valid with bounds | Partial consensus |
| **Tractability Concerns** | Policy experts | Emphasizes research value | Growing acceptance |
| **Timeline Precision** | AI researchers | Focuses on order of magnitude | Reasonable approach |

### Methodological Debates

**Quantification Challenges**:
- Deep uncertainty about AI development
- Model uncertainty in risk assessment
- Potential for overconfidence in estimates

**Response Strategy**: Ord emphasizes these are "rough and ready" estimates meant to guide prioritization, not precise predictions.

## Impact on AI Safety Field

### Research Prioritization Influence

| Area | Before Ord | After Ord | Change |
|------|------------|-----------|---------|
| **Funding** | &lt;\$10M annually | \$100M+ annually | 10x increase |
| **Researchers** | ≈50 full-time | 500+ full-time | 10x growth |
| **Academic Programs** | Minimal | 15+ universities | New field |
| **Policy Attention** | None | Multiple governments | Mainstream |

### Conceptual Contributions

**Risk Communication**: Made abstract x-risks concrete and actionable through quantification.

**Moral Urgency**: Connected long-term thinking with immediate research priorities.

**Resource Allocation**: Provided framework for comparing AI safety to other cause areas.

## Relationship to Key Debates

### <EntityLink id="E4">AGI Timeline Debates</EntityLink>
**Ord's Position**: Timeline uncertainty doesn't reduce priority—risk × impact still enormous.

### <EntityLink id="E272">Scaling vs. Alternative Approaches</EntityLink>
**Ord's View**: Focus on outcomes rather than methods—whatever reduces risk most effectively.

### <EntityLink id="E217">Open vs. Closed Development</EntityLink>
**Ord's Framework**: Weigh democratization benefits against <EntityLink id="E232">proliferation</EntityLink> risks case-by-case.

## Future Directions & Legacy

### Ongoing Influence Areas

| Domain | Current Impact | Projected Growth | Key Mechanisms |
|--------|---------------|------------------|----------------|
| **Academic Research** | Growing citations | Continued expansion | University curricula |
| **Policy Development** | Early adoption | Mainstream integration | Government frameworks |
| **Philanthropic Priorities** | Major redirection | Sustained focus | EA movement |
| **Public Awareness** | Significant increase | Broader recognition | Media coverage |

### Long-term Legacy Potential

**Conceptual Framework**: The Precipice may become a defining text for 21st-century risk thinking.

**Methodological Innovation**: Quantitative x-risk assessment now standard practice.

**Movement Building**: Helped transform niche academic concern into global priority.

## Sources & Resources

### Primary Sources

| Source Type | Title | Access | Key Insights |
|-------------|-------|--------|--------------|
| **Book** | <R id="3b9fccf15651dbbe">The Precipice: Existential Risk and the Future of Humanity</R> | Public | Core arguments and estimates |
| **Academic Papers** | <R id="b35e3c5d86000883">Oxford research profile</R> | Academic | Technical foundations |
| **Interviews** | <R id="35cc64aad5b46421">80,000 Hours podcasts</R> | Free | Detailed explanations |

### Key Organizations & Collaborations

| Organization | Relationship | Current Status | Focus Area |
|--------------|-------------|----------------|------------|
| Future of Humanity Institute | Former Fellow | Closed 2024 | X-risk research |
| <R id="2c28f000108e9228">Centre for Effective Altruism</R> | Advisor | Active | Movement coordination |
| <R id="934d9ccabff6be13">Oxford Uehiro Centre</R> | Fellow | Active | Practical ethics |
| <R id="d4cb3723d876ac41">Giving What We Can</R> | Co-founder | Active | Effective giving |

### Further Reading

| Category | Recommendations | Relevance |
|----------|----------------|-----------|
| **Follow-up Books** | Bostrom's Superintelligence, Russell's Human Compatible | Complementary AI risk analysis |
| **Academic Papers** | Ord's published research on moral uncertainty | Technical foundations |
| **Policy Documents** | Government reports citing Ord's work | Real-world applications |