Toby Ord
Overview
Toby Ord is a moral philosopher at Oxford University whose 2020 book "The Precipice" reshaped how researchers and policymakers think about existential risk. His quantitative estimates, a 1-in-10 chance of existential catastrophe from unaligned AI this century and roughly 1-in-6 total existential risk, became widely cited anchors for AI risk discourse and resource-allocation decisions.
Ord's work bridges rigorous philosophical analysis and accessible public communication, helping bring existential risk concepts into the mainstream while providing much of the intellectual foundation for the effective altruism movement. His framework for evaluating humanity's long-term potential continues to influence policy, research priorities, and AI safety governance.
Risk Assessment & Influence
| Risk Category | Ord's Estimate | Impact on Field | Key Insight |
|---|---|---|---|
| Unaligned AI | 10% this century | Became standard anchor | Largest single risk |
| Total X-Risk | 1-in-6 this century | Galvanized movement | Unprecedented danger |
| Natural Risks | <0.01% combined | Shifted focus | Technology dominates |
| Nuclear War | 0.1% extinction | Policy discussions | Civilization threat |
Field Impact: Ord's estimates influenced $10+ billion in philanthropic commitments (notably Open Philanthropy's global catastrophic risks grantmaking) and shaped government AI policies, such as the Bletchley Declaration, across multiple countries.
Academic Background & Credentials
| Institution | Role | Period | Achievement |
|---|---|---|---|
| Oxford University | Senior Research Fellow | 2009-present | Moral philosophy focus |
| Future of Humanity Institute | Research Fellow | 2009-2024 | X-risk specialization |
| Oxford | PhD Philosophy | 2001-2005 | Foundations in ethics |
| Giving What We Can | Co-founder | 2009 | EA movement launch |
Key Affiliations: Oxford Uehiro Centre, Centre for Effective Altruism, and the former Future of Humanity Institute (closed April 2024)
The Precipice: Landmark Contributions
Quantitative Risk Framework
In "The Precipice," Ord provided explicit probability estimates for various existential risks over the 21st century. These quantitative assessments became foundational anchors for the existential risk community, establishing a shared vocabulary for discussing comparative risk magnitudes. His estimates combined historical base rates, expert interviews, and philosophical reasoning about technological trajectory to arrive at what he explicitly frames as "rough and ready" estimates meant to guide prioritization rather than precise predictions.
| Risk Category | Estimate | Reasoning |
|---|---|---|
| Unaligned AI | 10% (1 in 10) | Ord identifies artificial intelligence as the single largest existential risk facing humanity this century. This estimate reflects the unprecedented potential for AI systems to exceed human capabilities across all domains, combined with fundamental difficulties in ensuring alignment between AI goals and human values. The probability is notably higher than other technological risks due to the rapid pace of AI development, the possibility of recursive self-improvement, and the one-shot nature of the control problem—once a sufficiently powerful misaligned AI is deployed, correction opportunities may be irreversibly lost. |
| Engineered Pandemics | 3.3% (1 in 30) | The second-largest risk stems from advances in biotechnology that could enable the deliberate creation of highly lethal and transmissible pathogens. Ord's estimate accounts for the dual-use nature of biological research, the diffusion of bioengineering knowledge and tools, and the potential for both state and non-state actors to develop bioweapons. Unlike natural pandemics, engineered pathogens could be designed specifically for lethality, contagiousness, and resistance to countermeasures, making them substantially more dangerous than naturally occurring diseases. |
| Nuclear War | 0.1% (1 in 1,000) | While nuclear conflict could cause civilization collapse and hundreds of millions of deaths, Ord assesses the probability of actual human extinction from nuclear war as relatively low. Nuclear winter effects, while catastrophic for civilization, would likely leave some surviving human populations. The estimate reflects both the continued existence of massive nuclear arsenals and the various near-miss incidents throughout the Cold War and after, balanced against the stabilizing effects of deterrence theory and the reduced tensions following the Soviet Union's collapse. |
| Natural Pandemics | 0.01% (1 in 10,000) | Based on historical precedent, naturally occurring pandemics pose minimal existential risk despite their potential for massive death tolls. No natural disease in human history has threatened complete extinction, and evolutionary pressures generally select against pathogens that kill all their hosts. While pandemics like COVID-19 demonstrate society's vulnerability to natural disease emergence, the historical base rate for extinction-level natural pandemics is extremely low compared to anthropogenic risks. |
| Climate Change | 0.1% (1 in 1,000) | Ord's climate change estimate reflects his assessment that while climate change represents a catastrophic risk to civilization with potential for hundreds of millions of deaths and massive ecological damage, the probability of it directly causing human extinction remains low. Humans are highly adaptable and geographically distributed, making complete extinction from climate effects unlikely even under worst-case warming scenarios. However, climate change could contribute to civilizational collapse or combine with other risks in dangerous ways. |
| Total All Risks | 16.7% (1 in 6) | Ord's combined estimate aggregates all existential risks—both those listed explicitly and other potential threats—to arrive at approximately one-in-six odds that humanity faces an existential catastrophe this century. This aggregate figure accounts for potential interactions between risks and unknown threats not captured in individual categories. The estimate represents an unprecedented level of danger compared to any other century in human history, primarily driven by humanity's rapidly advancing technological capabilities outpacing our wisdom and coordination mechanisms for managing those technologies safely. |
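Ord's 1-in-6 total is a holistic judgment that also covers risks not itemized above, so it is not derived by any simple formula. As a rough illustration only, the Python sketch below combines the table's figures under an independence assumption (which the book does not make) and shows how far short of 1-in-6 the listed risks fall on their own.

```python
# Illustrative only: Ord's 1-in-6 total is a holistic judgment that includes
# unlisted and unknown risks; it is NOT computed this way. This sketch simply
# combines the table's per-risk figures as if they were independent.

risks = {
    "unaligned_ai": 0.10,
    "engineered_pandemics": 1 / 30,
    "nuclear_war": 0.001,
    "natural_pandemics": 0.0001,
    "climate_change": 0.001,
}

survival = 1.0
for p in risks.values():
    survival *= 1 - p          # probability of avoiding this particular catastrophe

combined = 1 - survival        # chance of at least one catastrophe, if independent
print(f"Combined listed risks: {combined:.3f}")                 # ~0.13
print(f"Gap left for unlisted/unknown risks: {1/6 - combined:.3f}")  # ~0.04 below 1-in-6
```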
Book Impact Metrics
| Metric | Achievement | Source |
|---|---|---|
| Sales | 50,000+ copies first year | Publisher data (Hachette) |
| Citations | 1,000+ academic papers | Google Scholar |
| Policy Influence | Cited in 15+ government reports | Various government sources |
| Media Coverage | 200+ interviews/articles | Media tracking |
AI Risk Analysis & Arguments
Why AI Poses Unique Existential Threat
| Risk Factor | Assessment | Evidence | Comparison to Other Risks |
|---|---|---|---|
| Power Potential | Unprecedented | Could exceed human intelligence across all domains | Nuclear: Limited scope |
| Development Speed | Rapid acceleration | Recursive self-improvement possible | Climate: Slow progression |
| Alignment Difficulty | Extremely hard | Mesa-optimization, goal misgeneralization | Pandemics: Natural selection |
| Irreversibility | One-shot problem | Hard to correct after deployment | Nuclear: Recoverable |
| Control Problem | Fundamental | No guaranteed off-switch | Bio: Containable |
Key Arguments from The Precipice
The Intelligence Explosion Argument:
- AI systems could rapidly improve their own intelligence
- Human-level AI → Superhuman AI in short timeframe
- Leaves little time for safety measures or course correction
- Links to takeoff dynamics research (a toy numerical illustration follows this list)
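The pace claim in this argument is qualitative in the book. The toy simulation below (not from Ord; the growth rates and feedback exponent are illustrative assumptions) shows why a capability that feeds back into its own rate of improvement leaves far less time for correction than steady progress.

```python
# Toy illustration (not from The Precipice): capability that accelerates its own
# improvement diverges rapidly from capability improved at a fixed rate, which is
# the intuition behind "little time for safety measures or course correction".

def improve(capability: float, years: int, feedback: float) -> list[float]:
    """Each year, progress is proportional to capability raised to `feedback`."""
    trajectory = [capability]
    for _ in range(years):
        capability += 0.1 * capability ** feedback
        trajectory.append(capability)
    return trajectory

fixed_rate = improve(1.0, 30, feedback=0.0)   # progress independent of current capability
recursive = improve(1.0, 30, feedback=1.5)    # progress accelerates with capability

print(f"After 30 years: fixed-rate {fixed_rate[-1]:.1f}, recursive {recursive[-1]:.3g}")
```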
The Alignment Problem:
- No guarantee AI goals align with human values
- Instrumental convergence toward problematic behaviors
- Technical alignment difficulty compounds over time
Philosophical Frameworks
Existential Risk Definition
Ord's three-part framework for existential catastrophes:
| Type | Definition | Examples | Prevention Priority |
|---|---|---|---|
| Extinction | Death of all humans | Asteroid impact, AI takeover | Highest |
| Unrecoverable Collapse | Civilization permanently destroyed | Nuclear winter, climate collapse | High |
| Unrecoverable Dystopia | Permanent lock-in of bad values | Totalitarian surveillance state | High |
Moral Case for Prioritization
Expected Value Framework:
- Future contains potentially trillions of lives
- Preventing extinction saves all future generations
- Even small probability reductions have enormous expected value
- Mathematical intuition: priority scales with (reduction in extinction probability) × (value of the future); a worked sketch follows this list
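A minimal numerical sketch of that reasoning, using illustrative figures that are not Ord's (the future-lives count and risk reduction are assumptions chosen for the example):

```python
# Illustrative numbers only: the point is that even a tiny reduction in
# extinction probability carries enormous expected value when the future
# could contain a vast number of lives.

future_lives = 1e12          # assumed potential future lives (conservative next to
                             # Ord's "trillions or more")
risk_reduction = 0.001       # intervention shaves 0.1 percentage points off extinction risk

expected_lives_preserved = risk_reduction * future_lives
print(f"Expected future lives preserved: {expected_lives_preserved:,.0f}")  # 1,000,000,000
```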
Cross-Paradigm Agreement:
| Ethical Framework | Reason to Prioritize X-Risk | Strength |
|---|---|---|
| Consequentialism | Maximizes expected utility | Strong |
| Deontology | Duty to future generations | Moderate |
| Virtue Ethics | Guardianship virtue | Moderate |
| Common-Sense | Save lives principle | Strong |
Effective Altruism Foundations
Cause Prioritization Framework
Ord co-developed EA's core cause-prioritization methodology (a scoring sketch follows the table):
| Criterion | Definition | AI Risk Assessment | Score (1-5) |
|---|---|---|---|
| Importance | Scale of problem | All of humanity's future | 5 |
| Tractability | Can we make progress? | Technical solutions possible | 3 |
| Neglectedness | Others working on it? | Few researchers relative to stakes | 5 |
| Overall | Combined assessment | Top global priority | 4.3 |
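The "Overall" figure in the table is consistent with a simple average of the three criterion scores; the sketch below reproduces it. (Formal importance-tractability-neglectedness analyses often multiply factors on logarithmic scales rather than averaging; this only mirrors the table's arithmetic.)

```python
# Reproduce the table's "Overall" score as the mean of the three criteria.
scores = {"importance": 5, "tractability": 3, "neglectedness": 5}
overall = sum(scores.values()) / len(scores)
print(f"Overall: {overall:.1f}")   # 4.3
```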
Movement Building Impact
| Initiative | Role | Impact | Current Status |
|---|---|---|---|
| Giving What We Can | Co-founder (2009) | $200M+ pledged | Active |
| EA Concepts | Intellectual foundation | 10,000+ career changes | Mainstream |
| X-Risk Prioritization | Philosophical justification | $1B+ funding shift | Growing |
Public Communication & Influence
Media & Outreach Strategy
High-Impact Platforms:
- 80,000 Hours Podcast (1M+ downloads)
- TED Talks and university lectures
- New York Times and Guardian op-eds
- Policy briefings for UK Parliament and the United Nations
Communication Effectiveness
| Audience | Strategy | Success Metrics | Impact |
|---|---|---|---|
| General Public | Accessible writing, analogies | Book sales, media coverage | High awareness |
| Academics | Rigorous arguments, citations | Academic adoption | Growing influence |
| Policymakers | Risk quantification, briefings | Policy mentions | Moderate uptake |
| Philanthropists | Expected value arguments | Funding redirected | Major success |
Policy & Governance Influence
Government Engagement
| Country | Engagement Type | Policy Impact | Status |
|---|---|---|---|
| United Kingdom | Parliamentary testimony | AI White Paper mentions | Ongoing |
| United States | Think tank briefings | NIST AI framework input | Active |
| European Union | Academic consultations | AI Act considerations | Limited |
| International | UN presentations | Global cooperation discussions | Early stage |
Key Policy Contributions
Risk Assessment Methodology:
- Quantitative frameworks for government risk analysis
- Long-term thinking in policy planning
- Cross-generational ethical considerations
International Coordination:
- Argues for global cooperation on AI governance
- Emphasizes shared humanity stake in outcomes
- Links to international governance discussions
Current Research & Focus Areas
Active Projects (2024-Present)
| Project | Description | Collaboration | Timeline |
|---|---|---|---|
| Long Reflection | Framework for humanity's values deliberation | Oxford philosophers | Ongoing |
| X-Risk Quantification | Refined probability estimates | GiveWell, researchers | 2024-2025 |
| Policy Frameworks | Government risk assessment tools | RAND Corporation | Active |
| EA Development | Next-generation prioritization | Coefficient Giving (formerly Open Philanthropy) | Ongoing |
The Long Reflection Concept
Core Idea: Once humanity achieves existential security, we should take time to carefully determine our values and future direction.
Key Components:
- Moral uncertainty and value learning
- Democratic deliberation at global scale
- Avoiding lock-in of current values
- Ensuring transformative decisions are reversible
Intellectual Evolution & Timeline
| Period | Focus | Key Outputs | Impact |
|---|---|---|---|
| 2005-2009 | Global poverty | PhD thesis, early EA | Movement foundation |
| 2009-2015 | EA development | Giving What We Can, prioritization | Community building |
| 2015-2020 | X-risk research | The Precipice writing | Risk quantification |
| 2020-Present | Implementation | Policy work, refinement | Mainstream adoption |
Evolving Views on AI Risk
Early Position (2015): AI risk deserves serious attention alongside other x-risks
The Precipice (2020): AI risk is the single largest existential threat this century
Current (2024): Maintains 10% estimate while emphasizing governance solutions
Key Concepts & Contributions
Existential Security
Definition: State where humanity has reduced existential risks to negligible levels permanently.
Requirements:
- Robust institutions
- Widespread risk awareness
- Technical safety solutions
- International coordination
The Precipice Period
Definition: Current historical moment where humanity faces unprecedented risks from its own technology.
Characteristics:
- First time extinction risk primarily human-caused
- Technology development outpacing safety measures
- Critical decisions about humanity's future
Value of the Future
Framework: Quantifying the moral importance of humanity's potential future.
Key Insights:
- Billions of years of potential flourishing
- Trillions of future lives at stake
- Cosmic significance of Earth-originating intelligence
Criticisms & Limitations
Academic Reception
| Criticism | Source | Ord's Response | Resolution |
|---|---|---|---|
| Probability Estimates | Some risk researchers | Acknowledges uncertainty, provides ranges | Ongoing debate |
| Pascal's Mugging | Philosophy critics | Expected value still valid with bounds | Partial consensus |
| Tractability Concerns | Policy experts | Emphasizes research value | Growing acceptance |
| Timeline Precision | AI researchers | Focuses on order of magnitude | Reasonable approach |
Methodological Debates
Quantification Challenges:
- Deep uncertainty about AI development
- Model uncertainty in risk assessment
- Potential for overconfidence in estimates
Response Strategy: Ord emphasizes these are "rough and ready" estimates meant to guide prioritization, not precise predictions.
Impact on AI Safety Field
Research Prioritization Influence
| Area | Before Ord | After Ord | Change |
|---|---|---|---|
| Funding | <$10M annually | $100M+ annually | 10x increase |
| Researchers | ≈50 full-time | 500+ full-time | 10x growth |
| Academic Programs | Minimal | 15+ universities | New field |
| Policy Attention | None | Multiple governments | Mainstream |
Conceptual Contributions
Risk Communication: Made abstract x-risks concrete and actionable through quantification.
Moral Urgency: Connected long-term thinking with immediate research priorities.
Resource Allocation: Provided framework for comparing AI safety to other cause areas.
Relationship to Key Debates
AGI Timeline Debates
Ord's Position: Timeline uncertainty doesn't reduce priority—risk × impact still enormous.
Scaling vs. Alternative Approaches
Ord's View: Focus on outcomes rather than methods—whatever reduces risk most effectively.
Open vs. Closed Development
Ord's Framework: Weigh democratization benefits against proliferation risks case-by-case.
Future Directions & Legacy
Ongoing Influence Areas
| Domain | Current Impact | Projected Growth | Key Mechanisms |
|---|---|---|---|
| Academic Research | Growing citations | Continued expansion | University curricula |
| Policy Development | Early adoption | Mainstream integration | Government frameworks |
| Philanthropic Priorities | Major redirection | Sustained focus | EA movement |
| Public Awareness | Significant increase | Broader recognition | Media coverage |
Long-term Legacy Potential
Conceptual Framework: The Precipice may become a defining text for 21st-century risk thinking.
Methodological Innovation: Quantitative x-risk assessment now standard practice.
Movement Building: Helped transform niche academic concern into global priority.
Sources & Resources
Primary Sources
| Source Type | Title | Access | Key Insights |
|---|---|---|---|
| Book | The Precipice: Existential Risk and the Future of Humanity (2020) | Public | Core arguments and estimates |
| Academic Papers | Oxford Uehiro Centre research profile | Academic | Technical foundations |
| Interviews | 80,000 Hours podcast | Free | Detailed explanations |
Key Organizations & Collaborations
| Organization | Relationship | Current Status | Focus Area |
|---|---|---|---|
| Future of Humanity Institute | Former Fellow | Closed 2024 | X-risk research |
| Centre for Effective Altruism | Advisor | Active | Movement coordination |
| Oxford Uehiro Centre | Fellow | Active | Practical ethics |
| Giving What We Can | Co-founder | Active | Effective giving |
Further Reading
| Category | Recommendations | Relevance |
|---|---|---|
| Follow-up Books | Bostrom's Superintelligence, Russell's Human Compatible | Complementary AI risk analysis |
| Academic Papers | Ord's published research on moral uncertainty | Technical foundations |
| Policy Documents | Government reports citing Ord's work | Real-world applications |
References
- **The Precipice: Existential Risk and the Future of Humanity** (Toby Ord, 2020). Argues that humanity faces unprecedented existential risks from nuclear weapons, engineered pandemics, and unaligned AI, and that reducing these risks is among the most pressing moral priorities of our time; grounds longtermism in analysis of risk probabilities and the case for safeguarding humanity's long-run future.
- **Oxford Uehiro Centre profile: Toby Ord.** Official profile page for Ord at the Oxford Uehiro Centre for Practical Ethics, covering his work on global catastrophic risks, longtermism, and humanity's long-term future.
- **80,000 Hours Podcast: Toby Ord on The Precipice.** Interview covering quantitative estimates of existential risks from natural and anthropogenic sources, including AI, bioweapons, nuclear war, and climate change; Ord argues humanity is at a uniquely dangerous "hinge of history" and outlines both the moral case for prioritizing existential risk reduction and practical policy recommendations.
- **Future of Humanity Institute.** Oxford University research center foundational in establishing existential risk research and AI safety; it closed on 16 April 2024, and its website now serves as an archived record of the institution's history, research agenda, and legacy.
- **Centre for Effective Altruism.** The primary organizational hub of the effective altruism movement, supporting community growth through conferences, local group funding, online forums, and grants, and operating infrastructure including the EA Forum, EA Funds, and effectivealtruism.org.
- **Giving What We Can.** Effective altruism organization that encourages individuals to pledge a portion of their income (typically 10%) to the most cost-effective charities, providing resources, community support, and research for evidence-based giving.
- **Open Philanthropy: Global Catastrophic Risks.** Describes Open Philanthropy's grantmaking on global catastrophic risks, including AI safety and biosecurity; Open Philanthropy is among the largest funders in the AI safety and existential risk space.
- **Open Philanthropy grants database.** Provides transparency into which organizations and research directions receive funding across global health, AI safety, biosecurity, and other cause areas.
- **GiveWell.** Nonprofit charity evaluator that researches and recommends highly effective, evidence-based giving opportunities, primarily in global health and poverty.
- **Oxford Uehiro Centre for Practical Ethics.** Applies philosophical analysis to ethical issues arising from science, technology, and public policy, including AI ethics, bioethics, existential risk, and moral philosophy.
- **The Bletchley Declaration (AI Safety Summit, 2023).** Multinational agreement committing participating nations to collaborative efforts on AI safety; one of the first major intergovernmental consensus documents explicitly addressing catastrophic and existential risks from frontier AI systems.
- **UK AI White Paper: A Pro-Innovation Approach to AI Regulation.** The UK government's principles-based AI regulatory framework, establishing five cross-sectoral principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
- **UK Government (gov.uk).** Central portal for UK government policies, publications, and regulatory guidance, including documents relevant to AI governance and safety regulation.
- **UK Parliament.** Official source for legislative proceedings, parliamentary debates, committee reports, and policy documents, including matters related to AI governance and technology regulation.
- **United Nations.** The primary international intergovernmental organization for peace, security, and cooperation, serving as a hub for UN agencies, treaties, resolutions, and global governance initiatives.
- **RAND Corporation.** Nonprofit research organization providing analysis and policy recommendations on national security, technology, governance, and emerging risks, including influential studies on AI policy and cybersecurity frequently cited by governments.
- **Google Scholar.** Academic search engine indexing scholarly literature across disciplines; cited here for citation counts of Ord's work.
- **Hachette Books.** Publisher homepage, cited for first-year sales figures for The Precipice.
- **TED: Toby Ord speaker page.** Speaker profile for Ord's talks on global catastrophic risks and humanity's long-term future (no longer available at its original URL).
- **The New York Times** and **The Guardian.** Cited for Ord's op-eds and media coverage; the linked pages are general homepages rather than specific articles.