Nick Bostrom
nick-bostrom (E215)
Path: /knowledge-base/people/nick-bostrom/
Page Metadata
{
"id": "nick-bostrom",
"numericId": null,
"path": "/knowledge-base/people/nick-bostrom/",
"filePath": "knowledge-base/people/nick-bostrom.mdx",
"title": "Nick Bostrom",
"quality": 25,
"importance": 22,
"contentFormat": "article",
"tractability": null,
"neglectedness": null,
"uncertainty": null,
"causalLevel": null,
"lastUpdated": "2025-12-24",
"llmSummary": "Comprehensive biographical profile of Nick Bostrom covering his founding of FHI, the landmark 2014 book 'Superintelligence' that popularized AI existential risk, and key philosophical contributions (orthogonality thesis, instrumental convergence, treacherous turn). The page documents his influence on the field but provides limited quantitative evidence or citations for claims about impact.",
"structuredSummary": null,
"description": "Philosopher at FHI, author of 'Superintelligence'",
"ratings": {
"novelty": 1.5,
"rigor": 3,
"actionability": 1,
"completeness": 6
},
"category": "people",
"subcategory": null,
"clusters": [
"ai-safety",
"governance"
],
"metrics": {
"wordCount": 943,
"tableCount": 0,
"diagramCount": 0,
"internalLinks": 8,
"externalLinks": 0,
"footnoteCount": 0,
"bulletRatio": 0.62,
"sectionCount": 23,
"hasOverview": false,
"structuralScore": 4
},
"suggestedQuality": 27,
"updateFrequency": 45,
"evergreen": true,
"wordCount": 943,
"unconvertedLinks": [],
"unconvertedLinkCount": 0,
"convertedLinkCount": 0,
"backlinkCount": 2,
"redundancy": {
"maxSimilarity": 15,
"similarPages": [
{
"id": "eliezer-yudkowsky",
"title": "Eliezer Yudkowsky",
"path": "/knowledge-base/people/eliezer-yudkowsky/",
"similarity": 15
},
{
"id": "stuart-russell",
"title": "Stuart Russell",
"path": "/knowledge-base/people/stuart-russell/",
"similarity": 15
},
{
"id": "miri-era",
"title": "The MIRI Era (2000-2015)",
"path": "/knowledge-base/history/miri-era/",
"similarity": 14
},
{
"id": "fhi",
"title": "Future of Humanity Institute (FHI)",
"path": "/knowledge-base/organizations/fhi/",
"similarity": 14
},
{
"id": "toby-ord",
"title": "Toby Ord",
"path": "/knowledge-base/people/toby-ord/",
"similarity": 14
}
]
}
}
Entity Data
{
"id": "nick-bostrom",
"type": "person",
"title": "Nick Bostrom",
"description": "Nick Bostrom is a philosopher who founded the Future of Humanity Institute (FHI) at Oxford University and authored \"Superintelligence: Paths, Dangers, Strategies\" (2014), the book that brought AI existential risk into mainstream academic and policy discourse. His work laid the conceptual foundations for much of modern AI safety thinking.\n\nBostrom's key contributions include the orthogonality thesis (intelligence and goals are independent - a superintelligent AI could pursue any objective), instrumental convergence (most goal-pursuing systems will converge on certain subgoals like self-preservation and resource acquisition), and the concept of the \"treacherous turn\" (an AI might behave well until it's powerful enough to act on misaligned goals). These ideas are now standard reference points in AI safety discussions.\n\nBeyond AI, Bostrom has shaped the broader study of existential risk as an academic field, arguing that reducing the probability of human extinction should be a global priority given the astronomical value of humanity's potential future. Though FHI closed in 2024 due to administrative issues at Oxford, its influence persists through the researchers it trained and the research agendas it established. Bostrom's work continues to frame how many researchers and policymakers think about the stakes of advanced AI development.\n",
"tags": [
"superintelligence",
"x-risk",
"orthogonality-thesis",
"instrumental-convergence",
"treacherous-turn",
"value-alignment",
"control-problem"
],
"relatedEntries": [
{
"id": "instrumental-convergence",
"type": "risk"
},
{
"id": "treacherous-turn",
"type": "risk"
},
{
"id": "toby-ord",
"type": "researcher"
}
],
"sources": [
{
"title": "Nick Bostrom's Website",
"url": "https://nickbostrom.com"
},
{
"title": "Superintelligence (book)",
"url": "https://www.superintelligence.com/"
},
{
"title": "FHI Publications",
"url": "https://www.fhi.ox.ac.uk/publications/"
},
{
"title": "Existential Risk Prevention as Global Priority",
"url": "https://www.existential-risk.org/concept.html"
}
],
"lastUpdated": "2025-12",
"website": "https://nickbostrom.com",
"customFields": []
}
Canonical Facts (0)
No facts for this entity
External Links
{
"wikipedia": "https://en.wikipedia.org/wiki/Nick_Bostrom",
"lesswrong": "https://www.lesswrong.com/tag/nick-bostrom",
"wikidata": "https://www.wikidata.org/wiki/Q460475"
}
Backlinks (2)
| id | title | type | relationship |
|---|---|---|---|
| fhi | Future of Humanity Institute | organization | — |
| toby-ord | Toby Ord | researcher | — |
Frontmatter
{
"title": "Nick Bostrom",
"description": "Philosopher at FHI, author of 'Superintelligence'",
"sidebar": {
"order": 6
},
"quality": 25,
"llmSummary": "Comprehensive biographical profile of Nick Bostrom covering his founding of FHI, the landmark 2014 book 'Superintelligence' that popularized AI existential risk, and key philosophical contributions (orthogonality thesis, instrumental convergence, treacherous turn). The page documents his influence on the field but provides limited quantitative evidence or citations for claims about impact.",
"lastEdited": "2025-12-24",
"importance": 22,
"update_frequency": 45,
"ratings": {
"novelty": 1.5,
"rigor": 3,
"actionability": 1,
"completeness": 6
},
"clusters": [
"ai-safety",
"governance"
],
"entityType": "person"
}
Raw MDX Source
---
title: Nick Bostrom
description: Philosopher at FHI, author of 'Superintelligence'
sidebar:
order: 6
quality: 25
llmSummary: Comprehensive biographical profile of Nick Bostrom covering his founding of FHI, the landmark 2014 book 'Superintelligence' that popularized AI existential risk, and key philosophical contributions (orthogonality thesis, instrumental convergence, treacherous turn). The page documents his influence on the field but provides limited quantitative evidence or citations for claims about impact.
lastEdited: "2025-12-24"
importance: 22
update_frequency: 45
ratings:
novelty: 1.5
rigor: 3
actionability: 1
completeness: 6
clusters: ["ai-safety","governance"]
entityType: person
---
import {DataInfoBox, DataExternalLinks, EntityLink} from '@components/wiki';
<DataExternalLinks pageId="nick-bostrom" />
<DataInfoBox entityId="E215" />
## Background
Nick Bostrom is a Swedish-born philosopher who founded the <EntityLink id="E140">Future of Humanity Institute</EntityLink> at Oxford University in 2005 and directed it until its closure in 2024. He is widely recognized for bringing academic rigor to the study of existential risks and transformative technologies.
Academic background:
- PhD in Philosophy from London School of Economics (2000)
- Professor at Oxford University
- Director of FHI (2005-2024, until institute closure)
- Published extensively in philosophy, ethics, and technology
His 2014 book "Superintelligence: Paths, Dangers, Strategies" brought AI existential risk into mainstream discourse and influenced many current safety researchers.
## Major Contributions
### Superintelligence (2014)
This landmark book:
- Systematically analyzed paths to superintelligence
- Outlined control problems and failure modes
- Introduced key concepts like orthogonality thesis and <EntityLink id="E168">instrumental convergence</EntityLink>
- Made AI risk intellectually respectable
- Influenced figures like <EntityLink id="E116">Elon Musk</EntityLink> and Bill Gates
The book's concepts and vocabulary fundamentally shaped how researchers, technologists, and policymakers think about advanced AI risks.
### Existential Risk Framework
Bostrom pioneered the academic study of existential risks:
- Defined existential risk precisely
- Argued for extreme importance (affects all future generations)
- Created framework for analyzing different risks
- Emphasized need for research and prevention
### Key Philosophical Contributions
**Orthogonality Thesis**: Intelligence and final goals are orthogonal; in principle, almost any level of intelligence can be combined with almost any final goal, including ones harmful to humans.
**Instrumental Convergence**: Many different final goals lead to similar instrumental goals (resource acquisition, self-preservation, etc.), creating predictable risks.
**<EntityLink id="E359">Treacherous Turn</EntityLink>**: Sufficiently intelligent systems might behave cooperatively until they're powerful enough to achieve goals without constraint.
### Simulation Hypothesis
While not directly related to AI safety, Bostrom's simulation argument has influenced thinking about:
- Nature of intelligence and consciousness
- Future technological capabilities
- Philosophical implications of advanced AI
## Views on AI Risk
### Core Arguments
1. **Superintelligence is possible**: No fundamental barrier to intelligence far exceeding human level
2. **Default outcome is bad**: Without careful preparation, superintelligent AI would likely not share human values
3. **Control is extremely difficult**: Once superintelligence exists, control may be impossible
4. **Prevention is crucial**: Must solve alignment before superintelligence emerges
5. **Stakes are existential**: Failure could mean human extinction or permanent loss of potential
### On Timelines
Bostrom has been relatively cautious about timelines:
- Emphasizes uncertainty
- Argues we should prepare even for unlikely scenarios
- More focused on thinking through problems than predicting dates
- "Superintelligence" discussed various paths with different timelines
### On Solutions
"Superintelligence" explored several potential solutions:
- **Boxing**: Physically or informationally constraining AI
- **Capability control**: Limiting what AI can do
- **Motivation selection**: Choosing safe goals/values
- **Value learning**: AI learning human values
- **<EntityLink id="E502">Whole brain emulation</EntityLink>**: Alternative path to superintelligence
He is generally skeptical that simple solutions will work, emphasizing the complexity of the control problem.
## Influence and Impact
### Academic Field Building
- Founded FHI, which became a major hub for existential risk research
- Supervised numerous PhD students in x-risk
- Published in top philosophy journals on AI and existential risk
- Made studying AI risk academically legitimate
### Public Awareness
- "Superintelligence" became bestseller
- Read by tech leaders, policymakers, and researchers
- Sparked broader conversation about AI risks
- Influenced funding decisions (e.g., <EntityLink id="E521">Coefficient Giving</EntityLink>'s AI focus)
### Policy Influence
- Advised governments on emerging technologies
- Influenced discussions at UN and other international bodies
- Work cited in policy documents on <EntityLink id="E608">AI governance</EntityLink>
### Research Community
- Concepts from "Superintelligence" now standard in AI safety
- Framework influences how researchers think about risks
- Many current safety researchers cite book as influential
## Other Work
Beyond AI, Bostrom has contributed to:
- **Human enhancement ethics**: Should we enhance human capabilities?
- **Global catastrophic risks**: Asteroids, pandemics, nuclear war
- **Information hazards**: Risks from knowledge itself
- **<EntityLink id="E22">Anthropic</EntityLink> reasoning**: How to reason about observer selection effects
## Controversies and Criticisms
### FHI Closure (2024)
FHI closed in April 2024 after prolonged administrative friction with Oxford's Faculty of Philosophy, including a multi-year freeze on fundraising and hiring. This ended a major chapter in existential risk research, though many former FHI researchers continue the work elsewhere.
### Criticisms of "Superintelligence"
**Some argue:**
- Overestimates difficulty of alignment
- Underestimates difficulty of achieving superintelligence
- Too focused on specific scenarios
- Anthropomorphizes AI systems
**Supporters counter:**
- Book was prescient about many challenges now visible
- Appropriately cautious given stakes
- Scenarios remain plausible
- Better to overestimate risks than underestimate
### Academic vs. Applied Research
Some critics argue:
- FHI did too much philosophical work, not enough technical research
- Frameworks don't translate directly to engineering solutions
Others counter:
- Conceptual clarity is essential foundation
- Philosophy identifies problems engineers then solve
- FHI's role was complementary to technical work
## Evolution of Views
**Early work (1990s-2000s):**
- Broad focus on existential risks
- Technological optimism balanced with caution
- Development of existential risk framework
**Superintelligence era (2010s):**
- Deep dive into AI-specific risks
- Systematic analysis of control problems
- Major public communication effort
**Recent (2020s):**
- Less public-facing work
- Continued academic research
- More focus on other existential risks
## Legacy
Bostrom's lasting contributions include:
1. **Intellectual framework**: Concepts and vocabulary for discussing AI risk
2. **Academic legitimacy**: Made existential risk a serious field of study
3. **Institution building**: FHI trained a generation of x-risk researchers
4. **Public awareness**: Brought risks to attention of decision-makers
5. **Rigorous analysis**: Demonstrated philosophical methods can illuminate AI safety
Even critics acknowledge his role in establishing AI safety as a field.
## Key Publications
- **"Superintelligence: Paths, Dangers, Strategies"** (2014) - The landmark book
- **"Existential Risk Prevention as Global Priority"** (2013) - Framework for x-risk
- **"Ethical Issues in Advanced Artificial Intelligence"** (2003) - Early AI safety paper
- **"Are You Living in a Computer Simulation?"** (2003) - Simulation argument
- **"The Vulnerable World Hypothesis"** (2019) - Risks from technological development