Longterm Wiki

Nick Bostrom's Homepage

web · nickbostrom.com

Nick Bostrom is one of the most influential early thinkers on AI existential risk; his homepage aggregates decades of papers relevant to alignment, existential risk, and long-term AI governance. Note: the Future of Humanity Institute (FHI) closed in 2024.

Metadata

Importance: 72/100 · homepage

Summary

Personal website of Nick Bostrom, philosopher and founding director of the Future of Humanity Institute at Oxford. He is known for foundational work on existential risk, superintelligence, simulation theory, and the ethics of emerging technologies. His book 'Superintelligence' significantly shaped mainstream discourse on AI safety.

Key Points

  • Hub for Bostrom's extensive academic publications on existential risk, AI, and transhumanism
  • Home to foundational papers on x-risk including the 'Vulnerable World Hypothesis' and 'Astronomical Waste'
  • Author of 'Superintelligence: Paths, Dangers, Strategies' (2014), a landmark text in AI safety
  • Introduced key concepts like the orthogonality thesis and instrumental convergence to AI safety discourse
  • Founding figure of the Future of Humanity Institute (FHI), a major AI safety research institution

Cited by 3 pages

| Page | Type | Quality |
|---|---|---|
| Corrigibility Failure Pathways | Analysis | 62.0 |
| Future of Humanity Institute | Organization | 51.0 |
| AI Value Lock-in | Risk | 64.0 |

3 FactBase facts citing this source

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 40 KB
### March 2026

The world is quickening, and the birth of superintelligence is presumably not very far off; yet most people are otherwise occupied.

Currently working on a couple of things related to AGI governance.

Chinese translation of _[Deep Utopia](https://nickbostrom.com/deep-utopia/)_ is now out. Also the [second print run in English](https://nickbostrom.com/booklink/deep-utopia) and the [Audiobook](https://www.audible.com/pd/Deep-Utopia-Audiobook/B0DK22GN3L) — people say the voice actor is good. The book has received still more awards.

You can sign up for the [newsletter](https://nickbostrom.com/#newsletter) to receive very rare updates, but the most reliable way to stay current is to check this page.

For more, e.g. [New Yorker profile](https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom) (old), [Bio](https://nickbostrom.com/#bio), [CV](https://nickbostrom.com/cv.pdf), [Contact](https://nickbostrom.com/#contact), [Press images](https://nickbostrom.com/press/).

### Recent additions

- [Optimal Timing for Superintelligence](https://nickbostrom.com/optimal.pdf), working paper
- [Open Global Investment as a Governance Model for AGI](https://nickbostrom.com/ogimodel.pdf), working paper
- [Sandcastles](https://nickbostrom.com/poetry/#sandcastles), poem
- [AI Creation and the Cosmic Host](https://nickbostrom.com/papers/ai-creation-and-the-cosmic-host.pdf), working paper
- [Propositions Concerning Digital Minds and Society](https://nickbostrom.com/propositions.pdf), w/ Carl Shulman, in _The Cambridge Journal of Law, Politics, and Art_
- [Base Camp for Mt. Ethics](https://nickbostrom.com/papers/mountethics.pdf), working paper
- [Sharing the World with Digital Minds](https://nickbostrom.com/papers/digital-minds.pdf), w/ Carl Shulman, in [edited volume](https://global.oup.com/academic/product/rethinking-moral-status-9780192894076?cc=gb&lang=en&) (Oxford University Press, 2021)

## Selected papers

### Ethics & Policy

AIs with moral status and political rights? We'll need a modus vivendi, and it’s becoming urgent to figure out the parameters for that. This paper makes a load of specific claims that begin to stake out a position.

There may well exist a normative structure, based on the preferences or concordats of a cosmic host, that has high relevance to the development of AI.

AI being developed by one or more market-traded corporations has something to be said for it.

Even for quite high values of P(doom), the life expectancy for us existing people seems higher if AGI is developed quite soon.

Humans are relatively expensive but absolutely cheap.

#### [Strategic Implications of Openness in AI Development](https://nickbostrom.com/papers/openness.pdf)

An analysis of the global desirability of different forms of openness (including source code, science, data, safety techniques, capabilities, and goals).

#### [The Reversal Test: Eliminating Status Quo Bias in Applied Ethics](https://nickbostrom.com/ethics/st

... (truncated, 40 KB total)