Superintelligence: Paths, Dangers, Strategies - Wikipedia
Credibility Rating: 3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: Wikipedia
A Wikipedia overview of Bostrom's seminal 2014 book, which significantly shaped public and academic discourse on AI existential risk; useful as a quick reference for key concepts and arguments introduced in the book.
Metadata
Importance: 72/100 | wiki page | reference
Summary
Wikipedia article summarizing Nick Bostrom's influential 2014 book arguing that superintelligent AI poses existential risks to humanity. The book introduces key concepts like the orthogonality thesis, instrumental convergence, and the control problem, and argues that ensuring AI alignment is among the most important challenges facing civilization.
Key Points
- Introduces the 'orthogonality thesis': intelligence and final goals are independent, so a superintelligence could pursue any objective.
- Argues for 'instrumental convergence': most goals lead an AI to seek self-preservation, resource acquisition, and goal-content integrity.
- Presents the 'control problem': how to ensure a superintelligent AI acts in accordance with human values and intentions.
- Discusses paths to superintelligence, including whole brain emulation, biological enhancement, and artificial general intelligence.
- Warns of 'value lock-in' scenarios in which a misaligned AI permanently determines the future trajectory of civilization.
Cited by 5 pages
| Page | Type | Quality |
|---|---|---|
| AI Acceleration Tradeoff Model | Analysis | 50.0 |
| Future of Humanity Institute | Organization | 51.0 |
| AI-Induced Irreversibility | Risk | 64.0 |
| Treacherous Turn | Risk | 67.0 |
| AI Doomer Worldview | Concept | 38.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 15, 2026 | 41 KB
# Superintelligence: Paths, Dangers, Strategies
2014 book by Nick Bostrom
| First edition |
| Author | [Nick Bostrom](https://en.wikipedia.org/wiki/Nick_Bostrom "Nick Bostrom") |
| Language | [English](https://en.wikipedia.org/wiki/English_language "English language") |
| Subject | [Artificial intelligence](https://en.wikipedia.org/wiki/Artificial_intelligence "Artificial intelligence") |
| Genre | [Philosophy](https://en.wikipedia.org/wiki/Philosophy "Philosophy"), [popular science](https://en.wikipedia.org/wiki/Popular_science "Popular science") |
| Publisher | [Oxford University Press](https://en.wikipedia.org/wiki/Oxford_University_Press "Oxford University Press")[\[1\]](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies#cite_note-Superintelligent_Swede_snapped_up_by_OUP-1) |
| Publication date | July 3, 2014 (UK)<br>September 1, 2014 (US) |
| Publication place | [United Kingdom](https://en.wikipedia.org/wiki/United_Kingdom "United Kingdom") |
| Media type | Print, e-book, audiobook |
| Pages | 352 pp. |
| [ISBN](https://en.wikipedia.org/wiki/ISBN_(identifier) "ISBN (identifier)") | [978-0199678112](https://en.wikipedia.org/wiki/Special:BookSources/978-0199678112 "Special:BookSources/978-0199678112") |
| Preceded by | _[Global Catastrophic Risks](https://en.wikipedia.org/wiki/Global_Catastrophic_Risks_(book) "Global Catastrophic Risks (book)")_ |
_**Superintelligence: Paths, Dangers, Strategies**_ is a 2014 book by the philosopher [Nick Bostrom](https://en.wikipedia.org/wiki/Nick_Bostrom "Nick Bostrom"). It explores how [superintelligence](https://en.wikipedia.org/wiki/Superintelligence "Superintelligence") could be created and what its features and motivations might be.[\[2\]](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies#cite_note-:0-2) It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity.[\[3\]](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies#cite_note-Henderson-3) It was particularly influential for raising concerns about [existential risk from artificial intelligence](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence "Existential risk from artificial general intelligence").[\[4\]](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies#cite_note-new_yorker_doomsday-4)
## Synopsis
It is unknown whether human-level [artificial intelligence](https://en.wikipedia.org/wiki/Artificial_intelligence "Artificial intelligence") will arrive in a matter of years, later this century, or not until f
... (truncated, 41 KB total)
Resource ID: 0151481d5dc82963 | Stable ID: NWY3YWM3Mz