Longterm Wiki

Credibility Rating

2/5 (Mixed)

Mixed quality. Some useful content but inconsistent editorial standards. Claims should be verified.

Rating inherited from publication venue: Amazon

The credibility rating is inherited from the venue (Amazon); the underlying resource is Bostrom's foundational AI safety book itself, which is widely considered essential reading for anyone entering the field and which heavily influenced subsequent research agendas at organizations such as MIRI, FHI, and OpenAI.

Metadata

Importance: 90/100
Tags: book, primary source

Summary

Nick Bostrom's seminal 2014 book examining the potential risks posed by the development of machine superintelligence, arguing that a sufficiently advanced AI could pursue goals misaligned with human values and potentially pose an existential threat. The book explores paths to superintelligence, control problems, and strategies for ensuring beneficial outcomes. It became a foundational text in the AI safety field, bringing the alignment problem to mainstream academic and public attention.

Key Points

  • Analyzes multiple pathways to superintelligence including whole brain emulation, biological enhancement, and recursive self-improvement via AI
  • Introduces the 'control problem': how to ensure a superintelligent agent pursues goals aligned with human values rather than defaulting to instrumentally convergent power-seeking
  • Argues that a sufficiently capable AI could undergo rapid capability gains (an intelligence explosion), making post-hoc correction extremely difficult; a schematic of this takeoff argument appears after the list
  • Proposes capability control and motivation selection as broad strategies for managing superintelligent systems safely
  • Introduces influential concepts, including the orthogonality thesis and instrumental convergence, that remain foundational to modern alignment discourse
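
The takeoff argument above can be summarized with the heuristic Bostrom uses in his chapter on the kinetics of an intelligence explosion: the rate of intelligence growth equals the optimization power applied to improving the system divided by the system's recalcitrance (its resistance to further improvement). The block below is a schematic paraphrase, not a quotation from the book; the constant-recalcitrance step is purely illustrative.

```latex
% Schematic sketch of Bostrom's takeoff heuristic (paraphrase, not a quotation).
% I    = system intelligence/capability
% O(I) = optimization power applied to improving the system
% R(I) = recalcitrance (how hard further improvement is)
\[
  \frac{dI}{dt} \;=\; \frac{O(I)}{R(I)}
\]
% Illustrative fast-takeoff case: once most optimization power comes from the
% system itself, suppose O(I) grows roughly linearly, O(I) \approx k I, while
% recalcitrance stays roughly constant, R(I) \approx r. Then
\[
  \frac{dI}{dt} \;\approx\; \frac{k}{r}\, I
  \qquad\Longrightarrow\qquad
  I(t) \;=\; I(0)\, e^{(k/r)\, t},
\]
% i.e. exponential growth. Slow-takeoff scenarios correspond to recalcitrance
% R(I) rising quickly enough to offset this feedback loop.
```

Under the fast-takeoff reading, the window between roughly human-level and vastly superhuman capability may be short, which is why the book treats post-hoc correction as unreliable and emphasizes getting goals right in advance.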

Cited by 2 pages

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB

## Customers who viewed this item also viewed

1. [Life 3.0: Being Human in the Age of Artificial Intelligence](https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101970316/ref=pd_lpo_d_sccl_1/140-2356137-2323544?pd_rd_w=xokyT&content-id=amzn1.sym.4c8c52db-06f8-4e42-8e56-912796f2ea6c&pf_rd_p=4c8c52db-06f8-4e42-8e56-912796f2ea6c&pf_rd_r=QKKHMNS1HVHQKPHDCF9T&pd_rd_wg=BTR6S&pd_rd_r=57046fb8-90a5-412c-982b-1620b12e3812&pd_rd_i=1101970316&psc=1) (4.4 out of 5 stars, 6,146 ratings, $10.92)
2. [The Alignment Problem: Machine Learning and Human Values](https://www.amazon.com/Alignment-Problem-Machine-Learning-Values/dp/0393868338/ref=pd_lpo_d_sccl_2/140-2356137-2323544?pd_rd_w=xokyT&content-id=amzn1.sym.4c8c52db-06f8-4e42-8e56-912796f2ea6c&pf_rd_p=4c8c52db-06f8-4e42-8e56-912796f2ea6c&pf_rd_r=QKKHMNS1HVHQKPHDCF9T&pd_rd_wg=BTR6S&pd_rd_r=57046fb8-90a5-412c-982b-1620b12e3812&pd_rd_i=0393868338&psc=1)
... (truncated, 98 KB total)
Resource ID: 29e83038187711cc | Stable ID: OGJhNjNhMz