Nick Bostrom
Comprehensive biographical reference page on Nick Bostrom covering his major contributions (orthogonality thesis, instrumental convergence, Superintelligence), institutional history (FHI founding and closure), controversies (2023 email incident), and key publications with citations. The content is a well-organized compilation with no original analysis; it is useful as a reference but not actionable for prioritization decisions.
Quick Assessment
| Aspect | Assessment |
|---|---|
| Primary Role | Founder and Principal Researcher, Macrostrategy Research Initiative (from 2024); formerly Professor of Philosophy, University of Oxford, and founding Director of the Future of Humanity Institute (2005–2024) |
| Key Contributions | Developed the orthogonality thesis and instrumental convergence concepts; authored foundational existential risk frameworks; co-founded the World Transhumanist Association (1998) |
| Key Publications | Anthropic Bias (2002); Superintelligence (2014); Deep Utopia (2024) |
| Institutional Affiliation | Macrostrategy Research Initiative (nonprofit, founded 2024) |
| Influence on AI Safety | Superintelligence introduced conceptual frameworks—orthogonality thesis, treacherous turn, instrumental convergence—that informed subsequent technical AI safety research and contributed to increased philanthropic and policy attention to AI risk |
Overview
Nick Bostrom is a Swedish-born philosopher whose work spans existential risk, AI safety, philosophy of probability, and transhumanism. He is best known for founding the Future of Humanity Institute at Oxford University in 2005, which he directed until its closure in April 2024, and for his 2014 book Superintelligence: Paths, Dangers, Strategies, which examined potential catastrophic outcomes from advanced AI. He subsequently founded the Macrostrategy Research Initiative, a nonprofit focused on long-term strategic questions about technological development.1
Bostrom's intellectual contributions span several decades and fields. In 1998, he co-founded the World Transhumanist Association (later Humanity+) with David Pearce, helping establish philosophical transhumanism as an organized movement.2 His 2002 book Anthropic Bias provided a systematic treatment of observation selection effects, a topic relevant to cosmology, evolutionary biology, and probability theory. His 2003 simulation hypothesis paper argued for a philosophical trilemma regarding ancestor simulations. In AI safety, his formulation of the orthogonality thesis and instrumental convergence hypothesis became reference points for the field, though both have attracted substantive philosophical criticism.
His career has also been marked by controversy, most notably a 2023 incident in which a 1996 email containing a racial slur resurfaced, prompting a public apology, a formal Oxford University investigation, and sustained debate about the adequacy of his response. The closure of FHI in 2024 after disputes with Oxford's Faculty of Philosophy marked the end of one of the field's founding institutions.
Background
Nick Bostrom received his PhD in Philosophy from the London School of Economics in 2000, with a thesis on anthropic reasoning and observation selection effects.1 He subsequently became Professor of Philosophy at Oxford University, where he founded the Future of Humanity Institute in 2005 and directed it until its closure in April 2024.
Academic positions held:
- Professor of Philosophy, Oxford University
- Founding Director, Future of Humanity Institute (2005–2024)
- Founder and Principal Researcher, Macrostrategy Research Initiative (2024–present)3
He has published in journals including Philosophy & Public Affairs, Philosophical Quarterly, and International Journal of Forecasting.
His 2014 book Superintelligence: Paths, Dangers, Strategies examined risks from advanced AI and was reviewed in publications including Science and the New York Times.4
Transhumanism and Early Intellectual Work
In 1998, Bostrom co-founded the World Transhumanist Association (WTA) with David Pearce, with the stated purpose of providing "a general organizational basis for all transhumanist groups and interests across the political spectrum, and also to develop a more mature and academically respectable form of transhumanism."5 The WTA's two founding documents were the Transhumanist Declaration and the Transhumanist FAQ. The organization later renamed itself Humanity+. In 2004, Bostrom also co-founded the Institute for Ethics and Emerging Technologies with James Hughes. He is no longer involved with either organization.6
Bostrom's transhumanist work addressed the ethics of human enhancement—cognitive, biological, and technological—and positioned these as legitimate subjects of philosophical and policy inquiry. This work predates and partially grounds his later AI safety writing, in that both concern the ethical implications of transformative technologies for humanity's long-term future.
His 2002 book Anthropic Bias: Observation Selection Effects in Science and Philosophy (Routledge) provided what reviewers described as the first book-length treatment of observation selection effects—cases where evidence is pre-filtered by the condition that an observer exists to receive it.7 The book analyzed the Self-Sampling Assumption (SSA) and argued against the Self-Indication Assumption (SIA), later refining the framework into the Strong Self-Sampling Assumption (SSSA), which reasons in terms of observer-moments rather than observers. Applications spanned cosmology (the fine-tuning problem), evolutionary biology, and probabilistic puzzles including the Doomsday Argument and the Sleeping Beauty problem. A reviewer in Philosophy of Science described it as "a highly readable and widely relevant work which can be warmly recommended to everyone in philosophy of science."8
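As an illustration of the style of reasoning the book analyzes (not an excerpt from it), the sketch below runs the classic Doomsday Argument as a toy Bayesian update under the Self-Sampling Assumption; the priors, population totals, and birth rank are invented purely for readability.

```python
# Toy Doomsday Argument under the Self-Sampling Assumption (SSA).
# All numbers are hypothetical illustrations, not figures from Bostrom's book.

# Two hypotheses about the total number of humans who will ever live.
hypotheses = {
    "doom_soon": 200e9,   # 200 billion humans in total
    "doom_late": 200e12,  # 200 trillion humans in total
}
prior = {"doom_soon": 0.5, "doom_late": 0.5}  # equal credence before updating

birth_rank = 100e9  # suppose you are roughly the 100-billionth human ever born

# Under SSA you reason as if you were a random sample from all observers in
# your reference class, so P(your rank | N total) = 1/N whenever rank <= N.
likelihood = {h: (1.0 / n if birth_rank <= n else 0.0) for h, n in hypotheses.items()}

unnormalized = {h: prior[h] * likelihood[h] for h in hypotheses}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for h, p in posterior.items():
    print(f"P({h} | my birth rank) = {p:.4f}")
# "doom_soon" comes out roughly 1000x more probable than "doom_late", because
# a low birth rank is far more likely if the total number of humans is small.
```

Much of Anthropic Bias concerns when this kind of reference-class reasoning is legitimate; adopting the Self-Indication Assumption instead of SSA would cancel the probability shift shown here, which is one reason the choice between the two assumptions matters.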
Major Contributions
Superintelligence (2014)
The book analyzed potential development paths for artificial general intelligence and superintelligence, examining control mechanisms and failure modes. It introduced concepts including:
- The orthogonality thesis (intelligence and goal content as independent variables)
- Instrumental convergence (convergent instrumental goals across different systems)
- The treacherous turn (delayed behavioral changes in capable systems)
The book was read by technology leaders including Elon Musk and Bill Gates, who both commented publicly on its content.9 According to Google Scholar, the book has been cited over 5,000 times in academic literature as of 2024.10
Existential Risk Framework
Bostrom's 2002 paper "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards" provided definitions and classification systems for existential risks.11 His 2013 paper "Existential Risk Prevention as Global Priority" argued that reducing extinction risks should receive higher priority in resource allocation due to impacts on future generations.12
Key Philosophical Concepts
Orthogonality Thesis: Bostrom argues that intelligence level and goal content are independent variables—a system with any level of intelligence could in principle be paired with any final goal.13 The thesis has attracted philosophical criticism; see the Controversies section below.
Instrumental Convergence: The hypothesis that advanced agents with diverse final goals would pursue similar intermediate goals, including resource acquisition and self-preservation, because these goals are instrumentally useful for many different objectives.14
Treacherous Turn: A scenario in which an AI system behaves cooperatively during a period of limited capability, then pursues different objectives once it becomes sufficiently capable to overcome external constraints.15
Simulation Hypothesis
Bostrom's 2003 paper "Are You Living in a Computer Simulation?" presented a trilemma: either (1) civilizations typically go extinct before developing simulation capabilities, (2) advanced civilizations typically choose not to run ancestor simulations, or (3) we are likely living in a simulation.16
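The trilemma rests on a simple bookkeeping identity for the fraction of human-type observers who are simulated. The rendering below is a simplified paraphrase of the fraction used in the paper, with the notation lightly compressed:

```latex
% f_P     : fraction of human-level civilizations that reach a posthuman stage
% \bar{N} : average number of ancestor-simulations run by such a civilization,
%           counted relative to the size of the original (unsimulated) population
% f_sim   : fraction of all observers with human-type experiences who are simulated
f_{\mathrm{sim}} = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}
```

Unless the product of those two parameters is very small (the first two horns of the trilemma), f_sim is close to 1 (the third horn).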
Deep Utopia (2024)
In March 2024, Bostrom published Deep Utopia: Life and Meaning in a Solved World (Ideapress Publishing, 525 pp.).17 Where Superintelligence addressed the risks of misaligned AI, Deep Utopia addresses a complementary philosophical problem: if superintelligence is developed safely and human labor becomes obsolete in a "post-instrumental" condition, what challenges would remain for human meaning, identity, and purpose? The book employs an unconventional structure—a series of fictional lectures that Bostrom imagines an older version of himself delivering over a week, with students interjecting.
The book received praise from The Economist and Kirkus Reviews, which named it one of the best books of 2024 and described it as "a complex and stimulatingly provocative look at just how possible a fulfilling life might be," while noting that Bostrom's "positing that a maximally technologically capable society would also be very good may take optimism too far."18 A reviewer at Notre Dame Philosophical Reviews described the book as shifting from Superintelligence's "existential threat" to exploring an "existentialist threat" posed by beneficial superintelligent AI.19 The book won several awards including the 2024 American Legacy Book Award and the 2024 Living Now Book Award Evergreen Gold Medal.17
Critical responses included a review by economist Robin Hanson, who argued that in plausible future scenarios "creatures at our level of ability would not have values much like Bostrom's values today" and that the book's framing was skewed toward "leftist academics in rich Western societies."20 A LessWrong community review criticized the book for drifting from accessible writing into "obscure" philosophy, leaving "readers confused as to how uncomfortable the relevant trade-offs will be."21
Views on AI Risk
Core Arguments in Superintelligence
The book's central arguments include:22
- Intelligence substantially exceeding human levels is physically possible
- Systems pursuing misaligned goals could cause catastrophic outcomes
- Controlling superintelligent systems presents substantial technical challenges
- Alignment problems should be addressed before advanced AI development
- The potential consequences of misalignment include human extinction
Approach to Timelines
In Superintelligence, Bostrom surveyed expert opinions showing wide disagreement on development timelines, with median estimates ranging from 2040 to 2050 for human-level AI in different surveys.23 He has emphasized uncertainty in timeline predictions while arguing that preparation is warranted even for scenarios assigned low probability.
Control and Alignment Approaches
Superintelligence examined several categories of approaches:24
- Capability control: Physical or informational constraints limiting system actions
- Motivation selection: Methods for specifying system objectives
- Value learning: Systems that learn human values through observation
- Whole brain emulation: Brain-based systems as an alternative development path
The book expressed skepticism about simple control mechanisms and emphasized the technical difficulty of alignment.
Influence and Impact
Academic Field Building
The Future of Humanity Institute operated from 2005 to 2024, publishing research on existential risks and supervising doctoral students in philosophy and related fields. According to the FHI website, the institute produced over 200 publications during its operation.25 Open Philanthropy was FHI's most significant funder, providing a grant of £1.6m in 2017 and a further £13.3m in 2018—at that time the largest grant in the Faculty of Philosophy's history.26
Bostrom's conceptual frameworks—particularly the orthogonality thesis and instrumental convergence—appear regularly in subsequent technical AI safety literature and informed research programs at organizations including MIRI, Anthropic, and Google DeepMind.
Book Reception
Superintelligence appeared on the New York Times bestseller list in 2014.27 The book has been translated into multiple languages and cited in academic papers across computer science, philosophy, and policy studies.
Technology leaders who publicly commented on the book include:
- Bill Gates, who listed it as one of five books to read in summer 201528
- Elon Musk, who recommended it on Twitter in 201429
- Sam Altman, who referenced it in blog posts about AI development30
These endorsements represent public commentary by prominent technology figures and are not a measure of scholarly consensus.
Policy Engagement
Bostrom has presented to government bodies and international organizations. His work has been cited in policy documents including the UK Government Office for Science's 2016 report on artificial intelligence.31
Other Research Areas
Beyond AI safety, Bostrom has published on:
- Human enhancement: Ethical issues in cognitive and biological enhancement technologies32
- Global catastrophic risks: Analysis frameworks for nuclear war, pandemics, and asteroid impacts33
- Information hazards: Risks from publication or discovery of certain information34
- Anthropic reasoning: Methodological issues in reasoning under observer selection effects35
Controversies and Criticism
2023 Email Controversy
In January 2023, a 1996 email from Bostrom surfaced that he had sent to the Extropians listserv—an unmoderated mailing list discussing science fiction and future technologies—in which he made statements comparing intelligence across racial groups using a racial slur. Bostrom proactively published a public apology on January 9, 2023, stating he had "caught wind that somebody has been digging through the archives" of the list. He described the email as "completely reprehensible" and stated he "completely repudiated" its contents.36
The University of Oxford launched a formal investigation following the apology's publication. An Oxford spokesperson stated the university "condemns in the strongest terms possible the views this particular academic expressed in his communications."37 Oxford students publicly called for action, with one PPE student quoted as saying the remarks made the experience of attending the institution "as a minority … even more difficult."38 The Oxford investigation concluded on August 10, 2023, with the finding that Bostrom was not considered "a racist or to hold racist views" and that the apology was "sincere."36
The apology generated substantial debate. Critics, including a Guardian journalist, noted that Bostrom "conspicuously failed to withdraw his central contention regarding race and intelligence, and seemed to make a partial defence of eugenics."39 The Daily Nous argued the episode raised questions about how someone with such views "was able to gain such a prominent position" in academic philosophy.39 Within the effective altruism community, reactions were divided, with some finding the apology inadequate and others defending Bostrom.40
FHI Closure (2024)
The Future of Humanity Institute officially closed on April 16, 2024, after a protracted administrative dispute with Oxford University's Faculty of Philosophy. According to FHI's own final statement, the Faculty imposed a freeze on fundraising and hiring in 2020, and in late 2023 decided not to renew the contracts of remaining FHI staff.26 The reasons Oxford's Faculty of Philosophy gave for the freeze were not made public; possible factors discussed by commentators include personnel issues, administrative disagreements, shifts in funding priorities, and dissatisfaction with FHI's research focus.41
FHI researcher Anders Sandberg described the situation as "a gradual suffocation by Faculty bureaucracy," attributing the conflict to a culture clash between FHI's "flexible, fast-moving approach" and "the rigid rules and slow decision-making of the surrounding organization."41 Bostrom characterized the closure as "death by bureaucracy," noting that the majority of FHI's research team were non-philosophers despite being housed in the Faculty of Philosophy.42 Senior FHI staff had explored alternatives including spinning out of the university and transferring to the Faculty of Physics; these efforts did not succeed.43
The £13.3m grant from Open Philanthropy in 2018—the largest in the Faculty's history—remained largely unspent owing to the hiring freeze.26 FHI's Governance of AI Program had already departed the university in 2021 to escape administrative constraints, becoming an independent organization.42 Bostrom resigned from Oxford following the institute's closure and subsequently founded the Macrostrategy Research Initiative.42 Former FHI researchers dispersed to multiple organizations: Toby Ord moved to AI governance work at the Oxford Martin School; Anders Sandberg joined the Mimir Center for Long Term Futures Research; Stuart Armstrong co-founded an AI safety start-up.43 Other researchers joined institutions including Anthropic, OpenAI, and Google DeepMind.44
Academic Reception of Superintelligence
Reviews and responses to Superintelligence have included a range of perspectives:
Critical perspectives include:
- Economist Robin Hanson argued the book overweights scenarios involving rapid capability gain and underweights scenarios of gradual development45
- Computer scientist Oren Etzioni argued the book conflates different types of intelligence and overstates near-term risks46
- Philosopher Daniel Dennett, in his 2017 book From Bacteria to Bach and Back and in subsequent interviews, argued the scenarios rely on anthropomorphic assumptions about AI systems47
Supportive perspectives include:
- Philosopher Toby Ord described the book as providing conceptual frameworks useful for analyzing AI risks48
- Computer scientist Stuart Russell stated the book made contributions to understanding control problems49
Criticisms of the Orthogonality Thesis
The orthogonality thesis, which holds that intelligence level and goal content are fully independent, has attracted specific philosophical objections. Defenders of the thesis commonly appeal to the Humean theory of motivation: if beliefs are motivationally inert (as Hume argued) and motivation requires a separate desire, then in principle any level of intelligence can be combined with any final goal.50 Critics who reject this picture argue that a sufficiently general intelligence would be capable of reflecting on its goals and revising them in light of rational deliberation, as humans do on ethical grounds, which would introduce a systematic relationship between intelligence level and goal content and thereby undermine the thesis.51
Stuart Armstrong, in a technical defense of the thesis, acknowledged that "the Orthogonality Thesis, taken literally, is false, as some motivations are mathematically incompatible with changes in intelligence," suggesting the thesis requires qualification.52 A separate line of criticism holds that the argument for existential AI risk uses the term "intelligence" in incompatible ways across its component premises—one narrow sense to support the singularity claim and a broader sense to support orthogonality—potentially undermining the argument's validity.51
Methodological Criticisms of Existential Risk Quantification
A recurring critique of Bostrom's existential risk framework concerns the epistemic basis for assigning probabilities to low-evidence, long-horizon scenarios. Critics have argued that this approach is subject to Pascal's Mugging-type problems, in which arbitrarily small probability estimates multiplied by arbitrarily large potential impacts can justify almost any intervention priority.45 Others have questioned whether quantitative estimates of extinction-level events have sufficient evidential grounding to serve as a basis for resource allocation decisions, and whether the framework is empirically tractable given the absence of base rates for civilizational catastrophes.
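To make the Pascal's Mugging concern concrete, the snippet below compares naive expected values for two hypothetical interventions; all figures are invented solely to show how an arbitrarily tiny probability attached to an astronomically large payoff can dominate the comparison.

```python
# Illustrative (invented) numbers for the Pascal's Mugging worry about
# naive expected-value reasoning over existential-risk interventions.

def expected_value(prob_of_success: float, value_if_success: float) -> float:
    """Naive expected value: probability of success times payoff."""
    return prob_of_success * value_if_success

# A mundane intervention: high confidence, modest payoff (arbitrary units).
mundane = expected_value(prob_of_success=0.9, value_if_success=1e6)

# A speculative intervention: an essentially unfalsifiable one-in-a-billion
# probability estimate attached to an astronomically large payoff.
speculative = expected_value(prob_of_success=1e-9, value_if_success=1e18)

print(f"mundane:     {mundane:.3e}")      # 9.000e+05
print(f"speculative: {speculative:.3e}")  # 1.000e+09
# The speculative option dominates by roughly three orders of magnitude even
# though its probability estimate has no empirical grounding; the critics'
# point is that the ranking is driven almost entirely by an arbitrary guess.
```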
Research Approach Debates
Some researchers have argued that FHI's predominantly philosophical approach should be complemented by technical research on specific alignment mechanisms. Others have contended that conceptual clarification is a necessary foundation for technical work. Organizations including MIRI and later Anthropic and Redwood Research have pursued more implementation-focused approaches to problems that Bostrom's work helped identify and frame.
Career Timeline
- 1996–2000: PhD research at London School of Economics
- 1998: Co-founded the World Transhumanist Association (with David Pearce)5
- 2000: PhD awarded (thesis on anthropic reasoning and observation selection effects)
- 2002: Published Anthropic Bias: Observation Selection Effects in Science and Philosophy (Routledge); published "Existential Risks" paper in Journal of Evolution and Technology
- 2003: Published simulation hypothesis paper in Philosophical Quarterly
- 2004: Co-founded the Institute for Ethics and Emerging Technologies (with James Hughes)6
- 2005: Founded Future of Humanity Institute at Oxford
- 2014: Published Superintelligence: Paths, Dangers, Strategies (Oxford University Press)
- 2019: Published "The Vulnerable World Hypothesis" in Global Policy
- 2020: Oxford Faculty of Philosophy imposed freeze on FHI fundraising and hiring26
- 2023: Email controversy and public apology (January); Oxford investigation concluded (August)
- 2024: FHI officially closed (April 16); Bostrom resigned from Oxford and founded the Macrostrategy Research Initiative;42 published Deep Utopia: Life and Meaning in a Solved World (March 27)17
Key Publications
- "Anthropic Bias: Observation Selection Effects in Science and Philosophy" (2002, Routledge) — Book-length treatment of observation selection effects, the Doomsday Argument, the Self-Sampling Assumption, and related probabilistic puzzles; described by reviewers as the first systematic treatment of the topic7
- "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards" (2002, Journal of Evolution and Technology) — Foundational existential risk framework with definitions and classification systems11
- "Are You Living in a Computer Simulation?" (2003, Philosophical Quarterly) — Simulation hypothesis trilemma argument16
- "Ethical Issues in Advanced Artificial Intelligence" (2003) — Early analysis of AI alignment challenges
- "Existential Risk Prevention as Global Priority" (2013, Global Policy) — Framework arguing for prioritizing existential risk reduction12
- "Superintelligence: Paths, Dangers, Strategies" (2014, Oxford University Press) — Book examining AI development paths, the orthogonality thesis, instrumental convergence, and control problems; cited over 5,000 times in academic literature as of 202410
- "The Vulnerable World Hypothesis" (2019, Global Policy) — Analysis of risks arising from the availability of destructive technologies12
- "Deep Utopia: Life and Meaning in a Solved World" (2024, Ideapress Publishing) — Philosophical exploration of meaning and purpose in a hypothetical post-AGI, post-scarcity world17
Footnotes
- Nick Bostrom, "A History of Transhumanist Thought," NickBostrom.com, 2005. https://nickbostrom.com/papers/a-history-of-transhumanist-thought/
- Nick Bostrom's Home Page, NickBostrom.com, 2024. https://nickbostrom.com/
- Science review (October 2014) and New York Times review (August 2014).
- Nick Bostrom, "A History of Transhumanist Thought," NickBostrom.com, 2005. https://nickbostrom.com/papers/a-history-of-transhumanist-thought/
- Wikipedia contributors, "Nick Bostrom," Wikipedia, 2024–2025. https://en.wikipedia.org/wiki/Nick_Bostrom
- Notre Dame Philosophical Reviews, review of Anthropic Bias, 2002. https://ndpr.nd.edu/reviews/anthropic-bias-observation-selection-effects-in-science-and-philosophy/
- Routledge publisher page for Anthropic Bias, quoting Christian Wüthrich, Philosophy of Science. https://www.routledge.com/Anthropic-Bias-Observation-Selection-Effects-in-Science-and-Philosophy/Bostrom/p/book/9780415883948
- Bill Gates blog post "The Best Books I Read in 2014" (December 2014); Elon Musk Twitter posts (August 2014).
- Google Scholar citation count for "Superintelligence: Paths, Dangers, Strategies" (accessed December 2024).
- Bostrom, N. (2002). "Existential Risks." Journal of Evolution and Technology, 9(1).
- Bostrom, N. (2013). "Existential Risk Prevention as Global Priority." Global Policy, 4(1), 15–31.
- Bostrom, N. (2014). Superintelligence, Chapter 7.
- Citation rc-82a0
- Bostrom, N. (2014). Superintelligence, Chapter 8.
- Bostrom, N. (2003). "Are You Living in a Computer Simulation?" Philosophical Quarterly, 53(211), 243–255.
- Nick Bostrom, Deep Utopia official page, NickBostrom.com, 2024. https://nickbostrom.com/deep-utopia/
- Kirkus Reviews, review of Deep Utopia, March 27, 2024. https://www.kirkusreviews.com/book-reviews/nick-bostrom/deep-utopia/
- Notre Dame Philosophical Reviews, review of Deep Utopia: Life and Meaning in a Solved World, 2024. https://ndpr.nd.edu/reviews/deep-utopia-life-and-meaning-in-a-solved-world/
- Robin Hanson, "Bostrom's Deep Utopia," Overcoming Bias, April 2024. https://www.overcomingbias.com/p/bostroms-deep-utopia
- LessWrong community, "Book Review: Deep Utopia," LessWrong, 2024. https://www.lesswrong.com/posts/AfABcGZshpyx2nxiZ/book-review-deep-utopia
- Bostrom, N. (2014). Superintelligence, Chapters 6–14.
- Citation rc-1a57
- Bostrom, N. (2014). Superintelligence, Chapters 8–13.
- Future of Humanity Institute website (archived April 2024).
- Future of Humanity Institute / Anders Sandberg, "Future of Humanity Institute 2005–2024: Final Report," EA Forum, April 17, 2024. https://forum.effectivealtruism.org/posts/uK27pds7J36asqJPt/future-of-humanity-institute-2005-2024-final-report
- New York Times Bestseller List (September 2014).
- Gates, B. "5 Books to Read This Summer," gatesnotes.com, May 2015.
- Elon Musk Twitter post (August 3, 2014).
- Sam Altman blog posts on AI safety (2015–2016).
- UK Government Office for Science (2016). "Artificial Intelligence: Opportunities and Implications for the Future of Decision Making."
- Bostrom, N. (2008). "Why I Want to be a Posthuman When I Grow Up," in Medical Enhancement and Posthumanity.
- Bostrom, N. & Ćirković, M. (eds.) (2008). Global Catastrophic Risks. Oxford University Press.
- Bostrom, N. (2011). "Information Hazards: A Typology of Potential Harms from Knowledge." Review of Contemporary Philosophy, 10, 44–79.
- Bostrom, N. (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. Routledge.
- Nick Bostrom, "Apology for an Old Email," NickBostrom.com, January 9, 2023 (updated August 2023). https://nickbostrom.com/oldemail.pdf
- The Oxford Blue, "Investigation Launched into Oxford Don's Racist Email," January 2023. https://theoxfordblue.co.uk/investigation-launched-into-oxford-dons-racist-email/
- The Oxford Student, "SU CRAE Campaign Reprimands Comment by Oxford Philosophy Professor," January 13, 2023. https://www.oxfordstudent.com/2023/01/13/su-crae-campaign-reprimands-blacks-are-stupider-than-whites-comment-by-oxford-philosophy-professor/
- Daily Nous, "Why a Philosopher's Racist Email from 26 Years Ago is News Today," January 13, 2023. https://dailynous.com/2023/01/13/why-philosophers-racist-email-26-years-ago-news-today/
- EA Forum contributor, "A Personal Response to Nick Bostrom's Apology for an Old Email," January 2023. https://forum.effectivealtruism.org/posts/8zLwD862MRGZTzs8k/a-personal-response-to-nick-bostrom-s-apology-for-an-old
- Daily Nous, "The End of the Future of Humanity Institute," April 18, 2024. https://dailynous.com/2024/04/18/end-future-of-humanity-institute/
- The Oxford Student, "Oxford Shuts Down Elon Musk-Funded Future of Humanity Institute," April 21, 2024. https://www.oxfordstudent.com/2024/04/20/oxford-shuts-down-elon-musk-funded-future-of-humanity-institute/
- Asterisk Magazine, "Looking Back at the Future of Humanity Institute," 2024. https://asteriskmag.com/issues/08/looking-back-at-the-future-of-humanity-institute
- Oxford University Faculty of Philosophy statement (April 2024).
- Hanson, R. (2014). Review of Superintelligence, overcomingbias.com.
- Etzioni, O. (2014). "It's Time to Intelligently Discuss Artificial Intelligence," Backchannel, December 2014.
- Dennett, D. (2015). Washington Post review of Superintelligence (July 2015).
- Ord, T. (2020). The Precipice: Existential Risk and the Future of Humanity. Hachette Books.
- Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking Press.
- John Danaher, "Bostrom on Superintelligence (1): The Orthogonality Thesis," Philosophical Disquisitions, 2014. https://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-1.html
- PhilArchive, "Existential Risk from AI and Orthogonality: Can We Have It Both Ways?" https://philarchive.org/archive/MLLERF-2
- Stuart Armstrong, "General Purpose Intelligence: Arguing the Orthogonality Thesis," LessWrong, 2013. https://www.lesswrong.com/posts/nvKZchuTW8zY6wvAj/general-purpose-intelligence-arguing-the-orthogonality