The content intelligence: an argument against the lethality of artificial intelligence | Discover Artificial Intelligence
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Springer
This peer-reviewed journal article challenges the existential risk narrative by questioning the applicability of human intelligence concepts to AI systems and by critiquing foundational AI safety arguments such as the Orthogonality Thesis. It represents an important counterargument in AI safety discourse.
Paper Details
Metadata
Summary
This paper challenges the existential risk narrative surrounding artificial intelligence by examining the concept of intelligence itself and its applicability to contemporary AI systems. The author analyzes Eliezer Yudkowsky's arguments about AI lethality and argues that both weak and strong AI systems, lacking human-defined goals, would not inherently pose existential threats to humanity. The paper questions the validity of the Orthogonality Thesis and suggests that concerns about AI alignment may be overstated, while also exploring the theoretical possibility of artificial life through modular mind-function emulation.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Eliezer Yudkowsky | Person | 35.0 |
Cached Content Preview
# The content intelligence: an argument against the lethality of artificial intelligence
- Perspective
- [Open access](https://www.springernature.com/gp/open-science/about/the-fundamentals-of-open-access-and-open-research)
- Published: 22 February 2024
- Volume 4, article number 13 (2024)
[Download PDF](https://link.springer.com/content/pdf/10.1007/s44163-024-00112-9.pdf)
## Abstract
This paper navigates artificial intelligence’s recent advancements and increasing media attention. A notable focus is placed on Eliezer Yudkowsky, a leading figure within the domain of artificial intelligence alignment, who aims to bridge the understanding gap between public perceptions and rationalist viewpoints on artificial intelligence technology. The analysis examines his predicted course of action for artificial intelligence, as outlined in his unpublished paper _AGI Ruin: A List of Lethalities_. This is achieved by attempting to understand the concept of intelligence itself and identifying a reasonable working definition of it. That definition is then applied to contemporary artificial intelligence capabilities and developments to assess its applicability to these technologies. This paper finds that contemporary artificial intelligence systems are, to some extent, intelligent. However, it argues that both weak and strong artificial intelligence systems, devoid of human-defined goals, would not inherently pose existential threats to humanity, challenging notions of artificial intelligence alignment and calling into question the validity of Nick Bostrom’s Orthogonality Thesis. Furthermore, the possibility of artificial life created through the method of assembling various modules each emula
... (truncated, 45 KB total)