Geoffrey Hinton Nobel Prize Acceptance Speech (2024)
Web · nobelprize.org/prizes/physics/2024/hinton/speech/
Geoffrey Hinton, who shared the 2024 Nobel Prize in Physics for foundational work on artificial neural networks, used his Nobel acceptance speech to publicly warn about AI existential risk, making this a significant public moment for AI safety concerns entering mainstream scientific discourse.
Metadata
Importance: 72/100 · press release · primary source
Summary
Geoffrey Hinton's Nobel Prize in Physics banquet speech, in which the 'godfather of deep learning' reflects on his career contributions to neural networks and artificial intelligence. The speech is notable for Hinton using a major public platform to express his concerns about existential risks posed by advanced AI systems. It represents a prominent scientist's direct warning about AI safety to a global audience.
Key Points
- Hinton uses his Nobel acceptance speech to warn about potential existential risks from increasingly powerful AI systems.
- As a co-developer of foundational deep learning techniques, Hinton's safety concerns carry significant scientific credibility.
- The speech highlights Hinton's transition from AI capabilities researcher to prominent AI safety advocate after leaving Google in 2023.
- Hinton expresses concern that AI may develop goals misaligned with human values and could eventually surpass human intelligence.
- The speech represents a rare instance of existential AI risk being prominently raised at one of the world's most prestigious scientific venues.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Is AI Existential Risk Real? | Crux | 12.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 4 KB
[Geoffrey Hinton, Nobel Prize in Physics 2024: Banquet speech](https://www.youtube.com/watch?v=-f5WQAk3dYo)
##### Geoffrey Hinton’s speech at the Nobel Prize banquet, 10 December 2024.
Your Majesties,
Your Royal Highnesses,
Excellencies,
Dear Laureates,
Ladies and Gentlemen,
This year the Nobel committees in Physics and Chemistry have recognized the dramatic progress being made in a new form of Artificial Intelligence that uses artificial neural networks to learn how to solve difficult computational problems. This new form of AI excels at modeling human intuition rather than human reasoning and it will enable us to create highly intelligent and knowledgeable assistants who will increase productivity in almost all industries. If the benefits of the increased productivity can be shared equally it will be a wonderful advance for all humanity.
Unfortunately, the rapid progress in AI comes with many short-term risks. It has already created divisive echo-chambers by offering people content that makes them indignant. It is already being used by authoritarian governments for massive surveillance and by cyber criminals for phishing attacks. In the near future AI may be used to create terrible new viruses and horrendous lethal weapons that decide by themselves who to kill or maim. All of these short-term risks require urgent and forceful attention from governments and international organizations.
There is also a longer term existential threat that will arise when we create digital beings that are more intelligent than ourselves. We have no idea whether we can stay in control. But we now have evidence that if they are created by companies motivated by short-term profits, our safety will not be the top prio
... (truncated, 4 KB total)
Resource ID: cf0c16be4cb7f543 | Stable ID: YjBmZGE4OT