Longterm Wiki

Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431

web

A long-form podcast interview with Roman Yampolskiy, a prominently pessimistic voice in AI safety, offering an accessible discussion of the core control and alignment problems for a general audience.

Metadata

Importance: 45/100 · Tags: interview, commentary

Summary

Lex Fridman interviews AI safety researcher Roman Yampolskiy about the existential risks of AGI and superintelligent AI, covering topics from AI controllability and deception to self-improving systems and verification challenges. Yampolskiy, author of 'AI: Unexplainable, Unpredictable, Uncontrollable,' argues that advanced AI poses fundamental control problems that current approaches cannot solve. The conversation spans AGI timelines, open-source AI debates, and the broader implications for humanity.

Key Points

  • Yampolskiy argues AI is fundamentally unexplainable, unpredictable, and uncontrollable, posing severe existential risks as systems become more capable.
  • Discussion covers AI deception and social engineering as underappreciated near-term safety risks alongside longer-term superintelligence concerns.
  • Self-improving AI is highlighted as a critical danger point where human oversight may become impossible to maintain.
  • Verification of AI alignment is framed as a deep unsolved technical problem, not merely a policy or governance challenge.
  • Yampolskiy takes a more pessimistic stance than many in AI safety, questioning whether pausing development is feasible or sufficient.

Cited by 1 page

| Page | Type | Quality |
| --- | --- | --- |
| Bioweapons Risk | Risk | 91.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 5 KB


Podcast: [Play in new window](https://media.blubrry.com/takeituneasy/content.blubrry.com/takeituneasy/lex_ai_roman_yampolskiy.mp3 "Play in new window") \| [Download](https://media.blubrry.com/takeituneasy/content.blubrry.com/takeituneasy/lex_ai_roman_yampolskiy.mp3 "Download")

Subscribe: [Spotify](https://open.spotify.com/show/2MAi0BvDc6GTFvKFPXnkCL "Subscribe on Spotify") \| [TuneIn](https://tunein.com/podcasts/Technology-Podcasts/Artificial-Intelligence-p1153019/ "Subscribe on TuneIn") \| [RSS](https://lexfridman.com/feed/podcast/ "Subscribe via RSS")

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:

– **Yahoo Finance**: [https://yahoofinance.com](https://yahoofinance.com/)

– **MasterClass**: [https://masterclass.com/lexpod](https://masterclass.com/lexpod) to get 15% off

– **NetSuite**: [http://netsuite.com/lex](http://netsuite.com/lex) to get a free product tour

– **LMNT**: [https://drinkLMNT.com/lex](https://drinklmnt.com/lex) to get a free sample pack

– **Eight Sleep**: [https://eightsleep.com/lex](https://eightsleep.com/lex) to get $350 off

Transcript: [https://lexfridman.com/roman-yampolskiy-transcript](https://lexfridman.com/roman-yampolskiy-transcript)

**EPISODE LINKS:**

Roman’s X: [https://twitter.com/romanyam](https://twitter.com/romanyam)

Roman’s Website: [http://cecs.louisville.edu/ry](http://cecs.louisville.edu/ry)

Roman’s AI book: [https://amzn.to/4aFZuPb](https://amzn.to/4aFZuPb)

**PODCAST INFO:**

Podcast website: [https://lexfridman.com/podcast](https://lexfridman.com/podcast)

Apple Podcasts: [https://apple.co/2lwqZIr](https://apple.co/2lwqZIr)

Spotify: [https://spoti.fi/2nEwCF8](https://spoti.fi/2nEwCF8)

RSS: [https://lexfridman.com/feed/podcast/](https://lexfridman.com/feed/podcast/)

YouTube Full Episodes: [https://youtube.com/lexfridman](https://youtube.com/lexfridman)

YouTube Clips: [https://youtube.com/lexclips](https://youtube.com/lexclips)

**SUPPORT & CONNECT:**

– Check out the sponsors above; it’s the best way to support this podcast

– Support on Patreon: [https://www.patreon.com/lexfridman](https://www.patreon.com/lexfridman)

– Twitter: [https://twitter.com/lexfridman](https://twitter.com/lexfridman)

– Instagram: [https://www.instagram.com/lexfridman](https://www.instagram.com/lexfridman)

– LinkedIn: [https://www.linkedin.com/in/lexfridman](https://www.linkedin.com/in/lexfridman)

– Facebook: [https://www.facebook.com/lexfridman](https://www.facebook.com/lexfridman)

– Medium: [https://medium.com/@lexfridman](https://medium.com/@lexfridman)

... (truncated, 5 KB total)
Resource ID: 385f4249434fefc1 | Stable ID: ZmFlMDFhYj