Longterm Wiki

Dan Hendrycks on AI Safety and Existential Risk (Future of Life Institute Podcast)

web

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Future of Life Institute

Dan Hendrycks is director of the Center for AI Safety and creator of widely used safety benchmarks; this podcast bridges his technical ML safety work with broader existential risk concerns, making it useful context for understanding his research agenda.

Metadata

Importance: 58/100 · podcast episode · commentary

Summary

A Future of Life Institute podcast episode featuring Dan Hendrycks discussing AI safety, existential risk from advanced AI systems, and practical approaches to making AI more robust and aligned. Hendrycks, known for his work on benchmarks like MMLU and safety research at the Center for AI Safety, shares his perspectives on near-term and long-term AI risks.

Key Points

  • Dan Hendrycks discusses the landscape of AI existential risk and why he considers it a serious near-term concern
  • Covers practical ML safety approaches including robustness, anomaly detection, and evaluation benchmarks
  • Explores the gap between current AI capabilities research and safety-focused research priorities
  • Addresses how the AI safety community should prioritize efforts to reduce catastrophic and existential risks
  • Connects technical ML safety work (adversarial robustness, benchmarking) to broader existential risk reduction strategies

Cited by 1 page

Page | Type | Quality
FAR AI | Organization | 76.0

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 1 KB

# 404

## Page not found

That page could not be found.


Resource ID: 7d7f635e9eb6e77d | Stable ID: N2U3OGRiMz