80,000 Hours: Toby Ord on The Precipice
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: 80,000 Hours
A long-running podcast from the 80,000 Hours career advice organization; widely listened to in the EA and AI safety communities as a source of accessible, substantive conversations with key researchers and thinkers.
Metadata
Summary
The 80,000 Hours Podcast hosts in-depth interviews with leading researchers and thinkers on AI safety, existential risk, effective altruism, and related high-impact topics. It covers technical AI safety, governance, alignment, superintelligence, AI deception, and emerging risks like AI-nuclear intersections. It serves as an accessible entry point and ongoing reference for the AI safety and EA communities.
Key Points
- Features long-form interviews with prominent AI safety researchers, including Ajeya Cotra, Toby Ord, and MIRI researchers, on topics like deceptive alignment and corrigibility.
- Covers a broad range from technical alignment (superintelligence, gradual disempowerment) to governance and policy (AI nuclear deterrence, AI timelines).
- Includes episodes on adjacent topics like AI welfare research, consciousness, and effective altruism strategy to provide broader context for safety work.
- Freely available across major podcast platforms and YouTube, making cutting-edge safety discussions widely accessible.
- One of the most prominent public-facing media channels for communicating AI safety ideas beyond academic circles.
Cited by 3 pages
| Page | Type | Quality |
|---|---|---|
| 80,000 Hours | Organization | 45.0 |
| Bioweapons Risk | Risk | 91.0 |
| Multipolar Trap (AI Development) | Risk | 91.0 |
Cached Content Preview
# The most important conversations about artificial intelligence you won’t hear anywhere else.
- [Apple Podcasts](https://podcasts.apple.com/us/podcast/80-000-hours-podcast/id1245002988)
- [YouTube](https://www.youtube.com/playlist?list=PL-BRtcBm4Yj4aKn72p4PjyqHh0ZQFdI1A)
- [Spotify](https://open.spotify.com/show/2WzJwXWBDnn4iZ7odKwDib?si=T3Bboj1YQWGc383Tns0FaA)
- [RSS feed](https://feeds.transistor.fm/80000-hours-podcast)

## Selected highlights
- [Ajeya Cotra on accidentally teaching AI to deceive us](https://80000hours.org/podcast/episodes/ajeya-cotra-accidentally-teaching-ai-to-deceive-us/)
- [Toby Ord on inference scaling and AI governance](https://80000hours.org/podcast/episodes/toby-ord-inference-scaling-ai-governance/)
- [Nate Silver on effective altruism, SBF, and the art of risk](https://80000hours.org/podcast/episodes/nate-silver-effective-altruism-sbf-art-of-risk/)
- [David Chalmers on the nature and ethics of consciousness](https://80000hours.org/podcast/episodes/david-chalmers-nature-ethics-consciousness/)
- [Rachel Glennerster on market shaping and incentives](https://80000hours.org/podcast/episodes/rachel-glennerster-market-shaping-incentives/)
- [Christopher Brown on slavery and abolition](https://80000hours.org/podcast/episodes/christopher-brown-slavery-abolition/)
- [Vitalik Buterin on techno-optimism](https://80000hours.org/podcast/episodes/vitalik-buterin-techno-optimism/)
- [Randy Nesse on evolutionary medicine and psychiatry](https://80000hours.org/podcast/episodes/randy-nesse-evolutionary-medicine-psychiatry/)
- [Sharon Hewitt Rawlette on hedonistic utilitarianism](https://80000hours.org/podcast/episodes/sharon-hewitt-rawlette-hedonistic-utilitarianism/)
[See all episodes](https://80000hours.org/podcast/episodes/)
## Latest episodes
- [Sam Winter-Levy and Nikita Lalwani on AI and nuclear deterrence](https://80000hours.org/podcast/episodes/sam-winter-levy-nikita-lalwani-ai-nuclear-deterrence/)