Carl Shulman and colleagues
Credibility Rating
3/5
Good (3): Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: 80,000 Hours
This is Part 2 of a two-part 80,000 Hours podcast interview with Carl Shulman, a prominent researcher on AGI risk and long-run futures; Part 1 covers more technical AI trajectory topics.
Metadata
Importance: 72/100 · podcast episode · analysis
Summary
Carl Shulman discusses how advanced AI could transform governance, including AI advisory systems for policymakers, risks of value lock-in from early AGI deployment, and mechanisms for maintaining democratic resilience. The episode addresses international coordination, AI forecasting capabilities, and why Shulman opposes enforced pauses on AI research.
Key Points
- AI advisory systems could improve policy decisions by providing better forecasting and analysis, with COVID-19 used as a concrete example of where AI could have helped.
- Value lock-in is a central concern: early AGI deployment could entrench particular values or power structures in ways that are difficult or impossible to reverse.
- Democratic institutions may need new mechanisms to resist AI-enabled threats like coups or authoritarian consolidation of power.
- International coordination and auditing frameworks are essential for managing AGI transitions safely across competing nation-states.
- Shulman argues against enforced AI research pauses, believing governance adaptation and careful deployment are more tractable than halting development.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Value Lock-in | Risk | 64.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 15, 2026 · 98 KB
## On this page:
- [Introduction](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#top)
- [1 Highlights](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#highlights)
- [2 Articles, books, and other media discussed in the show](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#articles-books-and-other-media-discussed-in-the-show)
- [3 Transcript](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#transcript)
- [3.1 Cold open \[00:00:00\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#cold-open-000000)
- [3.2 Rob's intro \[00:01:16\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#robs-intro-000116)
- [3.3 The interview begins \[00:03:24\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#the-interview-begins-000324)
- [3.4 COVID-19 concrete example \[00:11:18\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#covid-19-concrete-example-001118)
- [3.5 Sceptical arguments against the effect of AI advisors \[00:24:16\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#sceptical-arguments-against-the-effect-of-ai-advisors-002416)
- [3.6 Value lock-in \[00:33:59\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#value-lock-in-003359)
- [3.7 How democracies avoid coups \[00:48:08\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#how-democracies-avoid-coups-004808)
- [3.8 Where AI could most easily help \[01:00:25\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#where-ai-could-most-easily-help-010025)
- [3.9 AI forecasting \[01:04:30\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#ai-forecasting-010430)
- [3.10 Application to the most challenging topics \[01:24:03\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#application-to-the-most-challenging-topics-012403)
- [3.11 How to make it happen \[01:37:50\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#how-to-make-it-happen-013750)
- [3.12 International negotiations and coordination and auditing \[01:43:54\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#international-negotiations-and-coordination-and-auditing-014354)
- [3.13 Opportunities for listeners \[02:00:09\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#opportunities-for-listeners-020009)
- [3.14 Why Carl doesn't support enforced pauses on AI research \[02:03:58\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#why-carl-doesnt-support-enforced-pauses-on-ai-research-020358)
- [3.15 How Carl is feeling about the future \[02:15:47\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#how-carl-is-feeling-about-the-future-021547)
- [3.16 Rob's outro \[02:17:37\]](https://80000hours.org/podcast/episodes/carl-shulman-society-agi/#robs-outro-021737)
- [4 Learn more](https://80000hou
... (truncated, 98 KB total)
Resource ID: 297ced45b445881c | Stable ID: ZmMxYTI0Zm