Nathan Calvin on California’s AI bill SB 1047 and its potential to shape US AI policy | 80,000 Hours
Web Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: 80,000 Hours
Recorded during the active legislative debate around SB 1047 (which was ultimately vetoed by Governor Newsom in 2024), this episode is a useful reference for understanding early US state-level AI safety regulation efforts and the arguments for and against compute-threshold-based AI governance frameworks.
Metadata
Summary
An 80,000 Hours podcast episode featuring Nathan Calvin discussing California's SB 1047 AI safety bill, which would impose safety requirements on large AI model developers. The conversation covers the bill's provisions, the debate around its merits and risks, and its potential implications for broader US AI governance and policy.
Key Points
- SB 1047 targeted large AI models (those above a compute threshold) and would have required developers to implement safety protocols and conduct risk assessments before deployment.
- The bill generated significant controversy within the AI community: supporters argued it would establish important safety precedents, while critics warned it could stifle innovation.
- California's legislative actions carry outsized national influence because the state hosts most major AI labs, making SB 1047 a potential template for federal AI regulation.
- The episode explores the challenge of regulating frontier AI models when harms are difficult to define and anticipate in advance.
- Nathan Calvin provides an insider perspective on the lobbying, stakeholder dynamics, and political feasibility of AI safety legislation.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Safe and Secure Innovation for Frontier Artificial Intelligence Models Act | Policy | 66.0 |