The Pros and Cons of California's Proposed SB-1047 AI Safety Law | Lawfare
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Lawfare
Published on Lawfare during the legislative debate over California SB-1047 (2024), this piece is useful for understanding the legal structure of frontier AI regulation and the gap between procedural compliance and substantive safety outcomes.
Metadata
Summary
A legal and policy analysis of California's SB-1047 (Safe and Secure Innovation for Frontier AI Models Act), praising it as an important step toward AI safety regulation while identifying key shortcomings, particularly that it holds developers liable only for failing specific precautionary measures rather than for actual harms, leaving gaps when precautions prove insufficient.
Key Points
- SB-1047 covers frontier AI models trained with >10^26 FLOP or equivalent algorithmic efficiency, matching the Biden EO threshold but extending it to cover compute-efficient models.
- The bill offers two compliance pathways: a 'limited duty exemption' for systems lacking dangerous capabilities, and a full safety protocol for those that may have them.
- A central critique is that liability is tied to adopting specific precautionary measures rather than outcomes, reducing incentives to innovate on safety beyond statutory minimums.
- Developers who reasonably but erroneously conclude their model lacks dangerous capabilities are shielded from liability, placing significant weight on self-assessment.
- The author argues policymakers should make developers bear residual risk of harm, not just require compliance with enumerated precautions, since AI safety is technically unsolved.
Cached Content Preview
## [Gabriel Weil](https://www.lawfaremedia.org/contributors/gweil)
California [SB-1047](https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047), the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was introduced by Sen. Scott Wiener in February and passed through the Judiciary Committee and the Committee on Governmental Organization in April. This legislation represents an important first step toward protecting humanity from the risks of advanced artificial intelligence (AI) systems. Wiener and his co-sponsors deserve praise for taking on this critical and neglected issue. Nonetheless, the bill falls short of its promise to protect public safety from the risks posed by frontier AI systems in a few key respects.
These shortcomings all relate to one central fact: AI safety remains unsolved as a technical problem. The best way to encourage frontier AI developers to continually push forward the safety frontier is to make them bear the risk that systems will cause harm. SB-1047 holds AI developers liable only when they fail to adopt specific precautionary measures laid out in the statute. Policymakers should not be confident that these precautionary measures will provide adequate protection for public safety.
SB-1047 would create a new regulatory framework for frontier AI systems, defined as models trained with more than 10^26 floating-point operations (FLOP) of compute. This is the same compute threshold used in the Biden administration’s [Executive Order](https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. No AI system has [yet been trained](https://ourworldindata.org/grapher/artificial-intelligence-training-computation?time=2018-09-17..latest&country=~Decision+tree+(classification)) with 10^26 FLOP, but it is expected that the next generation of frontier models will exceed this figure.
Unlike the federal executive order, however, the California bill also covers systems trained with enough compute that they could reasonably be expected to perform as well as a model trained with 10^26 FLOP in 2024. This provision is an important measure to account for potential improvements in algorithmic efficiency that would allow quite powerful and dangerous models to be trained with less than 10^26 FLOP.
Under the bill, covered systems would have two compliance pathways
... (truncated, 21 KB total)