Longterm Wiki

California's SB 1047 Would Impose New Safety Requirements for Developers of Large-Scale AI Models

web

Legal analysis from Morgan Lewis on SB 1047, a landmark but ultimately vetoed California bill representing one of the first major attempts at state-level frontier AI safety regulation; useful for understanding policy debates around AI governance.

Metadata

Importance: 45/100 · organizational report · analysis

Summary

This Morgan Lewis legal analysis examines California's SB 1047, a proposed bill that would impose safety obligations on developers of large frontier AI models. The piece outlines key compliance requirements, liability provisions, and potential implications for AI companies operating in or developing models for California markets.

Key Points

  • SB 1047 targets developers of large-scale AI models above certain compute thresholds, requiring safety protocols before deployment.
  • The bill would mandate hazard assessments, incident reporting, and 'kill switch' capabilities for covered AI models.
  • Developers could face legal liability if their models are used to cause mass casualties or critical infrastructure attacks.
  • The legislation created significant industry debate about whether state-level AI regulation could stifle innovation or create compliance fragmentation.
  • California's Governor ultimately vetoed SB 1047 in September 2024, citing concerns about regulatory overreach and innovation impact.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 13 KB

LawFlash

# California’s SB 1047 Would Impose New Safety Requirements for Developers of Large-Scale AI Models

August 29, 2024

On August 28, 2024, the California State Assembly passed proposed bill SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which aims to impose new requirements on the development of large AI models by setting out various testing, safety, and enforcement standards. The proposed bill seeks to curb AI’s “potential to be used to create novel threats to public safety and security,” such as weapons of mass destruction and cyberattacks.

The bill will return to the Senate floor for a final vote and, if approved, Governor Gavin Newsom will have until [September 30, 2024](https://www.assembly.ca.gov/schedules-publications/legislative-deadlines#month8) to sign or veto the bill.

### WHICH AI DEVELOPERS WILL BE AFFECTED?

The bill would apply only to developers of “covered models,” a defined term whose computing-power threshold shifts over time. Prior to January 1, 2027, a “covered model” is an AI model that is either (1) trained using computing power “greater than 10^26 integer or floating-point operations” (FLOP) at a cost exceeding $100 million or (2) fine-tuned using computing power of at least 3 × 10^25 integer or floating-point operations at a cost exceeding $10 million. [\[1\]](https://www.morganlewis.com/pubs/2024/08/californias-sb-1047-would-impose-new-safety-requirements-for-developers-of-large-scale-ai-models#_ftn1) This is the same computing threshold set in the Biden administration’s recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. [\[2\]](https://www.morganlewis.com/pubs/2024/08/californias-sb-1047-would-impose-new-safety-requirements-for-developers-of-large-scale-ai-models#_ftn2)

After January 1, 2027, the cost thresholds will remain the same (adjusted for inflation), but the computing-power threshold will be determined by California’s Government Operations Agency. [\[3\]](https://www.morganlewis.com/pubs/2024/08/californias-sb-1047-would-impose-new-safety-requirements-for-developers-of-large-scale-ai-models#_ftn3) Notably, the pre-2027 computing-power threshold exceeds the computing power used to train current AI models, [\[4\]](https://www.morganlewis.com/pubs/2024/08/californias-sb-1047-would-impose-new-safety-requirements-for-developers-of-large-scale-ai-models#_ftn4) but the next generation of highest-capability models is expected to exceed this figure.
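The two-pronged pre-2027 threshold test described above can be sketched as a simple check. This is an illustrative simplification only: the function and variable names are assumptions for exposition, and the statutory definitions contain further nuances not captured here.

```python
# Illustrative sketch of SB 1047's pre-2027 "covered model" test (simplified).
# Prong 1: trained with > 10^26 FLOP at a cost over $100M.
# Prong 2: fine-tuned with >= 3 x 10^25 FLOP at a cost over $10M.

TRAIN_FLOP_THRESHOLD = 1e26          # > 10^26 operations
TRAIN_COST_THRESHOLD = 100_000_000   # > $100 million
TUNE_FLOP_THRESHOLD = 3e25           # >= 3 x 10^25 operations
TUNE_COST_THRESHOLD = 10_000_000     # > $10 million


def is_covered_model(train_flop: float = 0.0, train_cost: float = 0.0,
                     tune_flop: float = 0.0, tune_cost: float = 0.0) -> bool:
    """Return True if either prong of the compute/cost test is met."""
    trained_covered = (train_flop > TRAIN_FLOP_THRESHOLD
                       and train_cost > TRAIN_COST_THRESHOLD)
    tuned_covered = (tune_flop >= TUNE_FLOP_THRESHOLD
                     and tune_cost > TUNE_COST_THRESHOLD)
    return trained_covered or tuned_covered


# A hypothetical frontier run at 2e26 FLOP costing $150M meets prong 1.
print(is_covered_model(train_flop=2e26, train_cost=150e6))  # True
# A smaller run below both thresholds is not covered.
print(is_covered_model(train_flop=5e25, train_cost=200e6))  # False
```

Note that both the compute and cost conditions must hold within a prong; a very expensive but low-compute training run would not, on its own, make a model covered.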

The bill would broadly cover any AI developer that offers its services in California, regardless of where the developer is headquartered.

### KEY TESTING AND SAFETY REQUIREMENTS

The bill sets out various testing and safety requirements, including the following:

- **Shutdown capabilities:** 

... (truncated, 13 KB total)
Resource ID: c671c4320a4adcab | Stable ID: YTIyOWJkZj