Longterm Wiki

California Assembly Privacy and Consumer Protection Committee Analysis

government

Official California legislative committee analysis of SB 1047 (2024), a landmark state-level AI safety bill that sparked major debate about whether subnational governments should regulate frontier AI; ultimately vetoed by Governor Newsom but influential in shaping AI governance discourse.

Metadata

Importance: 72/100 · policy brief · analysis

Summary

This California Assembly Privacy and Consumer Protection Committee analysis examines SB 1047, which would impose comprehensive regulatory requirements on developers of frontier AI models costing $100M+ to train. The bill mandates governance programs, third-party audited risk assessments, whistleblower protections, and creates a new Division of Frontier Models for enforcement. It represents one of the most ambitious state-level AI safety regulatory efforts in the United States.

Key Points

  • Targets AI models costing $100M+ to train, requiring pre-training governance programs and pre-deployment risk assessments with mandatory third-party audits.
  • Creates a new Division of Frontier Models within California's Government Operations Agency to oversee compliance and enforcement.
  • Includes 'know your customer' requirements for large computing clusters to prevent misuse by bad actors.
  • Establishes CalCompute, a public computing resource aimed at democratizing access to AI infrastructure for researchers.
  • Passed California Senate 32-1, reflecting significant legislative momentum, though faced opposition from parts of the AI industry and some researchers.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 98 KB
**SB 1047** Page 1

Date of Hearing: June 18, 2024

# ASSEMBLY COMMITTEE ON PRIVACY AND CONSUMER PROTECTION

Rebecca Bauer-Kahan, Chair

# SB 1047 (Wiener) – As Amended June 5, 2024

AS PROPOSED TO BE AMENDED

**SENATE VOTE: 32-1**

**SUBJECT: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act**

**SYNOPSIS**

_“We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity._

_We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world and other AI experts._

_We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public. However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”_

_The above paragraphs appear at the start of an open letter titled “A Right to Warn about Advanced Artificial Intelligence,” released on June 2, 2024 by a group of current and former OpenAI employees. The letter calls on AI companies to commit to various principles of openness and transparency, highlighting these companies’ “weak obligations” to share their knowledge of AI’s risks with the world._

_This bill seeks to strengthen those obligations in order to mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist. SB 1047, as proposed to be amended, would require the developers of such models – which cost at least $100 million in computing power to train – to create good governance programs before initiating training. Following training, developers would be required to perform risk assessments on their models, subject to third party auditing, before using or releasing them. The bill creates a Division of Frontier Models in the Government Operations Agency to oversee this process. The bill also adds whistleblower protections; requires operators of computer clusters to implement “know your customer” requirements, including the ability to shut down any resources being used to train an advanced AI model; and creates a public computing cluster known as “CalCompute” in the Department of Technology. The Attorney General is charged with enforcing the bill’s requirements._

_Proposed Committee amendments clarify and strengthen the bill’s provisions by, among other things: eliminating the limited duty exemption; adjusting the structure of the Frontier Model Division and placing it under the Board of Frontier Models; enabling the Frontier Model Divi_

... (truncated, 98 KB total)
Resource ID: 8ed00b431a52bee4 | Stable ID: MmQwYjgxMD