A Heated California Debate Offers Lessons for AI Safety Governance
Credibility Rating
4/5
High (4). High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Carnegie Endowment
Relevant to understanding how AI safety regulation is contested at the state level in the US; the SB 1047 veto is a key case study in the political economy of AI governance circa 2024.
Metadata
Importance: 52/100 · opinion piece · commentary
Summary
This Carnegie Endowment commentary analyzes California's SB 1047, a bipartisan AI safety bill that passed the legislature but was vetoed by Governor Newsom in September 2024. It examines the divisions the bill exposed within the AI community and extracts lessons for future AI safety governance efforts at the subnational and national level.
Key Points
- California's SB 1047 aimed to mandate safety testing for frontier AI models before release, addressing risks like weaponization for bioweapons or infrastructure attacks.
- Governor Newsom vetoed the bill on Sept 29, 2024, citing the need for a different approach while affirming safety objectives and promising new AI guardrail initiatives.
- The debate exposed significant rifts among AI researchers, tech companies, and policymakers over the appropriate scope of government regulation.
- The bill's veto offers lessons for proponents of AI safety regulation on how to tailor future legislative efforts more effectively.
- The episode highlights the challenges of subnational AI governance and how state-level debates are watched closely by global policymakers.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| US State AI Legislation Landscape | Analysis | 70.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 20, 2026 · 17 KB
```
{
"authors": [\
"Scott Kohler",\
"Ian Klaus"\
],
"type": "commentary",
"centerAffiliationAll": "dc",
"centers": [\
"Carnegie Endowment for International Peace"\
],
"collections": [\
"Artificial Intelligence",\
"Emerging AI Policy",\
"Violence and Conflict",\
"Tech in Context"\
],
"englishNewsletterAll": "ctw",
"nonEnglishNewsletterAll": "",
"primaryCenter": "Carnegie Endowment for International Peace",
"programAffiliation": "CC",
"programs": [\
"Carnegie California"\
],
"projects": [],
"regions": [\
"United States"\
],
"topics": [\
"AI",\
"Technology",\
"Subnational Affairs"\
]
}
```

Photo by trekandshoot/iStock
Commentary
## A Heated California Debate Offers Lessons for AI Safety Governance
The bill exposed divisions within the AI community, but proponents of safety regulation can heed the lessons of SB 1047 and tailor their future efforts accordingly.
By [Scott Kohler](https://carnegieendowment.org/people/scott-kohler) and [Ian Klaus](https://carnegieendowment.org/people/ian-klaus)
Published on Oct 8, 2024
In late August, the California legislature managed a feat that has eluded the U.S. Congress: passing a [bipartisan bill](https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047) designed to ensure the safe development of advanced artificial intelligence (AI) models. That legislation, Senate Bill (SB) 1047, aimed to regulate frontier technologies emerging from an industry closely tied to California that is now raising hundreds of billions of dollars in investment and promising to reshape work, health care, national security, and even routine tasks of daily life.
On September 29, Governor Gavin Newsom [vetoed the bill](https://www.gov.ca.gov/2024/09/29/governor-newsom-announces-new-initiatives-to-advance-safe-and-responsible-ai-protect-californians/). His decision—following a [pitched debate exposing rifts among AI researchers, technology companies, and policymakers](https://carnegieendowment.org/posts/2024/09/california-sb1047-ai-safety-regulation?lang=en)—was tracked by leaders around the world. In his veto message, while Newsom affirmed his support for the bill’s safety objectives, announcing a new effort to craft guardrails for AI deployment and committing to continue working with the legislature, he ultimately concluded that a different approach was needed.
The problem the bill sought to address, at least in principle, is straightforward: the upcoming generation of frontier models could benefit millions of people. However, they could also risk serious harm to California’s 40 million residents and people around the world. For example, there are worries they could be weaponized to attack critical infra
... (truncated, 17 KB total)
Resource ID: 61d484269e6dbd8c | Stable ID: NWE0M2UxZW