Longterm Wiki

Carnegie Endowment: All Eyes on Sacramento: SB 1047 and the AI Safety Debate

web

Credibility Rating

4/5
High (4)

High quality. Established institution or organization with editorial oversight and accountability.

Rating inherited from publication venue: Carnegie Endowment

A Carnegie Endowment policy analysis of California's SB 1047, a landmark attempted state-level frontier AI safety bill that was vetoed in 2024; useful context for understanding U.S. sub-federal AI governance debates and the politics of frontier model regulation.

Metadata

Importance: 58/100 · opinion piece · analysis

Summary

This Carnegie Endowment analysis examines California's SB 1047 bill, which sought to impose safety requirements on developers of large frontier AI models. It explores the policy debate around state-level AI regulation, the bill's key provisions, stakeholder reactions, and its implications for the broader U.S. AI governance landscape after Governor Newsom vetoed it.

Key Points

  • SB 1047 would have required developers of large AI models (above a compute threshold) to implement safety protocols and conduct risk assessments before deployment.
  • The bill sparked intense debate between AI safety advocates who supported it and industry groups and some researchers who argued it would stifle innovation and misdirect regulatory focus.
  • California's outsized role in the AI industry made SB 1047 a de facto national and international policy flashpoint, drawing attention beyond state borders.
  • Governor Newsom ultimately vetoed the bill, citing concerns about its blunt approach and potential to disadvantage California-based developers without meaningfully improving safety.
  • The episode highlights tensions in AI governance between proactive precautionary regulation of frontier models and more targeted, harm-specific regulatory frameworks.

Cited by 1 page

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 20 KB
```
{
  "authors": [
    "Scott Kohler"
  ],
  "type": "commentary",
  "centerAffiliationAll": "dc",
  "centers": [
    "Carnegie Endowment for International Peace"
  ],
  "collections": [
    "Emerging AI Policy",
    "Tech in Context"
  ],
  "englishNewsletterAll": "ctw",
  "nonEnglishNewsletterAll": "",
  "primaryCenter": "Carnegie Endowment for International Peace",
  "programAffiliation": "CC",
  "programs": [
    "Carnegie California"
  ],
  "projects": [],
  "regions": [
    "United States"
  ],
  "topics": [
    "AI",
    "Subnational Affairs",
    "Technology"
  ]
}
```

![map of the US with lights across it and a focus on California](https://assets.carnegieendowment.org/_/eyJrZXkiOiJzdGF0aWMvbWVkaWEvaW1hZ2VzL2NhbGlmb3JuaWEtYWktbGlnaHRzLXRlY2guanBnIn0=)

Photo by iStock

Commentary

## All Eyes on Sacramento: SB 1047 and the AI Safety Debate

The bill has galvanized a discussion about innovation, safety, and the appropriate role of government—particularly at the subnational level—in AI regulation.



By [Scott Kohler](https://carnegieendowment.org/people/scott-kohler)

Published on Sep 11, 2024

Over the past few years, policymakers around the world have been racing to understand and respond to the remarkable progress of artificial intelligence, and particularly the advent of sophisticated generative models with rapidly expanding capabilities. Significant attention has focused on national and international initiatives—including safety institutes, multilateral summits, and flagship policy measures such as U.S. President Joe Biden’s [AI executive order](https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/) and the European Union’s [AI Act](https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai).

In parallel, subnational policy has emerged as a dynamic and consequential force in its own right. In the United States, states in particular have ramped up their work on tech policy over the past decade, flexing their broad regulatory powers and, in some areas, filling a perceived vacuum of congressional inaction. For example, since 2018, more than a dozen states have passed comprehensive privacy laws, many of which impose heightened consent and governance obligations on the use of autonomous decisionmaking systems. States have also acted to limit misleading or harmful outputs of AI systems, including through new [requirements](https://le.utah.gov/~2024/bills/static/SB0149.html) in Utah that generative AI systems disclose to consumers that they are synthetic, and [legislation](https://legiscan.com/CO/bill/SB205/2024) in Colorado requiring developers and deployers of high-risk AI systems to prevent algorithmic discrimination.

As wide-ranging as state efforts have been, none have generated as much discussion and controversy as [SB 1047](https://leginfo.legislature.ca.gov/faces/bi

... (truncated, 20 KB total)