A New York Legislator Wants to Pick Up the Pieces of the Dead California AI Bill
Credibility Rating
4/5
High (4): High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: MIT Technology Review
Relevant to tracking US state-level AI governance efforts; follows the high-profile defeat of California SB 1047 and signals continued legislative interest in mandatory AI safety requirements.
Metadata
Importance: 45/100 | Type: news article
Summary
Following the veto of California's SB 1047 AI safety bill, a New York state legislator is introducing similar legislation that would impose safety requirements on developers of large AI models. The article covers the political and regulatory landscape around state-level AI governance efforts in the United States.
Key Points
- California's SB 1047, which would have required safety testing and oversight for large AI models, was vetoed by Governor Newsom in 2024.
- A New York legislator is drafting analogous legislation to apply similar AI safety mandates at the state level.
- The effort reflects ongoing tension between AI industry opposition and advocates pushing for mandatory safety standards.
- State-level AI regulation is gaining momentum as federal action on AI governance remains uncertain.
- The bill would likely target frontier AI developers by imposing requirements tied to model compute thresholds.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Safe and Secure Innovation for Frontier Artificial Intelligence Models Act | Policy | 66.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 20, 2026 | 13 KB
Source: https://www.technologyreview.com/2025/01/09/1109875/a-new-york-legislator-wants-to-pick-up-the-pieces-of-the-dead-california-ai-bill/
The first Democrat in New York history with a computer science background wants to revive some of the ideas behind the failed California AI safety bill, [SB 1047](https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047), with a new version in his state that would regulate the most advanced AI models. It’s called the RAISE Act, an acronym for “Responsible AI Safety and Education.”
Assemblymember Alex Bores hopes his bill, which _MIT Technology Review_ has seen in an unpublished draft that is still subject to change, will address many of the concerns that blocked SB 1047 from passing into law.
[SB 1047](https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047) was, at first, thought to be a fairly modest bill that would pass without much fanfare. In fact, it flew through the California statehouse with huge margins and received significant [public support](https://theaipi.org/april-voters-prefer-ai-regulation-over-self-regulation-2-3-2/).
However, before it even landed on Governor Gavin Newsom’s desk for signature in September, it sparked an intense national fight. Google, Meta, and OpenAI came out against the bill, alongside top congressional Democrats like [Nancy Pelosi](https://pelosi.house.gov/news/press-releases/pelosi-statement-opposition-california-senate-bill-1047) and [Zoe Lofgren](https://democrats-science.house.gov/imo/media/doc/2024-08-15%20to%20Gov%20Newsom_SB1047.pdf). Even Hollywood celebrities got involved, with Jane Fonda and Mark Hamill expressing [support](https://artists4safeai.com/) for the bill.
Ultimately, Newsom vetoed SB 1047, effectively killing regulation of so-called frontier AI models not just in California but, with the lack of laws on the [national level](https://www.technologyreview.com/2024/09/18/1104015/here-are-all-the-ai-bills-in-congress-right-now/), anywhere in the US, where the most powerful systems are developed.
Now Bores hopes to revive the battle. The main provisions in the RAISE Act include requiring AI companies to develop safety plans for the development and deployment of their models.
The bill also provides protections for whistleblowers at AI companies. It forbids retaliation against an employee who shares information about an AI model in the belief that it may cause “critical harm”; such whistleblowers can report the information to the New York attorney general. One way the bill defines critical harm is the use of an AI model to create a chemical, biological, radiological, or nuclear weapon that results in the death or serious injury of 100 or more people.
Alternatively, a critical harm could be a use of the AI model that results in 100 or more deaths or at least $1 billion in damages in an act with limited human oversight that if committed by a human would constitute a crime requiring i
... (truncated, 13 KB total)
Resource ID: 179b5aa56b02e44e | Stable ID: GDcTIJGJdg