Longterm Wiki

Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (Wikipedia)

reference

Credibility Rating

3/5 (Good)

Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.

Rating inherited from publication venue: Wikipedia

SB 1047 was a major flashpoint in AI policy debates in 2024; understanding its provisions and veto provides essential context for ongoing discussions about how to regulate frontier AI development at state and federal levels.

Metadata

Importance: 58/100 · wiki page · reference

Summary

Wikipedia article covering California's SB 1047, a landmark proposed state-level AI safety bill that would have imposed safety obligations on developers of large frontier AI models. The bill required safety testing, incident reporting, and the ability to shut down AI systems, but was ultimately vetoed by Governor Gavin Newsom in September 2024. It represents a significant early attempt at comprehensive state-level AI regulation.

Key Points

  • SB 1047 targeted AI models trained above both a training-cost threshold (more than $100 million) and a compute threshold (more than 10^26 operations), requiring developers to implement safety protocols and conduct pre-deployment testing.
  • The bill mandated that developers maintain the ability to fully shut down covered AI systems and establish a 'safety and security protocol'.
  • It would have created a new state oversight body for frontier models (the Frontier Model Division in earlier drafts; later amendments placed oversight under the Government Operations Agency) to supervise compliance.
  • Governor Newsom vetoed the bill in September 2024, citing concerns it was too broad and could hamper AI innovation without meaningfully improving safety.
  • The bill sparked major national debate about state vs. federal AI governance, industry self-regulation, and the appropriate scope of AI safety mandates.
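The coverage criteria above amount to a simple conjunctive test. As an illustration only (a toy sketch of the thresholds described in the bill summary, not legal guidance; the function and constant names are invented for this example):

```python
# Toy illustration of SB 1047's "covered model" thresholds as summarized
# above: a model was covered if it BOTH cost more than $100 million to
# train AND used more than 10^26 integer or floating-point operations.

COST_THRESHOLD_USD = 100_000_000   # > $100 million training cost
COMPUTE_THRESHOLD_OPS = 1e26       # > 10^26 training operations

def is_covered_model(training_cost_usd: float, training_ops: float) -> bool:
    """Return True if a model would have met both SB 1047 size thresholds."""
    return (training_cost_usd > COST_THRESHOLD_USD
            and training_ops > COMPUTE_THRESHOLD_OPS)

# A hypothetical $150M, 3e26-op model would have been covered;
# a $50M model would not, regardless of compute used.
print(is_covered_model(150e6, 3e26))  # True
print(is_covered_model(50e6, 3e26))   # False
```

Note that the test is a conjunction: falling under either threshold alone would have exempted a model.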

Cited by 1 page

1 FactBase fact citing this source

Cached Content Preview

HTTP 200 · Fetched Feb 25, 2026 · 236 KB
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act - Wikipedia

California bill
Full name: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
Legislature: California State Legislature
Introduced: February 7, 2024
Assembly voted: August 28, 2024 (48–16)
Senate voted: August 29, 2024 (30–9)
Sponsor: Scott Wiener
Governor: Gavin Newsom
Bill: SB 1047
Website: Bill Text
Status: Not passed (vetoed by Governor on September 29, 2024)

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, was a failed [1] 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist". [2] Specifically, the bill would have applied to models which cost more than $100 million to train and were trained using a quantity of computing power greater than 10^26 integer or floating-point operations. [3] SB 1047 would have applied to all AI companies doing business in California; the location of the company would not matter. [4] The bill would have created protections for whistleblowers [5] and required developers to perform risk assessments of their models prior to release, with guidance from the Government Operations Agency. It would also have established CalCompute, a University of California public cloud computing cluster for startups, researchers and community groups.

Background

The rapid increase in capabilities of AI systems in the 2020s, including the release of ChatGPT in November 2022, caused some researchers and members of the public to become concerned about the existential risks associated with increasingly powerful AI systems. [6][7] Hundreds of tech executives and AI researchers, including two of the so-called "Godfathers of AI", Geoffrey Hinton and Yoshua Bengio, signed a statement in May 2023 calling for the mitigation of the "risk of extinction from AI" to be a global priority alongside "pandemics and nuclear war". [8] However, the plausibility of these risks is still widely debated. [9] Strong regulation of AI has been criticized for purportedly causing regulatory capture by large AI companies like OpenAI, a phenomenon in which regulation advances the interest of larger companies at the expense of smaller competition and the public in general, [7] although OpenAI ended up opposing the bill. [10] Other advocates of AI regulation aim to prevent bias and privacy violations, rather than existential risks. [7] For example, some experts who view existential concerns as overblown and unrealistic view them as a distraction from near-term harms of AI like discriminatory automated decision making. [11] In the face of existential concerns, technology companies have made

... (truncated, 236 KB total)
Resource ID: 9607d725074dfe2e | Stable ID: N2M3MjZkMD