California SB 1047

Policy

Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

California's SB 1047 would have required safety testing, shutdown capabilities, and third-party audits for AI models exceeding 10^26 FLOP or $100M in training cost; it passed the legislature (Assembly 48-16, Senate 32-1) but was vetoed on September 29, 2024, with Governor Newsom citing concerns about size-based rather than risk-based regulation. The bill's legislative success demonstrated the political feasibility of frontier AI regulation, while its veto revealed formidable industry opposition (OpenAI, Google, and Meta opposed outright; Anthropic hedged) and a preference for federal approaches.

Introduced: February 2024
Status: Vetoed
Author: Senator Scott Wiener
Scope: State
Passed Committee: April 2024
Passed Senate: May 21, 2024
Passed Assembly: August 28, 2024
Passed Legislature: August 29, 2024
Vetoed: September 29, 2024
Introduced Version (PDF): https://leginfo.legislature.ca.gov/faces/billPdf.xhtml?bill_id=202320240SB1047&version=20230SB104799INT
Enrolled Version (PDF): https://leginfo.legislature.ca.gov/faces/billPdf.xhtml?bill_id=202320240SB1047&version=20230SB104788ENR
Version Comparison Tool: https://leginfo.legislature.ca.gov/faces/billVersionsCompareClient.xhtml?bill_id=202320240SB1047
Compute Threshold: 10^26 FLOP
Cost Threshold: $100M
Employee Letter Signatories: 113
Companion Bills Signed: 18
Total Amendments: 10
Days to Veto: 235
a16z Lobbying (Apr-Jun 2024): $39,750

Related

Policies: US Executive Order on Safe, Secure, and Trustworthy AI · EU AI Act · Voluntary AI Safety Commitments · California SB 53
Concepts: Compute Governance
Organizations: Anthropic · Center for AI Safety · Future of Life Institute
People: Scott Wiener · Gavin Newsom

Summary

SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was California state legislation that would have required safety testing and liability measures for developers of the most powerful AI models.

The bill passed the California legislature but was vetoed by Governor Gavin Newsom on September 29, 2024.

SB 1047 was the most significant AI safety legislation attempted in the United States to date. Its passage through the legislature demonstrated growing political willingness to regulate frontier AI, while its veto illustrated the political challenges such regulation faces.

Quick Assessment

| Dimension | Assessment | Evidence |
|---|---|---|
| Tractability | Medium | Passed legislature (Assembly 48-16, Senate 32-1) but vetoed; demonstrated political feasibility with refinement |
| Effectiveness | Medium-High (if enacted) | Would have created enforceable requirements for frontier AI safety testing, shutdown capabilities, and incident reporting |
| Political Viability | Low-Medium | Strong industry opposition including safety-focused labs; governor cited innovation concerns; federal approach preferred |
| Enforcement Mechanism | Strong | Attorney General enforcement with civil penalties up to 10% of training costs; whistleblower protections; mandatory auditing |
| Coverage | Narrow | Only frontier models >10^26 FLOP or >$100M training cost; exempted open-source and academic research |
| Status | Vetoed (Sept 29, 2024) | Legislative success followed by executive veto; precedent for future state/federal legislation |
| Source | Link |
|---|---|
| Official Website | safesecureai.org |
| Bill Text (Enrolled) | CA Legislature: SB-1047 Full Text |
| Bill Status | CA Legislature: SB-1047 Status |
| Committee Analyses | CA Legislature: SB-1047 Bill Analysis |
| Vote Records | CA Legislature: SB-1047 Votes |
| Bill Navigation | CA Legislature: SB-1047 Overview |
| Veto Message | Governor Newsom's Veto Message (PDF) |
| Expert Panel Report | The California Report on Frontier AI Policy (June 2025) |
| Bill Tracking | FastDemocracy: SB 1047 Tracker |

What the Bill Proposed

Scope: "Covered Models"

The bill would have applied to AI models meeting any of these criteria (a code sketch combining them follows the list):

Training Compute:

  • Trained using >10^26 FLOP (floating-point operations)
  • Above the estimated training compute of 2024 frontier models such as GPT-4 and Claude 3 Opus
  • Threshold aligned with US Executive Order 14110 on AI safety

Training Cost:

  • Cost >$100 million to train
  • Adjusted annually for inflation
  • At current cloud compute prices, 10^26 FLOP costs approximately $70-100 million (Anthropic estimate)

Fine-tuned Models:

  • Fine-tuning cost >$10 million
  • Based on a covered model

Why these thresholds?

  • Target only frontier models from well-resourced labs
  • Exclude open-source models and academic research
  • Align with international compute governance efforts (US EO, EU AI Act)
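A minimal sketch (not statutory text) of how the covered-model tests combine, plus the back-of-envelope arithmetic behind the ~$70-100M estimate for 10^26 FLOP. The threshold constants are the bill's figures; the per-accelerator throughput and GPU-hour prices are rough, hypothetical assumptions:

```python
FLOP_THRESHOLD = 1e26             # training compute threshold (bill figure)
COST_THRESHOLD = 100_000_000      # $100M training cost, inflation-adjusted (bill figure)
FINE_TUNE_THRESHOLD = 10_000_000  # $10M fine-tuning cost (bill figure)

def is_covered_model(training_flop: float,
                     training_cost_usd: float,
                     fine_tune_cost_usd: float = 0.0,
                     base_model_covered: bool = False) -> bool:
    """Covered if either primary threshold is crossed, or if the model is a
    sufficiently expensive fine-tune of an already-covered base model."""
    if training_flop > FLOP_THRESHOLD or training_cost_usd > COST_THRESHOLD:
        return True
    return base_model_covered and fine_tune_cost_usd > FINE_TUNE_THRESHOLD

# Back-of-envelope check on the $70-100M estimate for 10^26 FLOP, assuming
# ~1e15 FLOP/s per H100-class accelerator at ~$2.50-3.50 per GPU-hour.
gpu_hours = 1e26 / 1e15 / 3600                 # ~2.8e7 GPU-hours
low, high = gpu_hours * 2.50, gpu_hours * 3.50
print(f"~{gpu_hours:.1e} GPU-hours => ${low / 1e6:.0f}M-${high / 1e6:.0f}M")
```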

Provisions Comparison Table

| Requirement Category | Specific Provision | Timing | Penalty for Non-Compliance | Comparison to Alternatives |
|---|---|---|---|---|
| Safety Testing | Red-team testing for CBRN weapons, cyber attacks >$500M damage, autonomous operation | Before deployment or third-party access | Civil penalties up to 10% of training costs | Stricter than voluntary commitments (Anthropic RSP, OpenAI Preparedness); similar scope to US EO requirements |
| Shutdown Capability | Full shutdown of all instances including during training | Before beginning training | AG enforcement + injunctive relief | Unique requirement; not in US EO, EU AI Act, or industry frameworks |
| Cybersecurity | Protection of model weights from theft; secure infrastructure; incident response | Before training begins | Civil liability for security breaches | Similar to US EO reporting but with enforcement teeth |
| Third-Party Auditing | Annual independent audits starting Jan 1, 2026; 5-year record retention | Annually after Jan 1, 2026 | Civil penalties for audit failures | More stringent than US EO (voluntary); weaker than EU AI Act (ongoing) |
| Incident Reporting | Report AI safety incidents to AG within 72 hours | Within 72 hours of incident | Civil penalties + potential criminal referral | Faster timeline than US EO (unspecified); AG enforcement vs. federal agencies |
| Whistleblower Protection | Prohibit retaliation; anonymous reporting process; 7-year complaint retention | Immediate; ongoing | Labor Commissioner enforcement + civil damages | Stronger than industry standards; similar to federal whistleblower laws |
| Compute Cluster Reporting | Clusters with theoretical peak >10^20 ops/second must report to state; customer information required | Ongoing for CA-based clusters | Civil penalties for non-reporting | Similar to US EO compute reporting but state-level jurisdiction |
| Liability Framework | Affirmative defense for compliance; AG can sue for violations causing harm | Post-deployment if harm occurs | Up to 10% of training costs + damages | Softer than strict liability (original draft); stronger than status quo |

Bill Structure and Enforcement Framework

The following diagram illustrates how SB 1047 would have regulated frontier AI development, from initial determination through deployment and enforcement:

flowchart TD
  A[AI Developer Plans Model Training] --> B{Will model meet<br/>covered thresholds?}
  B -->|Yes:<br/>>10^26 FLOP OR<br/>>$100M cost| C[Covered Model Path]
  B -->|No| Z[Exempt - No Requirements]

  C --> D[Pre-Training Requirements]
  D --> D1[Implement Shutdown Capability]
  D --> D2[Document Safety Protocol]
  D --> D3[Establish Cybersecurity]

  D1 & D2 & D3 --> E[Begin Training]

  E --> F[Pre-Deployment Testing]
  F --> F1[CBRN Capability Testing]
  F --> F2[Cyber Attack Testing<br/>>$500M damage threshold]
  F --> F3[Autonomous Operation Testing]
  F --> F4[Self-Improvement Testing]

  F1 & F2 & F3 & F4 --> G{Hazardous<br/>Capabilities<br/>Found?}

  G -->|Yes| H[Enhanced Safety Measures<br/>+ AG Notification]
  G -->|No| I[Standard Deployment]

  H --> J[Deployment with Monitoring]
  I --> J

  J --> K[Ongoing Obligations]
  K --> K1[Annual Third-Party Audit<br/>starting Jan 1, 2026]
  K --> K2[72-Hour Incident Reporting]
  K --> K3[Whistleblower Protections]
  K --> K4[5-Year Record Retention]

  K1 & K2 & K3 & K4 --> L{Compliance?}

  L -->|Yes| M[Affirmative Defense<br/>from Liability]
  L -->|No| N[Enforcement Actions]

  N --> N1[Attorney General Investigation]
  N1 --> N2{Violation<br/>Confirmed?}
  N2 -->|Yes| N3[Civil Penalties<br/>up to 10% training costs]
  N2 -->|No| M

  J --> O{Safety Incident<br/>Occurs?}
  O -->|Yes| P[72-Hour Report to AG]
  P --> N1
  O -->|No| L

  style C fill:#ffe6e6
  style D fill:#fff0e6
  style F fill:#fff8e6
  style G fill:#ffffe6
  style H fill:#ffe6e6
  style N fill:#ffcccc
  style N3 fill:#ff9999
  style M fill:#e6ffe6
  style Z fill:#e6f3ff

Key Enforcement Mechanisms:

The diagram shows three primary enforcement pathways in SB 1047:

  1. Preventive Compliance Path (top): Developers who implement all requirements and pass audits receive affirmative defense from liability
  2. Incident Response Path (bottom): Safety incidents trigger mandatory 72-hour reporting and Attorney General investigation
  3. Penalty Path (right): Non-compliance or violation results in civil penalties up to 10% of training costs plus potential injunctive relief

The bill created overlapping accountability through third-party auditing (annual), whistleblower protections (continuous), and incident reporting (reactive), ensuring multiple mechanisms to detect non-compliance.

Core Requirements

1. Safety Testing Before Training

Pre-Training Requirements:

Developers must:

  • Determine whether model will be a "covered model"
  • Implement safety protocols before beginning training
  • Establish shutdown procedures

Covered Model Determination:

If expected to meet thresholds:

  • Document safety plan
  • Prepare for testing requirements
  • Establish compliance measures

2. Hazardous Capability Testing

Required Testing:

Before deployment or making available to third parties, test for:

Critical Harm Capabilities:

  • Creation of chemical, biological, radiological, or nuclear weapons (CBRN)
  • Mass casualty cyber attacks (>$500M damage or mass casualties)
  • Autonomous operation and self-exfiltration
  • Self-improvement and recursive self-modification

Testing Methods:

  • Red-team testing
  • Adversarial probing
  • Capability evaluations
  • Third-party auditing

Threshold: Model enables non-expert to cause mass casualties or >$500M in damage.
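A hypothetical sketch of how results from these tests might gate deployment; the field and function names are illustrative only (the bill specified the harm categories, not an implementation):

```python
from dataclasses import dataclass

@dataclass
class HazardEvalResults:
    cbrn_uplift: bool              # enables non-expert creation of CBRN weapons
    cyber_damage_500m: bool        # enables cyber attacks >$500M damage or mass casualties
    autonomous_exfiltration: bool  # autonomous operation / self-exfiltration
    recursive_self_improvement: bool

def requires_enhanced_measures(results: HazardEvalResults) -> bool:
    """Any single critical-harm capability triggers enhanced safety measures
    and Attorney General notification before deployment."""
    return any([results.cbrn_uplift,
                results.cyber_damage_500m,
                results.autonomous_exfiltration,
                results.recursive_self_improvement])
```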

3. Safety and Security Protocol

Required Measures:

Developers must implement:

Cybersecurity:

  • Protection of model weights from theft
  • Secure infrastructure
  • Incident response plans

Shutdown Capability:

  • Full model shutdown ability
  • Separate from safety fine-tuning
  • Effective on all deployed instances

Ongoing Monitoring:

  • Detection of hazardous use
  • Capability creep tracking
  • Post-deployment evaluation

Documentation:

  • Written safety protocol
  • Regular updates
  • Public summary (redacted for security)

4. Whistleblower Protections

Employee Rights:

Protected disclosures about:

  • Safety violations
  • Unreasonable risk to public
  • Non-compliance with the act

Prohibitions:

  • Cannot retaliate against whistleblowers
  • Cannot require non-disclosure preventing safety reports
  • Civil penalties for violations

5. Frontier Model Division (removed by the August 2024 amendments)

New State Agency:

Created within California Government Operations Agency:

  • Oversee compliance
  • Receive safety protocols
  • Investigate violations
  • Issue guidance

Powers:

  • Subpoena authority
  • Civil penalty assessment
  • Emergency orders

6. Liability Framework

Affirmative Defense:

Developers protected from liability if:

  • Complied with all safety requirements
  • Conducted reasonable testing
  • Implemented safety protocols
  • Acted in good faith

Strict Liability Removed:

The bill did NOT create automatic liability for harms; the Attorney General had to prove negligence or non-compliance.

Attorney General Enforcement:

California AG can sue for:

  • Violations of safety requirements
  • Civil penalties up to 10% of training costs
  • Injunctive relief
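The penalty ceiling scales with training cost. A minimal arithmetic sketch, where the 10% rate comes from the bill and the example run cost is hypothetical:

```python
def penalty_cap_usd(training_cost_usd: float, rate: float = 0.10) -> float:
    """Upper bound on civil penalties: 10% of the model's training cost."""
    return rate * training_cost_usd

# A $100M training run caps civil penalties at $10M (damages are separate).
print(f"${penalty_cap_usd(100_000_000):,.0f}")  # $10,000,000
```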

7. Compute Cluster Reporting

Reporting Requirement:

Owners of computing clusters with:

  • Theoretical maximum of 10^20 integer or floating-point operations per second

  • Located in California

Must report to:

  • Frontier Model Division
  • Information about cluster
  • Customers using cluster

Purpose: Track who has capability to train covered models.
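A rough sketch of the reporting trigger, assuming the enrolled bill's 10^20 ops/sec cluster definition; the per-accelerator throughput is a hypothetical H100-class estimate:

```python
CLUSTER_OPS_THRESHOLD = 1e20  # theoretical max integer/floating-point ops per second

def cluster_must_report(num_accelerators: int,
                        peak_ops_per_sec_each: float = 1e15) -> bool:
    """True if a California cluster's theoretical peak crosses the threshold."""
    return num_accelerators * peak_ops_per_sec_each >= CLUSTER_OPS_THRESHOLD

print(cluster_must_report(100_000))  # True  (~1e20 ops/sec total)
print(cluster_must_report(10_000))   # False (~1e19 ops/sec total)
```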

Exemptions and Safe Harbors

Explicitly Exempted:

  • Open-source models (unless developer makes $50M+/year from derivatives)
  • Academic research
  • Models below thresholds
  • Government use

Safe Harbor:

  • Compliance with safety requirements provides affirmative defense
  • Good faith efforts protected

Limitations and Criticisms

Technical Limitations

| Limitation | Description | Counterargument |
|---|---|---|
| Size-based thresholds become obsolete | Algorithmic efficiency improvements (e.g., mixture-of-experts, distillation) mean smaller models can match larger ones; 10^26 FLOP threshold may be irrelevant within 2-3 years | Thresholds can be updated; captures current frontier models that pose near-term risk |
| Capability testing is imprecise | Red-team testing for CBRN/cyber capabilities is nascent; false negatives possible; evaluation science not mature | Perfect shouldn't be enemy of good; creates incentive to develop better evals |
| Shutdown capability may be technically infeasible | Distributed deployments, fine-tuned derivatives, and open-weight releases make "full shutdown" difficult to guarantee | Requirement applies pre-deployment; creates accountability for distribution decisions |
| Compliance verification is challenging | Difficult for state agency to verify claims about training compute, safety testing, or model capabilities | Third-party auditing and whistleblower protections create multiple accountability mechanisms |

Policy Limitations

| Limitation | Description | Counterargument |
|---|---|---|
| State-level jurisdiction is limited | AI development is interstate and international; California cannot regulate models trained elsewhere | California hosts majority of US AI development; creates de facto national standard (like CCPA did for privacy) |
| Enforcement capacity uncertain | Frontier Model Division (later removed) would have needed significant expertise and resources | AG enforcement model proven effective in other contexts; whistleblower protections supplement |
| Exemptions may be too broad | Open-source exemption (unless $50M+ revenue) could allow dangerous models to proliferate | Targets commercial frontier development where most risk concentrates; academic/hobbyist use low-risk |
| Narrow scope misses risks | Only covers frontier models; smaller models in high-risk applications (medical, legal, critical infrastructure) not addressed | Designed to complement other regulations (AB 2885, SB 896) addressing deployment contexts |

Political Limitations

| Limitation | Description |
|---|---|
| Industry opposition unified | Even safety-focused labs (Anthropic) opposed, undermining "industry wants reasonable regulation" narrative |
| Federal preemption risk | Future federal AI legislation could override state requirements, making compliance investments wasted |
| Innovation narrative dominates | Economic arguments about $200B+ California AI industry and 300,000+ jobs proved more politically salient than safety concerns |
| Coalition too narrow | Limited engagement from labor, civil rights, and consumer groups who might have provided broader political support |

Criticisms from AI Safety Community

Some AI safety researchers opposed SB 1047 despite sharing its goals:

  • Premature codification: Locking in specific requirements (10^26 FLOP, specific capability tests) before evaluation science matures could create compliance theater without actual safety
  • Wrong target: Focus on model capabilities rather than deployment context may miss where risks actually materialize
  • Opportunity cost: Political capital spent on failed state bill could have supported federal efforts or voluntary frameworks with broader buy-in
  • Open-source concerns: Despite exemptions, uncertainty about derivative liability could chill beneficial open research

Path Through Legislature

Initial Introduction (February 2024)

Original Sponsors:

  • Senator Scott Wiener (D-San Francisco), representing District 11 (San Francisco tech corridor)
  • Co-sponsored by AI safety organizations including Center for AI Safety
  • Support from AI safety advocates and researchers
  • Immediately opposed by major AI companies and some researchers, sparking fierce national debate
  • Official bill text introduced February 7, 2024
  • CSET Georgetown's early analysis noted the bill quickly became "a lightning rod for safety advocates and developers alike"

Amendment Process

Major Changes:

  • Narrowed scope to truly frontier models (>10^26 FLOP or >$100M)
  • Added safe harbors and affirmative defenses for compliant developers
  • Reduced liability provisions (removed strict liability; kept negligence standard)
  • Clarified open-source exemptions (unless developer earns >$50M/year from derivatives)
  • Specified hazardous capabilities more precisely (CBRN, >$500M cyber damage)
  • Removed Frontier Model Division and criminal penalties in August 2024 amendments

Purpose of Amendments:

  • Address industry concerns about overbreadth and compliance costs
  • Balance innovation incentives with safety requirements
  • Build bipartisan coalition for passage
  • Respond to >50 stakeholder comments during committee process

Legislative Passage

August 29, 2024: Passed California Legislature (official vote records; CalMatters coverage)

  • Assembly: 48-16 (75% approval)
  • Senate: 32-1 (97% approval), concurrence 30-9
  • Bipartisan support across party lines
  • Most significant AI legislation to pass any US state legislature
  • Represented months of amendments responding to >50 industry comments
  • Final version removed criminal penalties and Frontier Model Division creation

Veto (September 29, 2024)

The veto was extensively covered by major outlets including the New York Times, Washington Post, The Verge, SF Standard, and TechCrunch, with policy analysis from CSET Georgetown and Carnegie Endowment. Zvi Mowshowitz's real-time analysis of the veto was widely shared in the AI safety community.

Governor Newsom's Rationale:

From Newsom's official veto message:

"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it."

Additional concerns: "Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047."

Specific Concerns:

  • Focus on model size rather than deployment context
  • Could stifle innovation in California's tech sector
  • Regulatory approach not nuanced enough (described as not "informed by an empirical trajectory analysis")
  • Preferred federal regulation given interstate nature of AI

Accompanying Actions:

Newsom simultaneously:

  • Signed 18 other AI bills on narrower topics (deepfakes, discrimination, transparency)
  • Called for federal AI legislation to address interstate nature of technology
  • Committed to working with legislature on alternative approaches
  • Convened expert panel including Fei-Fei Li (Stanford), Tino Cuéllar (Carnegie Endowment), and Jennifer Tour Chayes (UC Berkeley) to develop "empirical, science-based trajectory analysis"

Support and Opposition

Supporters

AI Safety Organizations:

  • Center for AI Safety
  • Future of Life Institute
  • AI safety researchers

Arguments:

  • Frontier models pose catastrophic risks
  • Industry self-regulation insufficient
  • California can lead on AI safety
  • Requirements are reasonable and achievable

Notable Individual Supporters:

  • Yoshua Bengio (Turing Award winner, 2018)
  • Geoffrey Hinton (Turing Award winner 2018, "Godfather of AI")
  • Stuart Russell (UC Berkeley professor, author of leading AI textbook)
  • Max Tegmark (MIT professor, founder of Future of Life Institute)
  • Elon Musk (xAI CEO, publicly endorsed the bill)
  • 113+ current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI (September 9, 2024 letter to Governor Newsom)

Opponents

Major AI Companies:

  • OpenAI (opposed; argued federal regulation was the appropriate venue) — Senator Wiener publicly responded to OpenAI's opposition, calling it inconsistent (covered by TechCrunch)
  • Anthropic (opposed early drafts; said the amended bill's "benefits likely outweigh its costs" while stopping short of endorsement; later published The Case for Targeted Regulation outlining their preferred regulatory framework)
  • Google/DeepMind (opposed)
  • Meta (strongly opposed)
  • Combined market value of opposing companies: >$3 trillion

Arguments:

  • Stifles innovation in California's $200+ billion AI industry
  • Drives development out of California (threatening 300,000+ tech jobs)
  • Premature to regulate models that don't yet exist
  • Better to focus on use cases than model capabilities (size-based vs. risk-based regulation)
  • Federal regulation more appropriate for interstate technology

Venture Capital:

  • Y Combinator
  • Andreessen Horowitz
  • Others concerned about startup ecosystem impact

Some Researchers:

  • Yann LeCun (Meta, Turing Award winner)
  • Andrew Ng (Stanford, Google Brain co-founder)
  • Fei-Fei Li (Stanford)

Concerns:

  • Open-source implications despite exemptions
  • Compliance costs for startups
  • Regulatory overreach
  • Vague standards

Labor and Progressive Groups:

  • Some supported
  • Some concerned it didn't address labor impacts enough

Why It Was Vetoed

Stated Reasons (Governor Newsom)

Size-Based vs. Risk-Based:

  • Bill focuses on model size (compute/cost) not deployment risks
  • Small models in high-risk contexts not covered
  • Large models in benign contexts over-regulated

Innovation Concerns:

  • California is hub of AI development
  • Regulation could drive companies elsewhere
  • Startups face compliance burdens

Federal Action Preferable:

  • AI transcends state borders
  • National framework more appropriate
  • International coordination needed

Political Analysis

Industry Pressure:

  • Major AI companies lobbied heavily against
  • Economic arguments about California's AI ecosystem
  • Threat of relocation

Presidential Politics:

  • Biden administration developing AI policy
  • Harris (VP, former CA Senator) in presidential race
  • National Democratic messaging on tech

Tactical Considerations:

  • Newsom signed 18 other AI bills simultaneously
  • Positioned as pro-innovation, pro-safety balance
  • Left door open for future iteration

Lack of Coalition:

  • Limited engagement from labor, civil rights, and consumer groups
  • Supporters lacked a grassroots counterweight to sustained industry lobbying

Unstated Factors (Analysis)

Economic:

  • California tech industry contributes $200+ billion annually to state GDP
  • AI-focused companies employ 300,000+ workers in California
  • Competing jurisdictions (Texas, Florida, international) actively recruiting AI companies
  • Tech industry contributes 15-20% of California's general fund revenue
  • Estimated compliance costs for SB 1047: $10-50M per covered model annually (industry estimates)

Policy:

  • Precedent-setting implications
  • Uncertainty about effectiveness
  • Implementation challenges

Political:

  • Presidential election dynamics
  • Tech industry relationships
  • Future political ambitions

Implications for AI Safety Regulation

What SB 1047 Demonstrated

Political Will Exists:

  • Bipartisan legislative passage showed AI safety resonates
  • Not just fringe concern but mainstream political issue
  • Legislators willing to regulate despite industry opposition

Industry Opposition is Formidable:

  • Even safety-focused companies (Anthropic) opposed
  • Economic arguments effective
  • Innovation framing powerful

Federal vs. State Tension:

  • AI is inherently interstate and international
  • State-level regulation faces jurisdictional limits
  • But federal action is slow

Details Matter:

  • Size-based vs. risk-based framing was central
  • Specific thresholds and requirements heavily debated
  • Implementation details crucial to political viability

Lessons for Future Efforts

What Worked

Focused Scope:

  • Targeting only frontier models built support
  • Exemptions for open-source and research
  • Concrete thresholds (compute, cost)

Safety Framing:

  • Catastrophic risk resonated
  • Whistleblower protections popular
  • Bipartisan appeal

Expert Endorsement:

  • Turing Award winners lending credibility
  • Technical community engagement

What Didn't Work

Industry Consensus:

  • Even safety-concerned labs opposed
  • Economic arguments effective
  • Innovation framing won

Implementation Clarity:

  • Vague enforcement mechanisms
  • Uncertainty about compliance costs
  • Questions about Frontier Model Division capacity

Coalition Building:

  • Labor not fully engaged
  • Grassroots support limited
  • Competing priorities on left

Future Regulatory Approaches

State Level

Narrower Bills:

  • Focus on specific harms (deepfakes, discrimination)
  • Deployment context rather than model capabilities
  • Procurement standards

Coordination:

  • Multi-state coordination
  • Uniform standards
  • Regional compacts

California Iteration:

  • Newsom committed to continued dialogue
  • Future versions possible
  • Refined approach incorporating feedback

Federal Level

Legislation:

  • Comprehensive AI safety bill
  • Build on Executive Order
  • Bipartisan framework

Challenges:

  • Congressional gridlock
  • Lobbying pressure
  • Competing priorities

International

Coordination Imperative:

  • AI development global
  • Race to the bottom risk
  • Need for international standards

Precedents:

  • EU AI Act as model
  • UK approach
  • Multilateral frameworks

Impact on AI Safety Movement

EA and AI Safety Organization Involvement

SB 1047 represented the most direct engagement of the EA-aligned AI safety community with state-level legislation. Zvi Mowshowitz's comprehensive guide to SB 1047 became perhaps the most widely read analysis in the AI safety community, while Scott Alexander's post-veto account of the campaign provided an inside perspective on the advocacy effort.

Direct organizational support:

  • Center for AI Safety (CAIS), whose director Dan Hendrycks was a key organizer of support, leveraging CAIS's network of researchers and the 2023 Statement on AI Risk signatories
  • Future of Life Institute — provided analysis and advocacy through their policy team
  • The bill's core framing (catastrophic risk from frontier models, compute thresholds) closely aligned with EA-adjacent thinking on AI governance

The 113-employee letter: One of the most striking developments was the September 2024 letter to Governor Newsom signed by 113+ current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI. This was significant because:

  • It showed safety-minded employees at frontier labs were willing to publicly break with their employers' positions
  • The letter specifically cited catastrophic risk concerns aligned with EA safety arguments
  • It demonstrated grassroots support within the AI industry that went beyond the organized safety community

EA Forum and LessWrong debate: SB 1047 generated extensive discussion within EA online communities, with coverage from Vox's Future Perfect and other EA-adjacent outlets. The 80,000 Hours podcast episode with Nathan Calvin provided a deep dive into the bill's development and the safety community's role, while the CAIS AI Safety Newsletter covered the bill's passage and the veto aftermath. Zvi Mowshowitz also published an early analysis on LessWrong that shaped initial community opinion. The debate centered on:

  • Whether the bill's specific mechanisms (compute thresholds, capability testing) were well-designed
  • Whether state-level legislation was the right venue for frontier AI regulation
  • The strategic implications of Anthropic's mixed position — a safety-focused lab declining to fully support a safety bill
  • Whether the political capital spent on SB 1047 could have been better deployed elsewhere

Positive Effects

Mainstream Attention:

  • SB 1047 brought frontier AI risk into public discourse beyond the EA/AI safety community
  • Generated extensive media coverage from NYT, Washington Post, Wired, Politico, and others
  • The campaign was documented in a 31-minute documentary film and analyzed in multi-perspective podcast debates
  • Created a concrete policy proposal that translated abstract x-risk concerns into legislative language

Overton Window Shift:

  • Made AI regulation a mainstream political issue rather than a fringe concern
  • Future regulatory proposals are now measured against SB 1047 as a reference point
  • The bill's passage through the legislature (Assembly 48-16, Senate 32-1) proved that AI safety legislation can achieve overwhelming bipartisan support

Community Building:

  • Built coalitions between AI safety researchers, policy advocates, and sympathetic legislators
  • Developed political skills and institutional knowledge within the AI safety community
  • The SB 1047 campaign infrastructure directly contributed to SB 53's eventual passage

Negative Effects

Backlash:

  • Some researchers became more skeptical of legislative approaches to AI safety
  • "Regulatory capture" accusations from both industry (regulation benefits incumbents) and open-source advocates (harms small developers)
  • Polarization between "regulate now" and "wait for better science" camps within AI safety

Movement Division:

  • The most consequential split was Anthropic's position — a company founded explicitly for AI safety opposing (then hedging on) a safety bill
  • Tensions between deontological ("we must act now") and consequentialist ("will this actually reduce risk?") approaches to AI governance
  • Open-source AI community alienated despite the bill's exemptions

Political Capital:

  • The veto was framed by opponents as evidence that AI safety regulation is politically unviable
  • Industry lobby appeared emboldened, with subsequent state AI bills facing increased opposition
  • However, the SB 53 outcome complicates this narrative — the veto was a setback, not a defeat

Strategic Lessons

For the AI safety movement:

  • Industry alignment has limits: Even safety-focused labs will oppose regulation that threatens their competitive position or operational flexibility
  • State-level regulation works as an Overton window tool: SB 1047's passage made federal proposals seem more moderate by comparison
  • Iterative beats comprehensive: The SB 1047 → SB 53 arc suggests incremental approaches achieve more durable outcomes than ambitious first attempts
  • Employee voice matters: The 113-employee letter demonstrated that individual AI researchers can influence policy debates independent of their employers

Strategic Debates

Should SB 1047 Have Been Pursued?

Arguments For:

  • Only way to test political viability
  • Built coalition and momentum
  • Shifted discourse even in defeat

Arguments Against:

  • Premature; should have built more support first
  • Better to focus on federal action
  • Antagonized potential allies

What Should Come Next?

Double Down:

  • Refine and reintroduce
  • Build broader coalition
  • Address veto concerns

Pivot to Federal:

  • Focus energy on Congress
  • Support Executive Order implementation
  • International coordination

Focus on Narrower Wins:

  • Procurement standards
  • Use-case specific regulation
  • Voluntary frameworks

Build Power:

  • Grassroots organizing
  • Labor coalition
  • Public education

Lobbying and Industry Tactics

The opposition to SB 1047 involved sophisticated political tactics from the technology industry:

Lobbyist connections to Newsom:

  • Andreessen Horowitz hired Axiom Advisors, led by Jason Kinney — a long-time Newsom advisor and dining companion (the "French Laundry connection"). a16z paid $39,750 for April-June 2024 lobbying work. a16z also backed the StopSB1047.com opposition campaign site.
  • Y Combinator engaged in formal California lobbying for the first time, hiring Platinum Advisors led by Darius Anderson, another firm with close Newsom ties.

Push polling: The California Chamber of Commerce commissioned a push poll (August 9-12, 2024) using loaded language about "small startup companies" paying "tens of millions in fines from state bureaucrats." Results: 28% support, 46% oppose, 26% neutral. The poll was conducted by Adam Rosenblatt of Bold Decision. Despite the framing, the bill's actual exemptions for small companies and open-source developers were not mentioned in the poll questions.

Campaign contribution context: Google was described as the biggest contributor to certain California representatives, with nearly $1 million in total campaign contributions, though these were not SB 1047-specific.

State-Level Copycat Legislation

SB 1047's influence extended beyond California even after its veto, inspiring frontier AI legislation in other states. As MIT Technology Review reported, New York legislators explicitly sought to "pick up the pieces" of the vetoed California bill:

| State | Bill | Status | Relationship to SB 1047 |
|---|---|---|---|
| New York | RAISE Act | Signed Dec 19, 2025 | Frontier model transparency and risk management; stricter penalties ($10M first offense, $30M repeat) |
| Texas | TRAIGA | Signed 2025 | Final version stripped to public sector only; original draft inspired by SB 1047 |
| Rhode Island | Frontier model bill | Introduced 2025 | Centered on catastrophic risk prevention |
| Michigan | Frontier model bill | Introduced 2025 | Similar frontier model regulation |
| Illinois | Frontier model bill | Introduced 2025 | Similar frontier model regulation |

The New York RAISE Act became the first state law directly comparable to SB 1047's ambitions, though California's SB 53 took a lighter-touch transparency approach. This divergence illustrates the ongoing policy debate: whether frontier AI regulation should focus on mandatory safety requirements (SB 1047 / NY RAISE) or transparency and reporting (SB 53).

Successor Legislation: SB 53 and the Expert Panel

Newsom's Expert Panel

Following the SB 1047 veto, Governor Newsom convened the Joint California Policy Working Group on AI Frontier Models, comprising leading AI researchers and policy experts:

  • Fei-Fei Li (Stanford) — co-chair
  • Tino Cuéllar (Carnegie Endowment for International Peace) — co-chair
  • Jennifer Tour Chayes (UC Berkeley, Dean of Computing)

The panel released its recommendations in June 2025, which:

  • Rejected a compute-threshold-only approach (the core SB 1047 mechanism)
  • Endorsed a transparency-first framework with mandatory incident reporting
  • Recommended creation of CalCompute, a state computing resource for AI safety research
  • Advocated for risk-based rather than size-based regulatory triggers

SB 53: The Successor Bill

SB 53, also authored by Senator Scott Wiener, was signed into law by Governor Newsom on September 29, 2025 — exactly one year after the SB 1047 veto.

| Dimension | SB 1047 (Vetoed) | SB 53 (Signed) |
|---|---|---|
| Regulatory approach | Mandatory safety testing + liability | Transparency + incident reporting |
| Trigger mechanism | Compute threshold (10^26 FLOP) or cost ($100M) | Developer self-determination of frontier status |
| Penalties | Civil penalties up to 10% of training costs | Penalties for non-reporting; no pre-deployment requirements |
| Pre-deployment | Safety testing, shutdown capability required | No pre-deployment mandates |
| Reporting | 72-hour incident reporting to AG | Safety incident reporting + annual transparency reports |
| Auditing | Mandatory third-party audits | No mandatory audits |
| Whistleblower | Strong protections | Retained from SB 1047 |
| Kill switch | Required | Not required |

The 18 Companion Bills

When vetoing SB 1047, Newsom simultaneously signed 18 other AI bills, signaling support for narrower regulation:

Privacy, Data, and Transparency:

| Bill | Topic | Key Provision |
|---|---|---|
| AB 1008 | CCPA + AI | Extends CCPA coverage to personal information processed by AI |
| AB 2013 | AI training data transparency | Requires disclosure of datasets used to train AI (effective Jan 1, 2026) |
| AB 2885 | AI definition | Establishes uniform statutory definition for "artificial intelligence" |
| SB 942 | AI transparency | California AI Transparency Act; watermarking and AI content detection tools |

Protection Against Harmful AI Content:

| Bill | Topic | Key Provision |
|---|---|---|
| AB 1831 | Child safety | Expands child pornography statutes to cover AI-generated material |
| AB 1836 | Digital replicas | Protects deceased individuals from unauthorized AI replicas |
| AB 2602 | Performer protections | Requires consent for AI-generated replicas of performers |
| SB 926 | Deepfake crimes | Criminal penalties for creating/distributing sexually explicit deepfakes |
| SB 981 | Platform obligations | Social media must report and remove deepfake pornography |
| SB 1381 | Child safety | Manufacturing offenses involving minors (companion to AB 1831) |

Elections:

| Bill | Topic | Key Provision |
|---|---|---|
| AB 2355 | Campaign disclosure | Requires disclosure when campaign ads use AI technology |
| AB 2655 | Election deception | Platforms must block/label deceptive AI-generated election content |
| AB 2839 | Campaign material | Prohibits malicious distribution of deceptive AI-generated material |

Public Sector and Healthcare:

| Bill | Topic | Key Provision |
|---|---|---|
| AB 2876 | Education | Directs consideration of AI literacy in K-12 curriculum standards |
| SB 1288 | Schools | Working groups on AI in schools; guidance by Jan 1, 2026 |
| AB 3030 | Healthcare transparency | Healthcare providers must disclose GenAI use in patient communications |
| SB 1120 | Healthcare oversight | Licensed physician supervision required for AI in healthcare decisions |
| SB 896 | Government risk | Generative AI Accountability Act; state procurement and critical infrastructure risk analysis |

SB 1047 to SB 53 Arc as Policy Case Study

The SB 1047 → expert panel → SB 53 trajectory represents an important case study in AI governance:

  1. Ambitious bill demonstrates political feasibility — SB 1047 passed the legislature with overwhelming majorities, proving AI safety regulation is not politically toxic
  2. Veto channels energy into expert process — rather than killing the effort, the veto redirected it through a technical review panel
  3. Scaled-back version achieves enactment — SB 53 secured the transparency and reporting infrastructure that future enforcement-focused legislation can build on
  4. Iterative approach validated — the one-year arc from veto to signature suggests that incremental state-level regulation may be more durable than comprehensive approaches

Technical and Policy Debates

Size-Based vs. Risk-Based Regulation

Size-Based (SB 1047 Approach):

Pros:

  • Objective, measurable thresholds
  • Targets most capable models
  • Easier to enforce
  • Aligns with international compute governance

Cons:

  • Doesn't capture deployment context
  • Could miss dangerous applications of smaller models
  • Algorithmic efficiency makes thresholds obsolete

Risk-Based (Newsom's Preference):

Pros:

  • Focuses on actual harm potential
  • Context-appropriate
  • Adapts to changing technology

Cons:

  • Harder to define and measure
  • Enforcement challenges
  • Potentially broader scope (privacy, fairness, etc.)
  • Risk assessment subjective

Synthesis Possible:

  • Combination of both approaches
  • Size thresholds trigger risk assessments
  • Deployment context determines requirements

Liability Questions

SB 1047 Approach:

  • Affirmative defense for compliance
  • Attorney General enforcement
  • Civil penalties

Debate:

  • Too much liability deters innovation?
  • Too little fails to ensure safety?
  • Who should bear costs of AI harms?

Alternative Approaches:

  • Strict liability with caps
  • Insurance requirements
  • Tiered liability based on precautions
  • No-fault compensation schemes

Open Source Implications

SB 1047 Exemption:

  • Open-source models exempt unless developer profits >$50M from derivatives

Concerns Raised:

  • Could still chill open-source development
  • Uncertainty about liability
  • Derivative work tracking difficult

Counter-Arguments:

  • Exemption was broad
  • Open-source not inherently safe
  • Need some oversight of powerful models

Ongoing Debate:

  • How to encourage open research while managing risks
  • Different models for different risk levels
  • Role of open-source in AI safety ecosystem

Comparison to Other Policies

vs. US Executive Order

The compute thresholds in SB 1047 were deliberately aligned with Biden's Executive Order 14110.

Similarities:

  • Compute thresholds (10^26 FLOP for training)
  • Safety testing requirements for CBRN risks
  • Focus on frontier models only
  • Developer reporting obligations

Differences:

  • SB 1047 had enforcement teeth (civil penalties up to 10% of training costs, AG lawsuits)
  • EO has broader scope (government use, competition policy, immigration for AI talent)
  • SB 1047 state-level mandatory law; EO federal executive action (can be rescinded)
  • SB 1047 required shutdown capability (unique provision)
  • SB 1047 included third-party auditing requirement (EO relies on voluntary compliance)

Relationship:

  • SB 1047 would have complemented EO with state-level enforcement
  • State enforcement of federal principles with local adaptation
  • Potential model for other states considering AI legislation
  • Analysis from legal firms noted SB 1047 went further than EO on liability

vs. EU AI Act

EU Act:

  • Risk categories for deployed systems
  • Broader scope (not just frontier models)
  • Binding regulation with large fines

SB 1047:

  • Narrower focus on frontier models
  • More specific technical requirements (shutdown, testing)
  • State-level vs. EU-wide

Lessons:

  • EU's comprehensiveness politically difficult in US
  • SB 1047's focused approach still failed
  • Suggests US regulation will be patchwork

vs. Voluntary Commitments

Industry Commitments:

  • No enforcement
  • Self-defined standards
  • Flexible and adaptive

SB 1047:

  • Mandatory requirements
  • State enforcement
  • Specific standards

Debate:

  • Is voluntary compliance sufficient?
  • Does regulation stifle beneficial innovation?
  • Can industry self-regulate emerging risks?

Career and Research Implications

Policy Careers

Lessons Learned:

  • Understanding legislative process crucial
  • Coalition building essential
  • Technical expertise must translate to policy

Opportunities:

  • State-level AI policy growing
  • Need for policy entrepreneurs
  • Legislative staff positions

Research Questions

Regulatory Design:

  • How to balance innovation and safety?
  • What thresholds are appropriate?
  • How to make regulation adaptive?

Political Economy:

  • Industry influence on regulation
  • Public opinion on AI risk
  • Coalition formation strategies

Technical:

  • Measuring model capabilities
  • Shutdown mechanisms
  • Audit methodologies

Movement Building

Strategic Questions:

  • When to push for regulation vs. build support?
  • How to engage industry productively?
  • Building public constituency

Skills Needed:

  • Political strategy
  • Coalition management
  • Communications
  • Policy design

Sources

Primary Documents

  • California Legislature: SB-1047 Bill Text (Enrolled Version) - Full enrolled bill text
  • California Legislature: SB-1047 Overview - Official bill navigation and legislative history
  • California Legislature: SB-1047 Bill Status - Legislative status and timeline
  • California Legislature: SB-1047 Bill Analysis - Committee analyses from both chambers
  • California Legislature: SB-1047 Votes - Official vote records (Assembly 48-16, Senate 32-1)
  • Governor Newsom's Veto Message (PDF) - Official veto statement, September 29, 2024
  • The California Report on Frontier AI Policy (June 2025) - Expert panel final report
  • California Assembly Privacy and Consumer Protection Committee Analysis - Detailed bill analysis, June 18, 2024
  • Pelosi: Statement in Opposition to California Senate Bill 1047 - Official congressional opposition statement

News Coverage and Analysis

  • Morgan Lewis: California's SB 1047 Would Impose New Safety Requirements - Legal analysis of requirements
  • Gibson Dunn: Regulating the Future: Eight Key Takeaways from California's SB 1047 - Post-veto analysis
  • Orrick: California Looks to Regulate Cutting-Edge Frontier AI Models: 5 Things to Know - Technical requirements breakdown
  • DLA Piper: California's SB-1047: Understanding the Safe and Secure Innovation for Frontier AI Act - Early analysis
  • Fenwick: Technological Challenges for Regulatory Thresholds of AI Compute - Analysis of compute thresholds
  • Stanford HAI: Analysis of the Safe and Secure Innovation for Frontier AI Models Act - Academic policy analysis
  • Lawfare: SB 1047 and the Future of Frontier AI Safety Regulation - Legal and policy analysis
  • CSET Georgetown: Governor Newsom Vetoes Sweeping AI Regulation SB 1047 - Veto analysis from Georgetown's Center for Security and Emerging Technology
  • Carnegie Endowment: California SB 1047 AI Safety Bill Veto Lessons - Post-veto lessons and policy analysis
  • Lawfare: California's Proposed SB 1047 Would Be a Major Step Forward for AI Safety - Balanced pros/cons analysis of the bill's provisions
  • CSET Georgetown: California AI Bill Becomes a Lightning Rod - Early analysis from Georgetown's Center for Security and Emerging Technology

Senator Wiener's Office

  • Senator Wiener: Groundbreaking AI Bill Advances to Assembly Floor - Official statement on amendments
  • Senator Wiener: Bipartisan Vote, Senate Passes Landmark AI Safety Bill - Official statement on passage
  • Lawfare Daily Podcast: State Senator Scott Wiener on SB 1047 - In-depth interview
  • Senator Wiener Responds to OpenAI Opposition to SB 1047 - Official response to OpenAI's public opposition

Industry Perspectives

  • Andreessen Horowitz: What You Need to Know About SB 1047 - Venture capital perspective
  • Safe and Secure AI: Letter to YC & a16z - Response from supporters
  • Brookings: Misrepresentations of California's AI safety bill - Defense of bill against criticism
  • Anthropic: The Case for Targeted Regulation - Post-veto regulatory framework advocating for targeted rather than broad AI regulation
  • StopSB1047.com - a16z-backed opposition campaign site against the bill

Reference

  • Wikipedia: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act - Overview and timeline
  • FastDemocracy: SB 1047 Bill Tracking - Legislative tracking and timeline
  • SafeSecureAI.org - Senator Wiener's campaign site for SB 1047

References

Gibson Dunn law firm analyzes California's SB 1047, a significant AI safety bill that would have imposed safety requirements on developers of large frontier AI models, which was ultimately vetoed by Governor Newsom in 2024. The analysis covers the bill's key provisions, compliance obligations, and implications for AI developers. It provides a legal practitioner's perspective on the regulatory landscape for frontier AI.

DLA Piper provides a legal analysis of California's SB-1047, which mandates safety determinations before training frontier AI models, establishes compliance requirements, and creates a new Frontier Model Division within the state's Department of Technology. The article situates the bill within the broader AI regulatory landscape including the EU AI Act, Biden's Executive Order, and state-level legislative activity.

The California Senate passed SB 1047, Senator Scott Wiener's landmark AI safety bill, with bipartisan support. The bill establishes safety requirements for developers of large frontier AI models, including mandatory safety testing, incident reporting, and kill-switch capabilities. It represents one of the most significant state-level legislative efforts to regulate advanced AI systems in the United States.

A legal analysis from Fenwick examining evolving regulatory frameworks that use computational thresholds (e.g., FLOP counts) to define which AI systems trigger oversight requirements. The piece reviews key policy developments at state and federal levels relevant to frontier AI governance.

California's SB 1047, a landmark AI safety bill requiring safety standards for large-scale AI developers, advanced to the Assembly floor in August 2024 with significant amendments. Key changes include removing criminal penalties, eliminating a proposed Frontier Model Division regulatory body, softening legal standards from 'reasonable assurance' to 'reasonable care,' and establishing a $10 million fine-tuning cost threshold to exempt startups. The bill represents a major state-level attempt to regulate frontier AI development amid federal inaction.

California State Senator Scott Wiener joins Lawfare to discuss SB 1047, his 'Safe and Secure Innovation for Frontier Artificial Intelligence Models' bill. The conversation covers key debates the bill has sparked around AI safety vs. security, state vs. federal regulation, and model-level vs. end-user governance. Wiener addresses opposition from the tech industry and signals openness to amendments.

SB 1047 Veto Message (gov.ca.gov · Government)

Governor Newsom vetoed California's SB 1047, which would have imposed safety requirements on large AI model developers based on computational thresholds. He argued the bill's size-based regulatory approach is flawed because smaller specialized models can pose equal risks, and that effective AI regulation must be risk-based, contextually aware of deployment environments, and empirically grounded rather than relying on model scale as a proxy for danger.

California Governor Gavin Newsom vetoed SB 1047, a landmark AI safety bill that would have imposed safety requirements on developers of large frontier AI models. The veto was a significant setback for state-level AI regulation, with Newsom arguing the bill was too broad and could stifle innovation while not targeting the highest-risk applications specifically.

This California Assembly Privacy and Consumer Protection Committee analysis examines SB 1047, which would impose comprehensive regulatory requirements on developers of frontier AI models costing $100M+ to train. The bill mandates governance programs, third-party audited risk assessments, whistleblower protections, and creates a new Division of Frontier Models for enforcement. It represents one of the most ambitious state-level AI safety regulatory efforts in the United States.

Wikipedia article covering California's SB 1047, a landmark proposed state-level AI safety bill that would have imposed safety obligations on developers of large frontier AI models. The bill required safety testing, incident reporting, and the ability to shut down AI systems, but was ultimately vetoed by Governor Gavin Newsom in September 2024. It represents a significant early attempt at comprehensive state-level AI regulation.

TechCrunch covers the California legislature's passage of SB 1047, a landmark AI safety bill targeting large frontier models. The bill imposes safety obligations on developers of powerful AI systems, while major tech companies and industry groups argue it will stifle innovation and push AI development out of California.

SB 1047 is California's 2024 landmark legislation requiring frontier AI model developers to implement safety protocols, maintain shutdown capabilities, and produce detailed safety documentation before training covered models. It establishes oversight through the California Department of Technology and creates liability frameworks for developers whose models cause specified harms. Though ultimately vetoed by Governor Newsom, it represents one of the most significant state-level AI regulatory efforts and shaped subsequent AI governance debates.

A Brookings Institution analysis defending California's AI safety bill (SB 1047) against what the authors characterize as misleading criticisms from opponents. The piece examines specific claims made against the bill and argues they distort the legislation's actual requirements and scope. It provides a fact-checking perspective on the public debate surrounding frontier AI regulation at the state level.

Andreessen Horowitz presents its analysis and opposition to California's SB 1047 AI safety bill, arguing that the legislation would stifle innovation, harm open-source AI development, and impose impractical liability on developers. The discussion covers the bill's key provisions, a16z's objections, and the broader implications for AI governance in the US.

California Governor Gavin Newsom vetoed SB 1047, a landmark AI safety bill that would have imposed safety requirements on developers of large frontier AI models. The veto highlighted deep divisions within Silicon Valley between AI safety advocates and tech industry opponents, and raised broader questions about state-level AI governance.

Orrick law firm provides a legal analysis of California's SB 1047, a landmark bill proposing safety requirements for developers of large frontier AI models. The piece outlines key provisions including safety assessments, incident reporting, and kill-switch requirements, as well as potential compliance burdens and constitutional questions. It serves as a practical overview for businesses navigating the proposed regulatory landscape.

Senator Scott Wiener (sd11.senate.ca.gov · Government)

Official webpage of California State Senator Scott Wiener, who represents San Francisco and is a prominent legislator on AI safety and technology policy. Wiener has authored significant AI-related legislation including SB 1047, a major bill targeting frontier AI model safety requirements. His office is a key locus of state-level AI governance activity in the United States.

This Morgan Lewis legal analysis examines California's SB 1047, a proposed bill that would impose safety obligations on developers of large frontier AI models. The piece outlines key compliance requirements, liability provisions, and potential implications for AI companies operating in or developing models for California markets.

An open letter responding to Y Combinator and Andreessen Horowitz's opposition to AI safety legislation, arguing that safety regulations for frontier AI models are necessary and compatible with innovation. The letter counters claims made by prominent venture capital firms that state-level AI safety bills are harmful, presenting a pro-regulation perspective from AI safety advocates.

This Carnegie Endowment analysis examines California's SB 1047 bill, which sought to impose safety requirements on developers of large frontier AI models. It explores the policy debate around state-level AI regulation, the bill's key provisions, stakeholder reactions, and its implications for the broader U.S. AI governance landscape after Governor Newsom vetoed it.

Campaign website advocating for California's SB 1047 AI safety legislation, urging Governor Newsom to sign the bill into law. The site aggregates supporting letters, statements from academics, AI experts, and whistleblowers, and calls public attention to the political dynamics around AI safety regulation in California.

California Senate Bill 1047 proposed sweeping safety requirements for developers of large frontier AI models, including mandatory safety plans, third-party audits, whistleblower protections, and the ability to shut down AI systems. Though passed by the legislature in 2024, it was vetoed by Governor Newsom, making it a landmark but ultimately unsuccessful attempt at state-level AI governance.

California Senate Bill 1047 proposed safety requirements for developers of large frontier AI models, including mandatory safety testing, incident reporting, and the ability to shut down AI systems. The bill passed both chambers of the California legislature in 2024 but was ultimately vetoed by Governor Newsom in September 2024.

California SB-1047 was a landmark proposed state law requiring developers of large frontier AI models to implement safety protocols, conduct pre-deployment safety evaluations, and establish shutdown capabilities. The bill aimed to hold AI developers liable for harms caused by models exceeding certain compute thresholds. It passed the legislature but was vetoed by Governor Newsom in September 2024.

This page records the legislative votes for California SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, tracking its passage through the California legislature from introduction in February 2024 through enrollment in September 2024. The bill represented a landmark attempt at state-level regulation of frontier AI models, requiring safety protocols and oversight measures from large AI developers. It was ultimately vetoed by Governor Newsom despite passing both chambers.

This Fortune article covers the intense political and industry debate surrounding California's SB 1047, a landmark AI safety bill that would impose safety requirements on developers of large AI models. The piece frames the conflict as a battle between AI safety advocates pushing for regulation and tech industry opponents warning of innovation harm, capturing a pivotal moment in US AI governance.

★★★☆☆

This CSET article examines California's SB 1047, a landmark AI safety bill that sparked intense debate between AI safety advocates who supported its liability and safety requirements and tech industry developers who opposed it as overly burdensome. The piece analyzes the competing arguments and political dynamics that made the bill highly controversial before Governor Newsom ultimately vetoed it.

★★★★☆

This CalMatters article covers California's SB 1047, a landmark AI safety bill that passed the state legislature and was sent to Governor Gavin Newsom for signature or veto. The bill would require developers of large AI models to implement safety measures and conduct risk assessments before deployment. It represented one of the most significant state-level AI regulatory efforts in the United States.

California Governor Gavin Newsom vetoed SB 1047, a landmark AI safety bill that would have imposed safety requirements on large AI models. The veto was controversial, drawing criticism from AI safety advocates and praise from major tech companies who argued the bill was overly broad and could stifle innovation.

California Governor Gavin Newsom vetoed SB 1047 on September 29, 2024, a landmark AI safety bill authored by State Senator Scott Wiener that would have required companies developing large AI models to implement safety protocols to prevent 'critical harms.' The veto halted what would have been one of the most significant state-level AI regulatory frameworks in the US, applying liability to developers of frontier AI models above certain compute thresholds.

★★★☆☆

Zvi Mowshowitz analyzes California Governor Gavin Newsom's veto of SB 1047, a landmark AI safety bill that would have imposed safety requirements on large AI models. The post examines the reasoning behind the veto, the political dynamics involved, and what the outcome means for AI governance efforts more broadly.

★★☆☆☆

California Senator Scott Wiener rebuts OpenAI's opposition to SB 1047, noting that OpenAI did not criticize any specific provision of the bill and instead argued for federal action. Wiener defends the bill's requirements for safety evaluations and shutdown capabilities on large AI models, dismissing concerns about companies leaving California and citing national security endorsements from senior defense officials.

California State Senator Scott Wiener pushes back against OpenAI's opposition to SB 1047, an AI safety bill requiring safety evaluations and liability for developers of large AI models. Wiener argues that OpenAI's objections are inconsistent with the company's stated mission and public safety commitments. The article highlights the tension between AI industry self-regulation preferences and legislative oversight efforts.

★★★☆☆

Anthropic argues that governments have an urgent 18-month window to implement narrowly-targeted AI regulation to mitigate catastrophic risks—particularly in cyber and CBRN domains—while preserving innovation. The post outlines principles for risk-focused policy, citing rapid capability advances as evidence that surgical regulation is needed soon.

★★★★☆

Former House Speaker Nancy Pelosi publicly opposed California's SB 1047, a landmark AI safety bill that would have imposed pre-deployment safety requirements on large AI models. Her opposition, citing concerns about innovation and regulatory overreach, contributed to the broader political debate that ultimately led Governor Newsom to veto the bill.

Zvi Mowshowitz provides a comprehensive guide and analysis of California's SB 1047, a landmark AI safety bill that would impose safety requirements on large AI model developers. The post examines the bill's provisions, likely impacts, and the debate surrounding it in the AI safety and tech communities.

★★☆☆☆
37. SB 1047: Our Side Of The Story · astralcodexten.com · Scott Alexander

Scott Alexander presents the case in favor of California's SB 1047 AI safety bill from the perspective of supporters, responding to criticisms and explaining the rationale behind the legislation's approach to regulating large AI models. The piece contextualizes the debate around AI governance and the difficulty of passing meaningful safety-oriented legislation.

An 80,000 Hours podcast episode featuring Nathan Calvin discussing California's SB 1047 AI safety bill, which would impose safety requirements on large AI model developers. The conversation covers the bill's provisions, the debate around its merits and risks, and its potential implications for broader US AI governance and policy.

★★★☆☆

This edition of the Center for AI Safety's newsletter covers California's AI legislation landscape, analyzing key bills and their implications for AI safety governance. It examines proposed regulations aimed at managing risks from advanced AI systems at the state level.

★★★★☆

This edition of the CAIS AI Safety Newsletter covers California Governor Gavin Newsom's veto of SB 1047, a landmark AI safety bill that would have imposed safety requirements on large AI models, and discusses the veto's implications for AI governance and the broader AI safety policy landscape.

★★★★☆

A 30-minute documentary examining California's SB-1047 AI safety bill through exclusive interviews with key stakeholders including bill sponsor Scott Wiener, Dan Hendrycks of the Center for AI Safety, tech industry opponents, and youth advocates. The film chronicles the political battle over AI regulation that culminated in Governor Newsom's veto, capturing both the arguments for proactive AI safety legislation and the Silicon Valley opposition that helped defeat it.

A podcast episode from The Cognitive Revolution in which Nathan Labenz and three experts provide detailed analysis of California's SB 1047 AI safety bill, examining its updated provisions, implications for frontier AI developers, and the broader debate over AI governance. The discussion covers both the potential benefits and drawbacks of state-level AI regulation as a model for safety oversight.

Following the veto of California's SB 1047 AI safety bill, a New York state legislator is introducing similar legislation that would impose safety requirements on developers of large AI models. The article covers the political and regulatory landscape around state-level AI governance efforts in the United States.

★★★★☆

Stanford HAI provides an expert analysis of California's proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), examining its provisions for regulating large AI models, safety requirements, and compliance mechanisms. The analysis evaluates potential benefits, drawbacks, and implementation challenges of the legislation as a landmark state-level AI governance effort.

★★★★☆

This resource appears to be an article on Lawfare Media analyzing California's SB 1047 bill and its implications for frontier AI safety regulation, but the page returns a 404 error and the content is no longer accessible.

★★★★☆

Structured Data

All Facts: Requirements (2)

| Property | Value | As Of |
|---|---|---|
| Compute Threshold | 10^26 FLOP | Feb 2024 |
| Cost Threshold | $100 million | Feb 2024 |
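
To make these thresholds concrete, here is a minimal sketch, assuming the widely used ~6 × parameters × tokens approximation for dense-transformer training FLOP (a rule of thumb, not anything in the bill's text) and following this page's framing of the two triggers as alternatives; the bill's enrolled definition had additional fine print. All model figures below are hypothetical.

```python
# Minimal sketch (not from the bill's text) of how a developer might
# estimate whether a training run crosses SB 1047's thresholds, using
# the common ~6 FLOP per parameter per token approximation for dense
# transformer training. Illustrative numbers only.

COMPUTE_THRESHOLD_FLOP = 1e26   # SB 1047 compute trigger
COST_THRESHOLD_USD = 100e6      # SB 1047 cost trigger

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 * N * D."""
    return 6.0 * n_params * n_tokens

def is_covered_model(n_params: float, n_tokens: float,
                     training_cost_usd: float) -> bool:
    """True if the run would cross either trigger, per this page's framing."""
    return (
        estimated_training_flop(n_params, n_tokens) >= COMPUTE_THRESHOLD_FLOP
        or training_cost_usd >= COST_THRESHOLD_USD
    )

# Hypothetical 400B-parameter model trained on 40T tokens at $80M:
flop = estimated_training_flop(400e9, 40e12)  # ~9.6e25 FLOP, just under 1e26
print(f"{flop:.2e} FLOP -> covered: {is_covered_model(400e9, 40e12, 80e6)}")
```

As the example shows, a run can sit just under the compute trigger while also staying under the cost trigger, which is why critics and supporters alike focused on exactly where these size-based lines were drawn.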

Related Wiki Pages

Top Related Pages

Organizations

Meta AI (FAIR) · Future of Life Institute · Center for AI Safety · FAR AI · Redwood Research · Google DeepMind

Risks

AI Proliferation

Approaches

Pause Advocacy

Analysis

US State AI Legislation Landscape · Failed and Stalled AI Proposals

Policy

Voluntary AI Safety Commitments · Colorado Artificial Intelligence Act

Other

Elon Musk · Scott Wiener

Concepts

Compute Governance · State Capacity and AI Governance

Key Debates

AI Governance and Policy · Government Regulation vs Industry Self-Governance