California SB 1047
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act
California's SB 1047 would have required safety testing, shutdown capabilities, and third-party audits for AI models exceeding 10^26 FLOP or $100M in training cost. It passed the legislature (Assembly 48-16, Senate 32-1) but was vetoed on September 29, 2024, with Governor Newsom citing concerns about size-based rather than risk-based regulation. The bill's legislative success demonstrated the political feasibility of frontier AI regulation, while its veto revealed formidable industry opposition (from OpenAI, Anthropic, Google, and Meta) and a preference for federal approaches.
Summary
SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was California state legislation that would have required safety testing and liability measures for developers of the most powerful AI models.
The bill passed the California legislature but was vetoed by Governor Gavin Newsom on September 29, 2024.
SB 1047 was the most significant AI safety legislation attempted in the United States to date. Its passage through the legislature demonstrated growing political willingness to regulate frontier AI, while its veto illustrated the political challenges such regulation faces.
Quick Assessment
| Dimension | Assessment | Evidence |
|---|---|---|
| Tractability | Medium | Passed legislature (Assembly 48-16, Senate 32-1) but vetoed; demonstrated political feasibility with refinement |
| Effectiveness | Medium-High (if enacted) | Would have created enforceable requirements for frontier AI safety testing, shutdown capabilities, and incident reporting |
| Political Viability | Low-Medium | Strong industry opposition including safety-focused labs; governor cited innovation concerns; federal approach preferred |
| Enforcement Mechanism | Strong | Attorney General enforcement with civil penalties up to 10% of training costs; whistleblower protections; mandatory auditing |
| Coverage | Narrow | Only frontier models >10^26 FLOP or >$100M training cost; exempted open-source and academic research |
| Status | Vetoed (Sept 29, 2024) | Legislative success followed by executive veto; precedent for future state/federal legislation |
Key Links
| Source | Link |
|---|---|
| Official Website | safesecureai.org |
| Bill Text (Enrolled) | CA Legislature: SB-1047 Full Text |
| Bill Status | CA Legislature: SB-1047 Status |
| Committee Analyses | CA Legislature: SB-1047 Bill Analysis |
| Vote Records | CA Legislature: SB-1047 Votes |
| Bill Navigation | CA Legislature: SB-1047 Overview |
| Veto Message | Governor Newsom's Veto Message (PDF) |
| Expert Panel Report | The California Report on Frontier AI Policy (June 2025) |
| Bill Tracking | FastDemocracy: SB 1047 Tracker |
What the Bill Proposed
Scope: "Covered Models"
The bill would have applied to AI models meeting any of these criteria:
Training Compute:
- Trained using >10^26 FLOP (floating-point operations)
- Approximately GPT-4.5/Claude 3 Opus scale or larger
- Threshold aligned with US Executive Order 14110 on AI safety
Training Cost:
- Cost >$100 million to train
- Adjusted annually for inflation
- At current cloud compute prices, 10^26 FLOP costs approximately $70-100 million (Anthropic estimate)
Fine-tuned Models:
- Fine-tuning cost >$10 million
- Based on a covered model
Why these thresholds?
- Target only frontier models from well-resourced labs
- Exclude open-source models and academic research
- Align with international compute governance efforts (US EO, EU AI Act)
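The coverage test and the compute-cost relationship behind these thresholds can be sketched in a few lines. The GPU throughput, utilization, and hourly price below are illustrative assumptions for a back-of-envelope estimate, not figures from the bill:

```python
# Back-of-envelope sketch of SB 1047's "covered model" thresholds.
# Hardware throughput, utilization, and price are assumed values.

FLOP_THRESHOLD = 1e26          # training-compute trigger
COST_THRESHOLD = 100_000_000   # training-cost trigger (USD)

def is_covered(training_flop: float, training_cost_usd: float) -> bool:
    """A model is 'covered' if it meets EITHER threshold."""
    return training_flop > FLOP_THRESHOLD or training_cost_usd > COST_THRESHOLD

def estimated_training_cost(total_flop: float,
                            gpu_flops: float = 1e15,     # ~1 PFLOP/s peak (assumed)
                            utilization: float = 0.45,   # assumed effective utilization
                            usd_per_gpu_hour: float = 1.5) -> float:
    """Rough cost: total FLOP / effective throughput, priced per GPU-hour."""
    gpu_seconds = total_flop / (gpu_flops * utilization)
    return gpu_seconds / 3600 * usd_per_gpu_hour

cost = estimated_training_cost(1e26)
print(f"${cost / 1e6:.0f}M")        # prints "$93M" under these assumptions
print(is_covered(1e26 + 1, 0))      # True: compute alone triggers coverage
```

Under slightly different hardware and pricing assumptions the same calculation lands in the $70-100M range cited above, which is why the two thresholds were designed to track each other.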
Provisions Comparison Table
| Requirement Category | Specific Provision | Timing | Penalty for Non-Compliance | Comparison to Alternatives |
|---|---|---|---|---|
| Safety Testing | Red-team testing for CBRN weapons, cyber attacks >$500M damage, autonomous operation | Before deployment or third-party access | Civil penalties up to 10% of training costs | Stricter than voluntary commitments (Anthropic RSP, OpenAI Preparedness); similar scope to US EO requirements |
| Shutdown Capability | Full shutdown of all instances including during training | Before beginning training | AG enforcement + injunctive relief | Unique requirement; not in US EO, EU AI Act, or industry frameworks |
| Cybersecurity | Protection of model weights from theft; secure infrastructure; incident response | Before training begins | Civil liability for security breaches | Similar to US EO reporting but with enforcement teeth |
| Third-Party Auditing | Annual independent audits starting Jan 1, 2026; 5-year record retention | Annually after Jan 1, 2026 | Civil penalties for audit failures | More stringent than US EO (voluntary); weaker than the EU AI Act's ongoing conformity obligations |
| Incident Reporting | Report AI safety incidents to AG within 72 hours | Within 72 hours of incident | Civil penalties + potential criminal referral | Faster timeline than US EO (unspecified); AG enforcement vs. federal agencies |
| Whistleblower Protection | Prohibit retaliation; anonymous reporting process; 7-year complaint retention | Immediate; ongoing | Labor Commissioner enforcement + civil damages | Stronger than industry standards; similar to federal whistleblower laws |
| Compute Cluster Reporting | Clusters >10^26 ops/second must report to state; customer information required | Ongoing for CA-based clusters | Civil penalties for non-reporting | Similar to US EO compute reporting but state-level jurisdiction |
| Liability Framework | Affirmative defense for compliance; AG can sue for violations causing harm | Post-deployment if harm occurs | Up to 10% of training costs + damages | Softer than strict liability (original draft); stronger than status quo |
Bill Structure and Enforcement Framework
The following diagram illustrates how SB 1047 would have regulated frontier AI development, from initial determination through deployment and enforcement:
flowchart TD
A[AI Developer Plans Model Training] --> B{Will model meet<br/>covered thresholds?}
B -->|Yes:<br/>>10^26 FLOP OR<br/>>$100M cost| C[Covered Model Path]
B -->|No| Z[Exempt - No Requirements]
C --> D[Pre-Training Requirements]
D --> D1[Implement Shutdown Capability]
D --> D2[Document Safety Protocol]
D --> D3[Establish Cybersecurity]
D1 & D2 & D3 --> E[Begin Training]
E --> F[Pre-Deployment Testing]
F --> F1[CBRN Capability Testing]
F --> F2[Cyber Attack Testing<br/>>$500M damage threshold]
F --> F3[Autonomous Operation Testing]
F --> F4[Self-Improvement Testing]
F1 & F2 & F3 & F4 --> G{Hazardous<br/>Capabilities<br/>Found?}
G -->|Yes| H[Enhanced Safety Measures<br/>+ AG Notification]
G -->|No| I[Standard Deployment]
H --> J[Deployment with Monitoring]
I --> J
J --> K[Ongoing Obligations]
K --> K1[Annual Third-Party Audit<br/>starting Jan 1, 2026]
K --> K2[72-Hour Incident Reporting]
K --> K3[Whistleblower Protections]
K --> K4[5-Year Record Retention]
K1 & K2 & K3 & K4 --> L{Compliance?}
L -->|Yes| M[Affirmative Defense<br/>from Liability]
L -->|No| N[Enforcement Actions]
N --> N1[Attorney General Investigation]
N1 --> N2{Violation<br/>Confirmed?}
N2 -->|Yes| N3[Civil Penalties<br/>up to 10% training costs]
N2 -->|No| M
J --> O{Safety Incident<br/>Occurs?}
O -->|Yes| P[72-Hour Report to AG]
P --> N1
O -->|No| L
style C fill:#ffe6e6
style D fill:#fff0e6
style F fill:#fff8e6
style G fill:#ffffe6
style H fill:#ffe6e6
style N fill:#ffcccc
style N3 fill:#ff9999
style M fill:#e6ffe6
style Z fill:#e6f3ff
Key Enforcement Mechanisms:
The diagram shows three primary enforcement pathways in SB 1047:
- Preventive Compliance Path (top): Developers who implement all requirements and pass audits receive affirmative defense from liability
- Incident Response Path (bottom): Safety incidents trigger mandatory 72-hour reporting and Attorney General investigation
- Penalty Path (right): Non-compliance or violation results in civil penalties up to 10% of training costs plus potential injunctive relief
The bill created overlapping accountability through third-party auditing (annual), whistleblower protections (continuous), and incident reporting (reactive), ensuring multiple mechanisms to detect non-compliance.
Core Requirements
1. Safety Testing Before Training
Pre-Training Requirements:
Developers must:
- Determine whether model will be a "covered model"
- Implement safety protocols before beginning training
- Establish shutdown procedures
Covered Model Determination:
If expected to meet thresholds:
- Document safety plan
- Prepare for testing requirements
- Establish compliance measures
2. Hazardous Capability Testing
Required Testing:
Before deployment or making available to third parties, test for:
Critical Harm Capabilities:
- Creation of chemical, biological, radiological, or nuclear weapons (CBRN)
- Mass casualty cyber attacks (>$500M damage or mass casualties)
- Autonomous operation and self-exfiltration
- Self-improvement and recursive self-modification
Testing Methods:
- Red-team testing
- Adversarial probing
- Capability evaluations
- Third-party auditing
Threshold: Model enables non-expert to cause mass casualties or >$500M in damage.
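The testing requirement above amounts to a structured record of hazard evaluations plus a decision rule. The sketch below illustrates that logic; the class names, fields, and example figures are hypothetical, though the $500M damage trigger and the harm categories track the bill's "critical harm" definitions:

```python
# Hypothetical sketch of a pre-deployment hazard-testing record and the
# decision rule SB 1047 would have implied. Data structure is illustrative.
from dataclasses import dataclass, field

CYBER_DAMAGE_THRESHOLD_USD = 500_000_000  # bill's cyberattack damage trigger

@dataclass
class HazardTest:
    category: str                    # e.g. "CBRN", "cyber", "autonomy"
    method: str                      # e.g. "red-team", "capability eval"
    enables_non_expert_harm: bool    # could a non-expert cause mass casualties?
    estimated_damage_usd: float = 0.0

@dataclass
class TestingRecord:
    model_name: str
    tests: list[HazardTest] = field(default_factory=list)

    def hazardous_capability_found(self) -> bool:
        """True if any test shows a critical-harm capability — the condition
        that would trigger enhanced safety measures and AG notification."""
        return any(
            t.enables_non_expert_harm
            or t.estimated_damage_usd > CYBER_DAMAGE_THRESHOLD_USD
            for t in self.tests
        )

record = TestingRecord("frontier-model-x", [
    HazardTest("CBRN", "red-team", enables_non_expert_harm=False),
    HazardTest("cyber", "capability eval", False, estimated_damage_usd=6e8),
])
print(record.hazardous_capability_found())  # True: cyber estimate exceeds $500M
```

A finding of `True` here would route the developer down the "Enhanced Safety Measures + AG Notification" branch of the enforcement flow rather than standard deployment.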
3. Safety and Security Protocol
Required Measures:
Developers must implement:
Cybersecurity:
- Protection of model weights from theft
- Secure infrastructure
- Incident response plans
Shutdown Capability:
- Full model shutdown ability
- Separate from safety fine-tuning
- Effective on all deployed instances
Ongoing Monitoring:
- Detection of hazardous use
- Capability creep tracking
- Post-deployment evaluation
Documentation:
- Written safety protocol
- Regular updates
- Public summary (redacted for security)
4. Whistleblower Protections
Employee Rights:
Protected disclosures about:
- Safety violations
- Unreasonable risk to public
- Non-compliance with the act
Prohibitions:
- Cannot retaliate against whistleblowers
- Cannot require non-disclosure preventing safety reports
- Civil penalties for violations
5. Frontier Model Division
New State Agency:
Created within California Government Operations Agency:
- Oversee compliance
- Receive safety protocols
- Investigate violations
- Issue guidance
Powers:
- Subpoena authority
- Civil penalty assessment
- Emergency orders
6. Liability Framework
Affirmative Defense:
Developers protected from liability if:
- Complied with all safety requirements
- Conducted reasonable testing
- Implemented safety protocols
- Acted in good faith
Strict Liability Removed:
The bill does NOT create automatic liability for harms; the state must prove negligence or non-compliance.
Attorney General Enforcement:
California AG can sue for:
- Violations of safety requirements
- Civil penalties up to 10% of training costs
- Injunctive relief
7. Compute Cluster Reporting
Reporting Requirement:
Owners of computing clusters with:
- >10^26 integer or floating-point operations per second
- Located in California
Must report to:
- Frontier Model Division
- Information about cluster
- Customers using cluster
Purpose: Track who has capability to train covered models.
Exemptions and Safe Harbors
Explicitly Exempted:
- Open-source models (unless developer makes $50M+/year from derivatives)
- Academic research
- Models below thresholds
- Government use
Safe Harbor:
- Compliance with safety requirements provides affirmative defense
- Good faith efforts protected
Limitations and Criticisms
Technical Limitations
| Limitation | Description | Counterargument |
|---|---|---|
| Size-based thresholds become obsolete | Algorithmic efficiency improvements (e.g., mixture-of-experts, distillation) mean smaller models can match larger ones; 10^26 FLOP threshold may be irrelevant within 2-3 years | Thresholds can be updated; captures current frontier models that pose near-term risk |
| Capability testing is imprecise | Red-team testing for CBRN/cyber capabilities is nascent; false negatives possible; evaluation science not mature | Perfect shouldn't be enemy of good; creates incentive to develop better evals |
| Shutdown capability may be technically infeasible | Distributed deployments, fine-tuned derivatives, and open-weight releases make "full shutdown" difficult to guarantee | Requirement applies pre-deployment; creates accountability for distribution decisions |
| Compliance verification is challenging | Difficult for state agency to verify claims about training compute, safety testing, or model capabilities | Third-party auditing and whistleblower protections create multiple accountability mechanisms |
Policy Limitations
| Limitation | Description | Counterargument |
|---|---|---|
| State-level jurisdiction is limited | AI development is interstate and international; California cannot regulate models trained elsewhere | California hosts the majority of US AI development; creates a de facto national standard (as CCPA did for privacy) |
| Enforcement capacity uncertain | Frontier Model Division (later removed) would have needed significant expertise and resources | AG enforcement model proven effective in other contexts; whistleblower protections supplement |
| Exemptions may be too broad | Open-source exemption (unless $50M+ revenue) could allow dangerous models to proliferate | Targets commercial frontier development where most risk concentrates; academic/hobbyist use low-risk |
| Narrow scope misses risks | Only covers frontier models; smaller models in high-risk applications (medical, legal, critical infrastructure) not addressed | Designed to complement other regulations (AB 2885, SB 896) addressing deployment contexts |
Political Limitations
| Limitation | Description |
|---|---|
| Industry opposition unified | Even safety-focused labs (Anthropic) opposed, undermining "industry wants reasonable regulation" narrative |
| Federal preemption risk | Future federal AI legislation could override state requirements, making compliance investments wasted |
| Innovation narrative dominates | Economic arguments about $200B+ California AI industry and 300,000+ jobs proved more politically salient than safety concerns |
| Coalition too narrow | Limited engagement from labor, civil rights, and consumer groups who might have provided broader political support |
Criticisms from AI Safety Community
Some AI safety researchers opposed SB 1047 despite sharing its goals:
- Premature codification: Locking in specific requirements (10^26 FLOP, specific capability tests) before evaluation science matures could create compliance theater without actual safety
- Wrong target: Focus on model capabilities rather than deployment context may miss where risks actually materialize
- Opportunity cost: Political capital spent on failed state bill could have supported federal efforts or voluntary frameworks with broader buy-in
- Open-source concerns: Despite exemptions, uncertainty about derivative liability could chill beneficial open research
Path Through Legislature
Initial Introduction (February 2024)
Original Sponsors:
- Senator Scott Wiener (D-San Francisco), representing District 11 (San Francisco tech corridor)
- Co-sponsored by AI safety organizations including Center for AI Safety
- Support from AI safety advocates and researchers
- Immediately opposed by major AI companies and some researchers, sparking fierce national debate
- Official bill text introduced February 7, 2024
- CSET Georgetown's early analysis noted the bill quickly became "a lightning rod for safety advocates and developers alike"
Amendment Process
Major Changes:
- Narrowed scope to truly frontier models (>10^26 FLOP or >$100M)
- Added safe harbors and affirmative defenses for compliant developers
- Reduced liability provisions (removed strict liability; kept negligence standard)
- Clarified open-source exemptions (unless developer earns >$50M/year from derivatives)
- Specified hazardous capabilities more precisely (CBRN, >$500M cyber damage)
- Removed Frontier Model Division and criminal penalties in August 2024 amendments
Purpose of Amendments:
- Address industry concerns about overbreadth and compliance costs
- Balance innovation incentives with safety requirements
- Build bipartisan coalition for passage
- Respond to >50 stakeholder comments during committee process
Legislative Passage
August 29, 2024: Passed California Legislature (official vote records; CalMatters coverage)
- Assembly: 48-16 (75% approval)
- Senate: 32-1 (97% approval), concurrence 30-9
- Bipartisan support across party lines
- Most significant AI legislation to pass any US state legislature
- Represented months of amendments responding to >50 industry comments
- Final version removed criminal penalties and Frontier Model Division creation
Veto (September 29, 2024)
The veto was extensively covered by major outlets including the New York Times, Washington Post, The Verge, SF Standard, and TechCrunch, with policy analysis from CSET Georgetown and the Carnegie Endowment. Zvi Mowshowitz's real-time analysis of the veto was widely shared in the AI safety community.
Governor Newsom's Rationale:
From Newsom's official veto message:
"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it."
Additional concerns: "Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047."
Specific Concerns:
- Focus on model size rather than deployment context
- Could stifle innovation in California's tech sector
- Regulatory approach not nuanced enough (described as not "informed by an empirical trajectory analysis")
- Preferred federal regulation given interstate nature of AI
Accompanying Actions:
Newsom simultaneously:
- Signed 18 other AI bills on narrower topics (deepfakes, discrimination, transparency)
- Called for federal AI legislation to address interstate nature of technology
- Committed to working with legislature on alternative approaches
- Convened expert panel including Fei-Fei Li (Stanford), Tino Cuéllar (Carnegie Endowment), and Jennifer Tour Chayes (UC Berkeley) to develop "empirical, science-based trajectory analysis"
Support and Opposition
Supporters
AI Safety Organizations:
- Center for AI Safety
- Future of Life Institute
- AI safety researchers
Arguments:
- Frontier models pose catastrophic risks
- Industry self-regulation insufficient
- California can lead on AI safety
- Requirements are reasonable and achievable
Notable Individual Supporters:
- Yoshua Bengio (2018 Turing Award winner)
- Geoffrey Hinton (2018 Turing Award winner, "Godfather of AI")
- Stuart Russell (UC Berkeley professor, author of leading AI textbook)
- Max Tegmark (MIT professor, founder of Future of Life Institute)
- Elon Musk (xAI CEO, publicly endorsed the bill)
- 113+ current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI (September 9, 2024 letter to Governor Newsom)
Opponents
Major AI Companies:
- OpenAI (initially opposed; later neutral) — Senator Wiener publicly responded to OpenAI's opposition, calling it inconsistent (covered by TechCrunch)
- Anthropic (opposed the original version; said the amended bill's "benefits likely outweigh its costs" while remaining uncertain; later published The Case for Targeted Regulation outlining its preferred regulatory framework)
- Google/DeepMind (opposed)
- Meta (strongly opposed)
- Combined market value of opposing companies: >$3 trillion
Arguments:
- Stifles innovation in California's $200+ billion AI industry
- Drives development out of California (threatening 300,000+ tech jobs)
- Premature to regulate models that don't yet exist
- Better to focus on use cases than model capabilities (size-based vs. risk-based regulation)
- Federal regulation more appropriate for interstate technology
Venture Capital:
- Y Combinator
- Andreessen Horowitz
- Others concerned about startup ecosystem impact
Some Researchers:
- Yann LeCun (Meta, Turing Award winner)
- Andrew Ng (Stanford, Google Brain co-founder)
- Fei-Fei Li (Stanford)
Concerns:
- Open-source implications despite exemptions
- Compliance costs for startups
- Regulatory overreach
- Vague standards
Labor and Progressive Groups:
- Some supported
- Some concerned it didn't address labor impacts enough
Why It Was Vetoed
Stated Reasons (Governor Newsom)
Size-Based vs. Risk-Based:
- Bill focuses on model size (compute/cost) not deployment risks
- Small models in high-risk contexts not covered
- Large models in benign contexts over-regulated
Innovation Concerns:
- California is hub of AI development
- Regulation could drive companies elsewhere
- Startups face compliance burdens
Federal Action Preferable:
- AI transcends state borders
- National framework more appropriate
- International coordination needed
Political Analysis
Industry Pressure:
- Major AI companies lobbied heavily against
- Economic arguments about California's AI ecosystem
- Threat of relocation
Presidential Politics:
- Biden administration developing AI policy
- Harris (VP, former CA Senator) in presidential race
- National Democratic messaging on tech
Tactical Considerations:
- Newsom signed 18 other AI bills simultaneously
- Positioned as pro-innovation, pro-safety balance
- Left door open for future iteration
Lack of Coalition:
- Many Democrats skeptical — Nancy Pelosi and CA Congressional Democrats urged Newsom to veto (see Pelosi's official statement)
- Republicans opposed
- Labor not fully engaged
- Insufficient grassroots pressure
Unstated Factors (Analysis)
Economic:
- California tech industry contributes $200+ billion annually to state GDP
- AI-focused companies employ 300,000+ workers in California
- Competing jurisdictions (Texas, Florida, international) actively recruiting AI companies
- Tech industry contributes 15-20% of California's general fund revenue
- Estimated compliance costs for SB 1047: $10-50M per covered model annually (industry estimates)
Policy:
- Precedent-setting implications
- Uncertainty about effectiveness
- Implementation challenges
Political:
- Presidential election dynamics
- Tech industry relationships
- Future political ambitions
Implications for AI Safety Regulation
What SB 1047 Demonstrated
Political Will Exists:
- Bipartisan legislative passage showed AI safety resonates
- Not just fringe concern but mainstream political issue
- Legislators willing to regulate despite industry opposition
Industry Opposition is Formidable:
- Even safety-focused companies (Anthropic) opposed
- Economic arguments effective
- Innovation framing powerful
Federal vs. State Tension:
- AI is inherently interstate and international
- State-level regulation faces jurisdictional limits
- But federal action is slow
Details Matter:
- Size-based vs. risk-based framing was central
- Specific thresholds and requirements heavily debated
- Implementation details crucial to political viability
Lessons for Future Efforts
What Worked
Focused Scope:
- Targeting only frontier models built support
- Exemptions for open-source and research
- Concrete thresholds (compute, cost)
Safety Framing:
- Catastrophic risk resonated
- Whistleblower protections popular
- Bipartisan appeal
Expert Endorsement:
- Turing Award winners lending credibility
- Technical community engagement
What Didn't Work
Industry Consensus:
- Even safety-concerned labs opposed
- Economic arguments effective
- Innovation framing won
Implementation Clarity:
- Vague enforcement mechanisms
- Uncertainty about compliance costs
- Questions about Frontier Model Division capacity
Coalition Building:
- Labor not fully engaged
- Grassroots support limited
- Competing priorities on left
Future Regulatory Approaches
State Level
Narrower Bills:
- Focus on specific harms (deepfakes, discrimination)
- Deployment context rather than model capabilities
- Procurement standards
Coordination:
- Multi-state coordination
- Uniform standards
- Regional compacts
California Iteration:
- Newsom committed to continued dialogue
- Future versions possible
- Refined approach incorporating feedback
Federal Level
Legislation:
- Comprehensive AI safety bill
- Build on Executive Order
- Bipartisan framework
Challenges:
- Congressional gridlock
- Lobbying pressure
- Competing priorities
International
Coordination Imperative:
- AI development global
- Race to the bottom risk
- Need for international standards
Precedents:
- EU AI Act as model
- UK approach
- Multilateral frameworks
Impact on AI Safety Movement
EA and AI Safety Organization Involvement
SB 1047 represented the most direct engagement of the EA-aligned AI safety community with state-level legislation. Zvi Mowshowitz's comprehensive guide to SB 1047 became one of the most widely read analyses in the AI safety community, while Scott Alexander's post-veto account of the campaign provided an inside perspective on the advocacy effort:
Direct organizational support:
- Center for AI Safety (CAIS) — Dan Hendrycks was a key organizer of support, leveraging CAIS's network of researchers and the 2023 Statement on AI Risk signatories
- Future of Life Institute — provided analysis and advocacy through their policy team
- The bill's core framing (catastrophic risk from frontier models, compute thresholds) closely aligned with EA-adjacent thinking on AI governance
The 113-employee letter: One of the most striking developments was the September 2024 letter to Governor Newsom signed by 113+ current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI. This was significant because:
- It showed safety-minded employees at frontier labs were willing to publicly break with their employers' positions
- The letter specifically cited catastrophic risk concerns aligned with EA safety arguments
- It demonstrated grassroots support within the AI industry that went beyond the organized safety community
EA Forum and LessWrong debate: SB 1047 generated extensive discussion within EA online communities, with coverage from Vox's Future Perfect and other EA-adjacent outlets. The 80,000 Hours podcast episode with Nathan Calvin provided a deep dive into the bill's development and the safety community's role, while the CAIS AI Safety Newsletter covered the bill's passage and the veto aftermath. Zvi Mowshowitz also published an early analysis on LessWrong that shaped initial community opinion. The debate centered on:
- Whether the bill's specific mechanisms (compute thresholds, capability testing) were well-designed
- Whether state-level legislation was the right venue for frontier AI regulation
- The strategic implications of Anthropic's mixed position — a safety-focused lab declining to fully support a safety bill
- Whether the political capital spent on SB 1047 could have been better deployed elsewhere
Positive Effects
Mainstream Attention:
- SB 1047 brought frontier AI risk into public discourse beyond the EA/AI safety community
- Generated extensive media coverage from NYT, Washington Post, Wired, Politico, and others
- The campaign was documented in a 31-minute documentary film and analyzed in multi-perspective podcast debates
- Created a concrete policy proposal that translated abstract x-risk concerns into legislative language
Overton Window Shift:
- Made AI regulation a mainstream political issue rather than a fringe concern
- Future regulatory proposals are now measured against SB 1047 as a reference point
- The bill's passage through the legislature (Assembly 48-16, Senate 32-1) proved that AI safety legislation can achieve overwhelming bipartisan support
Community Building:
- Built coalitions between AI safety researchers, policy advocates, and sympathetic legislators
- Developed political skills and institutional knowledge within the AI safety community
- The SB 1047 campaign infrastructure directly contributed to SB 53's eventual passage
Negative Effects
Backlash:
- Some researchers became more skeptical of legislative approaches to AI safety
- "Regulatory capture" accusations from both industry (regulation benefits incumbents) and open-source advocates (harms small developers)
- Polarization between "regulate now" and "wait for better science" camps within AI safety
Movement Division:
- The most consequential split was Anthropic's position — a company founded explicitly for AI safety opposing (then hedging on) a safety bill
- Tensions between deontological ("we must act now") and consequentialist ("will this actually reduce risk?") approaches to AI governance
- Open-source AI community alienated despite the bill's exemptions
Political Capital:
- The veto was framed by opponents as evidence that AI safety regulation is politically unviable
- Industry lobby appeared emboldened, with subsequent state AI bills facing increased opposition
- However, the SB 53 outcome complicates this narrative — the veto was a setback, not a defeat
Strategic Lessons
For the AI safety movement:
- Industry alignment has limits: Even safety-focused labs will oppose regulation that threatens their competitive position or operational flexibility
- State-level regulation works as an Overton window tool: SB 1047's passage made federal proposals seem more moderate by comparison
- Iterative beats comprehensive: The SB 1047 → SB 53 arc suggests incremental approaches achieve more durable outcomes than ambitious first attempts
- Employee voice matters: The 113-employee letter demonstrated that individual AI researchers can influence policy debates independent of their employers
Strategic Debates
Should SB 1047 Have Been Pursued?
Arguments For:
- Only way to test political viability
- Built coalition and momentum
- Shifted discourse even in defeat
Arguments Against:
- Premature; should have built more support first
- Better to focus on federal action
- Antagonized potential allies
What Should Come Next?
Double Down:
- Refine and reintroduce
- Build broader coalition
- Address veto concerns
Pivot to Federal:
- Focus energy on Congress
- Support Executive Order implementation
- International coordination
Focus on Narrower Wins:
- Procurement standards
- Use-case specific regulation
- Voluntary frameworks
Build Power:
- Grassroots organizing
- Labor coalition
- Public education
Lobbying and Industry Tactics
The opposition to SB 1047 involved sophisticated political tactics from the technology industry:
Lobbyist connections to Newsom:
- Andreessen Horowitz hired Axiom Advisors, led by Jason Kinney — a long-time Newsom advisor and dining companion (the "French Laundry connection"). a16z paid $39,750 for April-June 2024 lobbying work. a16z also backed the StopSB1047.com opposition campaign site.
- Y Combinator engaged in formal California lobbying for the first time, hiring Platinum Advisors led by Darius Anderson, another firm with close Newsom ties.
Push polling: The California Chamber of Commerce commissioned a push poll (August 9-12, 2024) using loaded language about "small startup companies" paying "tens of millions in fines from state bureaucrats." Results: 28% support, 46% oppose, 26% neutral. The poll was conducted by Adam Rosenblatt of Bold Decision. The poll questions never mentioned the bill's actual exemptions for small companies and open-source developers.
Campaign contribution context: Google was described as the biggest contributor to certain California representatives, with nearly $1 million in total campaign contributions, though these were not SB 1047-specific.
State-Level Copycat Legislation
SB 1047's influence extended beyond California even after its veto, inspiring frontier AI legislation in other states. As MIT Technology Review reported, New York legislators explicitly sought to "pick up the pieces" of the vetoed California bill:
| State | Bill | Status | Relationship to SB 1047 |
|---|---|---|---|
| New York | RAISE Act | Signed Dec 19, 2025 | Frontier model transparency and risk management; stricter penalties ($10M first offense, $30M repeat) |
| Texas | TRAIGA | Signed 2025 | Final version stripped to public sector only; original draft inspired by SB 1047 |
| Rhode Island | Frontier model bill | Introduced 2025 | Centered on catastrophic risk prevention |
| Michigan | Frontier model bill | Introduced 2025 | Similar frontier model regulation |
| Illinois | Frontier model bill | Introduced 2025 | Similar frontier model regulation |
The New York RAISE Act became the first state law directly comparable to SB 1047's ambitions, though California's SB 53 took a lighter-touch transparency approach. This divergence illustrates the ongoing policy debate: whether frontier AI regulation should focus on mandatory safety requirements (SB 1047 / NY RAISE) or transparency and reporting (SB 53).
Successor Legislation: SB 53 and the Expert Panel
Newsom's Expert Panel
Following the SB 1047 veto, Governor Newsom convened the Joint California Policy Working Group on AI Frontier Models, comprising leading AI researchers and policy experts:
- Fei-Fei Li (Stanford) — co-chair
- Tino Cuéllar (Carnegie Endowment for International Peace) — co-chair
- Jennifer Tour Chayes (UC Berkeley, Dean of Computing)
The panel released its recommendations in June 2025, which:
- Rejected a compute-threshold-only approach (the core SB 1047 mechanism)
- Endorsed a transparency-first framework with mandatory incident reporting
- Recommended creation of CalCompute, a state computing resource for AI safety research
- Advocated for risk-based rather than size-based regulatory triggers
SB 53: The Successor Bill
SB 53, also authored by Senator Scott Wiener, was signed into law by Governor Newsom on September 29, 2025 — exactly one year after the SB 1047 veto.
| Dimension | SB 1047 (Vetoed) | SB 53 (Signed) |
|---|---|---|
| Regulatory approach | Mandatory safety testing + liability | Transparency + incident reporting |
| Trigger mechanism | Compute threshold (10^26 FLOP) or cost ($100M) | Developer self-determination of frontier status |
| Penalties | Civil penalties up to 10% of training costs | Penalties for non-reporting; no pre-deployment requirements |
| Pre-deployment | Safety testing, shutdown capability required | No pre-deployment mandates |
| Reporting | 72-hour incident reporting to AG | Safety incident reporting + annual transparency reports |
| Auditing | Mandatory third-party audits | No mandatory audits |
| Whistleblower | Strong protections | Retained from SB 1047 |
| Kill switch | Required | Not required |
The 18 Companion Bills
When vetoing SB 1047, Newsom simultaneously signed 18 other AI bills, signaling support for narrower regulation:
Privacy, Data, and Transparency:
| Bill | Topic | Key Provision |
|---|---|---|
| AB 1008 | CCPA + AI | Extends CCPA coverage to personal information processed by AI |
| AB 2013 | AI training data transparency | Requires disclosure of datasets used to train AI (effective Jan 1, 2026) |
| AB 2885 | AI definition | Establishes uniform statutory definition for "artificial intelligence" |
| SB 942 | AI transparency | California AI Transparency Act; watermarking and AI content detection tools |
Protection Against Harmful AI Content:
| Bill | Topic | Key Provision |
|---|---|---|
| AB 1831 | Child safety | Expands child pornography statutes to cover AI-generated material |
| AB 1836 | Digital replicas | Protects deceased individuals from unauthorized AI replicas |
| AB 2602 | Performer protections | Requires consent for AI-generated replicas of performers |
| SB 926 | Deepfake crimes | Criminal penalties for creating/distributing sexually explicit deepfakes |
| SB 981 | Platform obligations | Social media must report and remove deepfake pornography |
| SB 1381 | Child safety | Manufacturing offenses involving minors (companion to AB 1831) |
Elections:
| Bill | Topic | Key Provision |
|---|---|---|
| AB 2355 | Campaign disclosure | Requires disclosure when campaign ads use AI technology |
| AB 2655 | Election deception | Platforms must block/label deceptive AI-generated election content |
| AB 2839 | Campaign material | Prohibits malicious distribution of deceptive AI-generated material |
Public Sector and Healthcare:
| Bill | Topic | Key Provision |
|---|---|---|
| AB 2876 | Education | Directs consideration of AI literacy in K-12 curriculum standards |
| SB 1288 | Schools | Working groups on AI in schools; guidance by Jan 1, 2026 |
| AB 3030 | Healthcare transparency | Healthcare providers must disclose GenAI use in patient communications |
| SB 1120 | Healthcare oversight | Licensed physician supervision required for AI in healthcare decisions |
| SB 896 | Government risk | Generative AI Accountability Act; state procurement and critical infrastructure risk analysis |
SB 1047 to SB 53 Arc as Policy Case Study
The SB 1047 → expert panel → SB 53 trajectory represents an important case study in AI governance:
- Ambitious bill demonstrates political feasibility — SB 1047 passed the legislature with overwhelming majorities, proving AI safety regulation is not politically toxic
- Veto channels energy into expert process — rather than killing the effort, the veto redirected it through a technical review panel
- Scaled-back version achieves enactment — SB 53 secured the transparency and reporting infrastructure that future enforcement-focused legislation can build on
- Iterative approach validated — the one-year arc from veto to signature suggests that incremental state-level regulation may be more durable than comprehensive approaches
Technical and Policy Debates
Size-Based vs. Risk-Based Regulation
Size-Based (SB 1047 Approach):
Pros:
- Objective, measurable thresholds
- Targets most capable models
- Easier to enforce
- Aligns with international compute governance
Cons:
- Doesn't capture deployment context
- Could miss dangerous applications of smaller models
- Algorithmic efficiency gains make fixed thresholds obsolete over time
Risk-Based (Newsom's Preference):
Pros:
- Focuses on actual harm potential
- Context-appropriate
- Adapts to changing technology
Cons:
- Harder to define and measure
- Enforcement challenges
- Potentially broader scope (privacy, fairness, etc.)
- Risk assessment subjective
Synthesis Possible:
- Combination of both approaches
- Size thresholds trigger risk assessments
- Deployment context determines requirements
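The trigger mechanics at issue in this debate are simple enough to sketch. Assuming the standard 6 × parameters × tokens approximation for training compute (a common scaling-law heuristic, not language from the bill itself), an SB 1047-style size trigger reduces to two comparisons; the function names and example model figures below are illustrative only:

```python
# Hypothetical sketch of an SB 1047-style size-based trigger.
# The 6 * N * D FLOP estimate is the common scaling-law approximation
# (roughly 6 FLOP per parameter per training token); the thresholds
# are the bill's stated triggers (10^26 FLOP or $100M training cost).

FLOP_THRESHOLD = 1e26             # SB 1047 compute trigger
COST_THRESHOLD_USD = 100_000_000  # SB 1047 training-cost trigger

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6 * n_params * n_tokens

def is_covered_model(n_params: float, n_tokens: float,
                     training_cost_usd: float) -> bool:
    """A model is 'covered' if it exceeds either the compute or cost trigger."""
    return (estimated_training_flop(n_params, n_tokens) >= FLOP_THRESHOLD
            or training_cost_usd >= COST_THRESHOLD_USD)

# An illustrative 70B-parameter model trained on 15T tokens:
# 6 * 70e9 * 15e12 = 6.3e24 FLOP, well under the compute trigger.
print(is_covered_model(70e9, 15e12, 40e6))    # False: under both thresholds

# An illustrative 1.8T-parameter model on 15T tokens:
# 6 * 1.8e12 * 15e12 = 1.62e26 FLOP, over the compute trigger.
print(is_covered_model(1.8e12, 15e12, 90e6))  # True
```

The sketch also makes the "obsolescence" objection concrete: both constants are frozen, so as algorithmic efficiency improves and equivalent capability arrives at lower FLOP and lower cost, the same comparisons cover progressively fewer of the models that matter. A hybrid design would use a crossing of these thresholds only to trigger a risk assessment, with deployment context determining the actual requirements.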
Liability Questions
SB 1047 Approach:
- Affirmative defense for compliance
- Attorney General enforcement
- Civil penalties
Debate:
- Too much liability deters innovation?
- Too little fails to ensure safety?
- Who should bear costs of AI harms?
Alternative Approaches:
- Strict liability with caps
- Insurance requirements
- Tiered liability based on precautions
- No-fault compensation schemes
Open Source Implications
SB 1047 Exemption:
- Open-source models exempt unless developer profits >$50M from derivatives
Concerns Raised:
- Could still chill open-source development
- Uncertainty about liability
- Derivative work tracking difficult
Counter-Arguments:
- Exemption was broad
- Open-source not inherently safe
- Need some oversight of powerful models
Ongoing Debate:
- How to encourage open research while managing risks
- Different models for different risk levels
- Role of open-source in AI safety ecosystem
Comparison to Other Policies
vs. US Executive Order
The compute thresholds in SB 1047 were deliberately aligned with Biden's Executive Order 14110.
Similarities:
- Compute thresholds (10^26 FLOP for training)
- Safety testing requirements for CBRN risks
- Focus on frontier models only
- Developer reporting obligations
Differences:
- SB 1047 had enforcement teeth (civil penalties up to 10% of training costs, AG lawsuits)
- EO has broader scope (government use, competition policy, immigration for AI talent)
- SB 1047 state-level mandatory law; EO federal executive action (can be rescinded)
- SB 1047 required shutdown capability (unique provision)
- SB 1047 included third-party auditing requirement (EO relies on voluntary compliance)
Relationship:
- SB 1047 would have complemented EO with state-level enforcement
- State enforcement of federal principles with local adaptation
- Potential model for other states considering AI legislation
- Analysis from legal firms such as Gibson Dunn noted SB 1047 went further than the EO on liability
vs. EU AI Act
EU Act:
- Risk categories for deployed systems
- Broader scope (not just frontier models)
- Binding regulation with large fines
SB 1047:
- Narrower focus on frontier models
- More specific technical requirements (shutdown, testing)
- State-level vs. EU-wide
Lessons:
- EU's comprehensiveness politically difficult in US
- SB 1047's focused approach still failed
- Suggests US regulation will be patchwork
vs. Voluntary Commitments
Industry Commitments:
- No enforcement
- Self-defined standards
- Flexible and adaptive
SB 1047:
- Mandatory requirements
- State enforcement
- Specific standards
Debate:
- Is voluntary compliance sufficient?
- Does regulation stifle beneficial innovation?
- Can industry self-regulate emerging risks?
Career and Research Implications
Policy Careers
Lessons Learned:
- Understanding legislative process crucial
- Coalition building essential
- Technical expertise must translate to policy
Opportunities:
- State-level AI policy growing
- Need for policy entrepreneurs
- Legislative staff positions
Research Questions
Regulatory Design:
- How to balance innovation and safety?
- What thresholds are appropriate?
- How to make regulation adaptive?
Political Economy:
- Industry influence on regulation
- Public opinion on AI risk
- Coalition formation strategies
Technical:
- Measuring model capabilities
- Shutdown mechanisms
- Audit methodologies
Movement Building
Strategic Questions:
- When to push for regulation vs. build support?
- How to engage industry productively?
- Building public constituency
Skills Needed:
- Political strategy
- Coalition management
- Communications
- Policy design
Sources
Primary Documents
- California Legislature: SB-1047 Bill Text (Enrolled Version) - Full enrolled bill text
- California Legislature: SB-1047 Overview - Official bill navigation and legislative history
- California Legislature: SB-1047 Bill Status - Legislative status and timeline
- California Legislature: SB-1047 Bill Analysis - Committee analyses from both chambers
- California Legislature: SB-1047 Votes - Official vote records (Assembly 48-16, Senate 32-1)
- Governor Newsom's Veto Message (PDF) - Official veto statement, September 29, 2024
- The California Report on Frontier AI Policy (June 2025) - Expert panel final report
- California Assembly Privacy and Consumer Protection Committee Analysis - Detailed bill analysis, June 18, 2024
- Pelosi: Statement in Opposition to California Senate Bill 1047 - Official congressional opposition statement
News Coverage and Analysis
- New York Times: California Governor Vetoes AI Safety Bill - Coverage of the veto decision
- Washington Post: California AI Bill SB 1047 Vetoed by Newsom - Analysis of the veto and its implications
- CalMatters: Newsom vetoes major California artificial intelligence bill - Comprehensive coverage of veto decision
- NPR: California Gov. Newsom vetoes AI safety bill that divided Silicon Valley - Context on industry division
- TechCrunch: California's legislature just passed AI bill SB 1047 - Coverage of legislative passage
- The Verge: California Governor Gavin Newsom Vetoes AI Safety Bill SB 1047 - Tech press coverage of veto
- Wired: Over 100 AI Employees Wrote to Gov. Newsom About SB 1047 - Coverage of the 113-employee letter
- Politico: Pelosi Opposes California AI Regulation - Congressional Democratic opposition
- Vox Future Perfect: California AI Safety Bill SB 1047 - Analysis from effective altruism-adjacent outlet
- Carnegie Endowment: All Eyes on Sacramento: SB 1047 and the AI Safety Debate - Policy analysis
- Fortune: California AI Bill SB 1047 Fierce Debate - Feature on the national debate over the bill (July 2024)
- SF Standard: Gavin Newsom Vetoes Controversial AI Safety Bill - Local coverage of the veto decision
- TechCrunch: Gov. Newsom Vetoes California's Controversial AI Bill SB 1047 - Tech press veto coverage
- TechCrunch: OpenAI's Opposition to California's AI Law Makes No Sense, Says State Senator - Coverage of Wiener's response to OpenAI
- CalMatters: AI Safety Bill Passes California Legislature - Coverage of legislative passage (August 2024)
- MIT Technology Review: A New York Legislator Wants to Pick Up the Pieces of the Dead California AI Bill - Coverage of SB 1047's influence on other states (January 2025)
Legal and Technical Analysis
- Morgan Lewis: California's SB 1047 Would Impose New Safety Requirements - Legal analysis of requirements
- Gibson Dunn: Regulating the Future: Eight Key Takeaways from California's SB 1047 - Post-veto analysis
- Orrick: California Looks to Regulate Cutting-Edge Frontier AI Models: 5 Things to Know - Technical requirements breakdown
- DLA Piper: California's SB-1047: Understanding the Safe and Secure Innovation for Frontier AI Act - Early analysis
- Fenwick: Technological Challenges for Regulatory Thresholds of AI Compute - Analysis of compute thresholds
- Stanford HAI: Analysis of the Safe and Secure Innovation for Frontier AI Models Act - Academic policy analysis
- Lawfare: SB 1047 and the Future of Frontier AI Safety Regulation - Legal and policy analysis
- CSET Georgetown: Governor Newsom Vetoes Sweeping AI Regulation SB 1047 - Veto analysis from Georgetown's Center for Security and Emerging Technology
- Carnegie Endowment: California SB 1047 AI Safety Bill Veto Lessons - Post-veto lessons and policy analysis
- Lawfare: California's Proposed SB 1047 Would Be a Major Step Forward for AI Safety - Balanced pros/cons analysis of the bill's provisions
- CSET Georgetown: California AI Bill Becomes a Lightning Rod - Early analysis from Georgetown's Center for Security and Emerging Technology
Senator Wiener's Office
- Senator Wiener: Groundbreaking AI Bill Advances to Assembly Floor - Official statement on amendments
- Senator Wiener: Bipartisan Vote, Senate Passes Landmark AI Safety Bill - Official statement on passage
- Lawfare Daily Podcast: State Senator Scott Wiener on SB 1047 - In-depth interview
- Senator Wiener Responds to OpenAI Opposition to SB 1047 - Official response to OpenAI's public opposition
Industry Perspectives
- Andreessen Horowitz: What You Need to Know About SB 1047 - Venture capital perspective
- Safe and Secure AI: Letter to YC & a16z - Response from supporters
- Brookings: Misrepresentations of California's AI safety bill - Defense of bill against criticism
- Anthropic: The Case for Targeted Regulation - Post-veto regulatory framework advocating for targeted rather than broad AI regulation
- StopSB1047.com - a16z-backed opposition campaign site against the bill
EA Community and AI Safety Analysis
- Zvi Mowshowitz: Guide to SB 1047 - Comprehensive analysis of the bill's provisions and implications (August 2024)
- Zvi Mowshowitz: Newsom Vetoes SB 1047 - Analysis of the veto decision and its aftermath (October 2024)
- Scott Alexander: SB 1047: Our Side of the Story - Inside account of the advocacy campaign from an AI safety perspective (October 2024)
- 80,000 Hours Podcast: Nathan Calvin on SB 1047 - Deep dive into the bill's development and the safety community's role
- CAIS AI Safety Newsletter #40: California AI Legislation - Coverage of SB 1047's passage through the legislature
- CAIS AI Safety Newsletter #42: Newsom Vetoes SB 1047 - Analysis of the veto from an AI safety perspective
- Zvi Mowshowitz: On the Proposed California SB 1047 (LessWrong) - Early community analysis that shaped initial opinion
- The Inside View: SB 1047 Documentary - 31-minute documentary film on the bill's journey
- Cognitive Revolution: Final Analysis on CA's AI Bill SB 1047 - Multi-perspective podcast debate with Nathan Calvin, Dean Ball, and Steve Newman
- EA Forum: SB 1047 Analysis - Effective altruism community discussion and analysis
Reference Materials
- Wikipedia: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act - Overview and timeline
- FastDemocracy: SB 1047 Bill Tracking - Legislative tracking and timeline
- SafeSecureAI.org - Senator Wiener's campaign site for SB 1047
References
1. Gibson Dunn: Regulating the Future -- Eight Key Takeaways from SB 1047 · gibsondunn.com · Richard Manfredi · 2024
Gibson Dunn law firm analyzes California's SB 1047, a significant AI safety bill that would have imposed safety requirements on developers of large frontier AI models, which was ultimately vetoed by Governor Newsom in 2024. The analysis covers the bill's key provisions, compliance obligations, and implications for AI developers. It provides a legal practitioner's perspective on the regulatory landscape for frontier AI.
DLA Piper provides a legal analysis of California's SB-1047, which mandates safety determinations before training frontier AI models, establishes compliance requirements, and creates a new Frontier Model Division within the state's Department of Technology. The article situates the bill within the broader AI regulatory landscape including the EU AI Act, Biden's Executive Order, and state-level legislative activity.
3. Senator Wiener: Bipartisan Vote, Senate Passes Landmark AI Safety Bill · sd11.senate.ca.gov · Government
The California Senate passed SB 1047, Senator Scott Wiener's landmark AI safety bill, with bipartisan support. The bill establishes safety requirements for developers of large frontier AI models, including mandatory safety testing, incident reporting, and kill-switch capabilities. It represents one of the most significant state-level legislative efforts to regulate advanced AI systems in the United States.
A legal analysis from Fenwick examining evolving regulatory frameworks that use computational thresholds (e.g., FLOP counts) to define which AI systems trigger oversight requirements. The piece reviews key policy developments at state and federal levels relevant to frontier AI governance.
5. Senator Wiener's Groundbreaking AI Bill Advances to Assembly Floor with Amendments · sd11.senate.ca.gov · Government
California's SB 1047, a landmark AI safety bill requiring safety standards for large-scale AI developers, advanced to the Assembly floor in August 2024 with significant amendments. Key changes include removing criminal penalties, eliminating a proposed Frontier Model Division regulatory body, softening legal standards from 'reasonable assurance' to 'reasonable care,' and establishing a $10 million fine-tuning cost threshold to exempt startups. The bill represents a major state-level attempt to regulate frontier AI development amid federal inaction.
California State Senator Scott Wiener joins Lawfare to discuss SB 1047, his 'Safe and Secure Innovation for Frontier Artificial Intelligence Models' bill. The conversation covers key debates the bill has sparked around AI safety vs. security, state vs. federal regulation, and model-level vs. end-user governance. Wiener addresses opposition from the tech industry and signals openness to amendments.
Governor Newsom vetoed California's SB 1047, which would have imposed safety requirements on large AI model developers based on computational thresholds. He argued the bill's size-based regulatory approach is flawed because smaller specialized models can pose equal risks, and that effective AI regulation must be risk-based, contextually aware of deployment environments, and empirically grounded rather than relying on model scale as a proxy for danger.
California Governor Gavin Newsom vetoed SB 1047, a landmark AI safety bill that would have imposed safety requirements on developers of large frontier AI models. The veto was a significant setback for state-level AI regulation, with Newsom arguing the bill was too broad and could stifle innovation while not targeting the highest-risk applications specifically.
9. California Assembly Privacy and Consumer Protection Committee Analysis · apcp.assembly.ca.gov · Government
This California Assembly Privacy and Consumer Protection Committee analysis examines SB 1047, which would impose comprehensive regulatory requirements on developers of frontier AI models costing $100M+ to train. The bill mandates governance programs, third-party audited risk assessments, whistleblower protections, and creates a new Division of Frontier Models for enforcement. It represents one of the most ambitious state-level AI safety regulatory efforts in the United States.
Wikipedia article covering California's SB 1047, a landmark proposed state-level AI safety bill that would have imposed safety obligations on developers of large frontier AI models. The bill required safety testing, incident reporting, and the ability to shut down AI systems, but was ultimately vetoed by Governor Gavin Newsom in September 2024. It represents a significant early attempt at comprehensive state-level AI regulation.
TechCrunch covers the California legislature's passage of SB 1047, a landmark AI safety bill targeting large frontier models. The bill imposes safety obligations on developers of powerful AI systems, while major tech companies and industry groups argue it will stifle innovation and push AI development out of California.
12. SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act · leginfo.legislature.ca.gov · Government
SB 1047 is California's 2024 landmark legislation requiring frontier AI model developers to implement safety protocols, maintain shutdown capabilities, and produce detailed safety documentation before training covered models. It establishes oversight through the California Department of Technology and creates liability frameworks for developers whose models cause specified harms. Though ultimately vetoed by Governor Newsom, it represents one of the most significant state-level AI regulatory efforts and shaped subsequent AI governance debates.
A Brookings Institution analysis defending California's AI safety bill (SB 1047) against what the authors characterize as misleading criticisms from opponents. The piece examines specific claims made against the bill and argues they distort the legislation's actual requirements and scope. It provides a fact-checking perspective on the public debate surrounding frontier AI regulation at the state level.
Andreessen Horowitz presents its analysis and opposition to California's SB 1047 AI safety bill, arguing that the legislation would stifle innovation, harm open-source AI development, and impose impractical liability on developers. The discussion covers the bill's key provisions, a16z's objections, and the broader implications for AI governance in the US.
California Governor Gavin Newsom vetoed SB 1047, a landmark AI safety bill that would have imposed safety requirements on developers of large frontier AI models. The veto highlighted deep divisions within Silicon Valley between AI safety advocates and tech industry opponents, and raised broader questions about state-level AI governance.
Orrick law firm provides a legal analysis of California's SB 1047, a landmark bill proposing safety requirements for developers of large frontier AI models. The piece outlines key provisions including safety assessments, incident reporting, and kill-switch requirements, as well as potential compliance burdens and constitutional questions. It serves as a practical overview for businesses navigating the proposed regulatory landscape.
Official webpage of California State Senator Scott Wiener, who represents San Francisco and is a prominent legislator on AI safety and technology policy. Wiener has authored significant AI-related legislation including SB 1047, a major bill targeting frontier AI model safety requirements. His office is a key locus of state-level AI governance activity in the United States.
This Morgan Lewis legal analysis examines California's SB 1047, a proposed bill that would impose safety obligations on developers of large frontier AI models. The piece outlines key compliance requirements, liability provisions, and potential implications for AI companies operating in or developing models for California markets.
An open letter responding to Y Combinator and Andreessen Horowitz's opposition to AI safety legislation, arguing that safety regulations for frontier AI models are necessary and compatible with innovation. The letter counters claims made by prominent venture capital firms that state-level AI safety bills are harmful, presenting a pro-regulation perspective from AI safety advocates.
This Carnegie Endowment analysis examines California's SB 1047 bill, which sought to impose safety requirements on developers of large frontier AI models. It explores the policy debate around state-level AI regulation, the bill's key provisions, stakeholder reactions, and its implications for the broader U.S. AI governance landscape after Governor Newsom vetoed it.
Campaign website advocating for California's SB 1047 AI safety legislation, urging Governor Newsom to sign the bill into law. The site aggregates supporting letters, statements from academics, AI experts, and whistleblowers, and calls public attention to the political dynamics around AI safety regulation in California.
22. Bill Text - SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act · leginfo.legislature.ca.gov · Government
California Senate Bill 1047 proposed sweeping safety requirements for developers of large frontier AI models, including mandatory safety plans, third-party audits, whistleblower protections, and the ability to shut down AI systems. Though passed by the legislature in 2024, it was vetoed by Governor Newsom, making it a landmark but ultimately unsuccessful attempt at state-level AI governance.
23. Bill Status - SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act · leginfo.legislature.ca.gov · Government
California Senate Bill 1047 proposed safety requirements for developers of large frontier AI models, including mandatory safety testing, incident reporting, and the ability to shut down AI systems. The bill passed both chambers of the California legislature in 2024 but was ultimately vetoed by Governor Newsom in September 2024.
24. Bill Analysis - SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act · leginfo.legislature.ca.gov · Government
California SB-1047 was a landmark proposed state law requiring developers of large frontier AI models to implement safety protocols, conduct pre-deployment safety evaluations, and establish shutdown capabilities. The bill aimed to hold AI developers liable for harms caused by models exceeding certain compute thresholds. It passed the legislature but was vetoed by Governor Newsom in September 2024.
25. Bill Votes - SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act · leginfo.legislature.ca.gov · Government
This page records the legislative votes for California SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, tracking its passage through the California legislature from introduction in February 2024 through enrollment in September 2024. The bill represented a landmark attempt at state-level regulation of frontier AI models, requiring safety protocols and oversight measures from large AI developers. It was ultimately vetoed by Governor Newsom despite passing both chambers.
This Fortune article covers the intense political and industry debate surrounding California's SB 1047, a landmark AI safety bill that would impose safety requirements on developers of large AI models. The piece frames the conflict as a battle between AI safety advocates pushing for regulation and tech industry opponents warning of innovation harm, capturing a pivotal moment in US AI governance.
This CSET article examines California's SB 1047, a landmark AI safety bill that sparked intense debate between AI safety advocates who supported its liability and safety requirements and tech industry developers who opposed it as overly burdensome. The piece analyzes the competing arguments and political dynamics that made the bill highly controversial before Governor Newsom ultimately vetoed it.
This CalMatters article covers California's SB 1047, a landmark AI safety bill that passed the state legislature and was sent to Governor Gavin Newsom for signature or veto. The bill would require developers of large AI models to implement safety measures and conduct risk assessments before deployment. It represented one of the most significant state-level AI regulatory efforts in the United States.
California Governor Gavin Newsom vetoed SB 1047, a landmark AI safety bill that would have imposed safety requirements on large AI models. The veto was controversial, drawing criticism from AI safety advocates and praise from major tech companies who argued the bill was overly broad and could stifle innovation.
California Governor Gavin Newsom vetoed SB 1047 on September 29, 2024, a landmark AI safety bill authored by State Senator Scott Wiener that would have required companies developing large AI models to implement safety protocols to prevent 'critical harms.' The veto halted what would have been one of the most significant state-level AI regulatory frameworks in the US, applying liability to developers of frontier AI models above certain compute thresholds.
31. Newsom Vetoes SB 1047 - by Zvi Mowshowitz · Substack · Blog post
Zvi Mowshowitz analyzes California Governor Gavin Newsom's veto of SB 1047, a landmark AI safety bill that would have imposed safety requirements on large AI models. The post examines the reasoning behind the veto, the political dynamics involved, and what the outcome means for AI governance efforts more broadly.
California Senator Scott Wiener rebuts OpenAI's opposition to SB 1047, noting that OpenAI did not criticize any specific provision of the bill and instead argued for federal action. Wiener defends the bill's requirements for safety evaluations and shutdown capabilities on large AI models, dismissing concerns about companies leaving California and citing national security endorsements from senior defense officials.
California State Senator Scott Wiener pushes back against OpenAI's opposition to SB 1047, an AI safety bill requiring safety evaluations and liability for developers of large AI models. Wiener argues that OpenAI's objections are inconsistent with the company's stated mission and public safety commitments. The article highlights the tension between AI industry self-regulation preferences and legislative oversight efforts.
Anthropic argues that governments have an urgent 18-month window to implement narrowly-targeted AI regulation to mitigate catastrophic risks—particularly in cyber and CBRN domains—while preserving innovation. The post outlines principles for risk-focused policy, citing rapid capability advances as evidence that surgical regulation is needed soon.
Former House Speaker Nancy Pelosi publicly opposed California's SB 1047, a landmark AI safety bill that would have imposed pre-deployment safety requirements on large AI models. Her opposition, citing concerns about innovation and regulatory overreach, contributed to the broader political debate that ultimately led Governor Newsom to veto the bill.
36. Guide to SB 1047 - by Zvi Mowshowitz · Substack · Blog post
Zvi Mowshowitz provides a comprehensive guide and analysis of California's SB 1047, a landmark AI safety bill that would impose safety requirements on large AI model developers. The post examines the bill's provisions, likely impacts, and the debate surrounding it in the AI safety and tech communities.
Scott Alexander presents the case in favor of California's SB 1047 AI safety bill from the perspective of supporters, responding to criticisms and explaining the rationale behind the legislation's approach to regulating large AI models. The piece contextualizes the debate around AI governance and the difficulty of passing meaningful safety-oriented legislation.
An 80,000 Hours podcast episode featuring Nathan Calvin discussing California's SB 1047 AI safety bill, which would impose safety requirements on large AI model developers. The conversation covers the bill's provisions, the debate around its merits and risks, and its potential implications for broader US AI governance and policy.
39. AI Safety Newsletter #40: California AI Legislation · Center for AI Safety
This edition of the Center for AI Safety's newsletter covers California's AI legislation landscape, analyzing key bills and their implications for AI safety governance. It examines proposed regulations aimed at managing risks from advanced AI systems at the state level.
40. AI Safety Newsletter #42: Newsom Vetoes SB 1047 · Center for AI Safety
This edition of the CAIS AI Safety Newsletter covers California Governor Gavin Newsom's veto of SB 1047, a landmark AI safety bill that would have imposed safety requirements on large AI models. The newsletter examines the veto's implications for AI governance and the broader AI safety policy landscape.
A 30-minute documentary examining California's SB-1047 AI safety bill through exclusive interviews with key stakeholders including bill sponsor Scott Wiener, Dan Hendrycks of the Center for AI Safety, tech industry opponents, and youth advocates. The film chronicles the political battle over AI regulation that culminated in Governor Newsom's veto, capturing both the arguments for proactive AI safety legislation and the Silicon Valley opposition that helped defeat it.
42. Final Analysis on CA's AI Bill SB 1047 with Nathan Calvin, Dean W. Ball, and Steve Newman · cognitiverevolution.ai
A podcast episode from The Cognitive Revolution in which Nathan Labenz and three experts provide detailed analysis of California's SB 1047 AI safety bill, examining its updated provisions, implications for frontier AI developers, and the broader debate over AI governance. The discussion covers both the potential benefits and drawbacks of state-level AI regulation as a model for safety oversight.
44. A New York Legislator Wants to Pick Up the Pieces of the Dead California AI Bill · MIT Technology Review
Following the veto of California's SB 1047 AI safety bill, a New York state legislator is introducing similar legislation that would impose safety requirements on developers of large AI models. The article covers the political and regulatory landscape around state-level AI governance efforts in the United States.
Stanford HAI provides an expert analysis of California's proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), examining its provisions for regulating large AI models, safety requirements, and compliance mechanisms. The analysis evaluates potential benefits, drawbacks, and implementation challenges of the legislation as a landmark state-level AI governance effort.
This resource appears to be an article on Lawfare Media analyzing California's SB 1047 bill and its implications for frontier AI safety regulation, but the page returns a 404 error and the content is no longer accessible.