Longterm Wiki
Updated 2026-02-11
Failed and Stalled AI Policy Proposals

Analysis of AI policy failures reveals systematic patterns: 150+ federal bills with a 0% passage rate, the number of companies lobbying on AI up 41% year over year to 648, and 97% public support for AI safety rules versus minimal binding regulation. Comprehensive frameworks succeed at only ~5% rates, while incremental transparency approaches succeed at 35-60% rates.

Purpose: Learning from unsuccessful efforts
Coverage: US, International

Quick Assessment

| Dimension | Assessment | Evidence |
| --- | --- | --- |
| Policy Success Rate | 2/10 | 150+ federal bills introduced with 0 passed; ~95% comprehensive framework failure rate |
| Industry Opposition Intensity | 9/10 | 577% increase in OpenAI lobbying spend; 648 companies lobbying on AI (up 41% YoY) |
| Public-Private Alignment | 3/10 | 97% public support for AI safety rules vs. minimal binding regulation |
| International Coordination | 2/10 | No binding treaties; US-China competition blocks multilateral frameworks |

Assessment reflects 2024-2025 data on AI governance outcomes versus stated policy objectives.

Overview

Failed and stalled AI policy proposals provide critical insights into the political economy of AI governance, revealing systematic patterns that explain why comprehensive regulation remains elusive despite widespread concern about AI risks. The failure rate for ambitious AI legislation is remarkably high: during the 118th Congress, lawmakers introduced over 150 AI-related bills with none passing into law[^1]. Meanwhile, industry opposition has intensified dramatically, with 648 companies lobbying on AI in 2024 versus 458 in 2023[^2]—a 41% year-over-year increase—while AI companies collectively spent $2.71 million on lobbying in 2024, four times their 2023 spending[^3].

The disconnect between public demand and policy outcomes is striking. Gallup polling from May 2025 found 97% of Americans agree AI safety should be subject to rules and regulations[^4], while Public Citizen analysis shows 72% of Americans want more regulation of the AI industry—a 15-point increase from the prior year[^5]. Yet comprehensive binding regulation remains absent at the federal level.

These failures illuminate fundamental tensions in AI governance: the speed mismatch between rapid technological development and deliberative legislative processes, the challenge of defining "artificial intelligence" in legally precise terms, and the complex jurisdictional landscape where multiple agencies and levels of government claim regulatory authority. Perhaps most significantly, failed proposals demonstrate how industry opposition mobilizes around specific regulatory mechanisms, particularly liability provisions and mandatory compliance requirements, while showing greater tolerance for disclosure obligations and voluntary frameworks.

The pattern of failures suggests that successful AI governance may require accepting incremental progress rather than comprehensive solutions, with voluntary industry commitments serving as necessary stepping stones to eventual binding regulation. This dynamic has profound implications for AI safety, as it may mean that meaningful oversight emerges only after significant harms occur, rather than through proactive prevention.

Major Failed and Stalled AI Proposals (2024-2025)

| Proposal | Jurisdiction | Year | Key Provisions | Primary Failure Reason | Status | Industry Opposition |
| --- | --- | --- | --- | --- | --- | --- |
| California SB 1047 | California | 2024 | Safety testing for models >$100M compute, shutdown requirements, liability | Governor veto citing federal preemption, industry opposition | Vetoed Sep 2024 | OpenAI, Meta, Nancy Pelosi |
| Algorithmic Accountability Act | US Federal | 2019, 2022, 2023 | Impact assessments for automated decision systems | Failed to exit committee in three consecutive Congresses | Stalled | Amazon, Google lobbying |
| SAFE Innovation Framework | US Federal | 2024 | Regulatory sandboxes for AI development | Jurisdictional conflicts, partisan disagreement | Stalled in committee | Moderate |
| AI Labeling Act | US Federal | 2024 | Mandatory disclosure for AI-generated content | Industry lobbying, definitional challenges | Stalled in committee | Meta opposition |
| National AI Commission Act | US Federal | 2023-2024 | Independent AI oversight body | Diluted to advisory function, opposition to new authorities | Weakened/Stalled | High |
| UN AI Treaty | International | 2024 | Binding international AI governance | US-China competition, verification challenges, sovereignty concerns | No progress | N/A |

How It Works: The Systematic Failure Mechanisms

AI policy proposals fail through predictable mechanisms that operate across multiple levels simultaneously. Understanding these patterns reveals why even popular initiatives with broad support consistently stall or collapse.

The Industry Opposition Architecture

Technology companies deploy sophisticated multi-layered strategies that go far beyond traditional lobbying. The 577% increase in OpenAI's lobbying spend to $1.76 million in 2024[^3] exemplifies how AI companies have rapidly professionalized their government relations operations. Anthropic more than doubled its spending to $720,000[^3], while established tech giants maintained their dominance, with Meta spending $6.5 million in 2024[^6].

The opposition operates through three primary channels:

Information Asymmetry Exploitation: Companies hire former government officials and establish dedicated government relations teams, creating superior technical knowledge compared to legislative staff. This allows industry representatives to shape debates around implementation feasibility and unintended consequences, often successfully arguing that proposed regulations are technically infeasible or would harm innovation.

Narrative Warfare: The "innovation flight" argument warns that strict regulation will drive AI development to more permissive jurisdictions, resonating particularly with policymakers in technology-dependent regions like California. Companies increasingly position AI as pivotal to national security competition with China, transforming regulatory discussions into debates about American competitiveness.

Coalition Building and Astroturfing: Industry-funded organizations present themselves as grassroots voices while obscuring corporate funding sources. This creates an echo chamber where manufactured opposition appears to represent diverse stakeholder concerns, making it difficult for policymakers to assess genuine public opinion.

The Definitional Trap

Failed proposals consistently founder on the challenge of defining "artificial intelligence" in legally precise terms. Traditional legal frameworks assume clear categorical boundaries, but AI exists on a capability spectrum that resists binary classification. SB 1047's reliance on compute thresholds and cost metrics ($100 million in compute) created apparent precision but masked underlying questions about what constitutes a "frontier" AI model.

The dual-use nature of AI technology further complicates regulatory design. Unlike nuclear technology where weapons applications are clearly distinguishable from civilian uses, AI systems designed for legitimate purposes can be adapted for malicious ends with minimal modification. This reality makes it difficult to craft regulations that prevent harm without stifling beneficial innovation.

Speed Mismatch and Jurisdictional Complexity

The fundamental mismatch between technological development speed and legislative cycles creates persistent regulatory lag. By the time comprehensive legislation addresses current AI systems, the technology has evolved in ways that make existing frameworks obsolete. This dynamic is exacerbated by jurisdictional fragmentation, where the Federal Trade Commission, Securities and Exchange Commission, Food and Drug Administration, and various other agencies claim overlapping authority.

State versus federal tensions proved particularly problematic for SB 1047[^7], where federal preemption arguments assumed uniform national standards are preferable to regulatory experimentation, but federal inaction left this assumption untested. The resulting stalemate benefits companies that prefer regulatory uncertainty to clear but demanding requirements.

Major Failed Initiatives

California SB 1047: The Comprehensive Framework Collapse

California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act represented the most ambitious state-level AI regulation attempted in the United States. Introduced by Senator Scott Wiener in February 2024, the bill established safety testing requirements for AI models trained with more than $100 million in compute resources, required developers to implement shutdown capabilities and conduct red-team evaluations before deployment, and created potential liability for developers whose models caused critical harms defined as mass casualties, critical infrastructure damage, or economic losses exceeding $500 million.

Governor Newsom vetoed the bill on September 29, 2024[^7], stating that "SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data." The veto came despite the bill passing both houses of the California legislature with bipartisan support, demonstrating that concerns about AI risks transcend traditional partisan divides.

The opposition coalition proved formidable and unusual. Meta, OpenAI, and Speaker Emerita Nancy Pelosi opposed the bill[^8], with members of California's congressional delegation—including Pelosi and Representatives Ro Khanna, Anna Eshoo, Zoe Lofgren, and Jay Obernolte—publicly urging the Governor to reject it[^9]. Meanwhile, the Center for AI Safety, Elon Musk, and the L.A. Times editorial board supported the bill, revealing cross-cutting alliances on AI safety.

The veto message highlighted several critical vulnerabilities[^7] in state-level AI regulation. Newsom cited concerns that the bill's focus on high-cost, large-scale models would provide a "false sense of security," emphasizing that smaller, specialized models could pose equally significant risks. The definitional challenges proved decisive: the bill's reliance on compute thresholds created apparent precision but could not anticipate future technological developments or distinguish between beneficial and harmful applications.

However, California's approach evolved strategically. California enacted 18 AI-related laws in 2024[^10], and SB 53 emerged as a "pared-down version of SB 1047" focusing on transparency rather than strict safety requirements[^10]. This evolution from comprehensive framework failure to incremental transparency success illustrates how policy approaches can adapt to overcome initial resistance.

Federal Legislative Gridlock: The 150-Bill Failure

The 118th Congress saw unprecedented introduction of AI-related legislation, with over 150 bills addressing various aspects of artificial intelligence governance, yet none passed into law[^1]. This represents a complete failure of the federal legislative process to address AI governance despite widespread bipartisan concern about AI risks.

Major failed proposals included the Algorithmic Accountability Act[^11], reintroduced for the third time by Senators Ron Wyden and Cory Booker and Rep. Yvette Clarke, which would have required automated decision system impact assessments for large companies. The bill consistently failed to exit committee despite growing evidence of algorithmic bias in hiring, lending, and criminal justice applications.

The AI LEAD Act, introduced by Senators Durbin and Hawley[^12], and the GUARD Act of 2025, introduced by Senators Hawley and Blumenthal to address AI chatbot safety for minors[^12], represent the latest attempts at federal AI legislation, but they face the same obstacles as their predecessors.

Congressional dysfunction played a significant role in these failures, but industry opposition proved decisive. The vacuum drove state-level action: nearly 700 AI-related state bills were introduced in 2024[^13], prompting a federal counter-reaction. The House attempted to include a 10-year moratorium on state and local AI laws in budget reconciliation[^14], which was stripped by a near-unanimous 99-1 Senate vote[^15].

The systematic failure pattern reveals how industry influence shapes legislative outcomes. Combined AI company lobbying reached $2.71 million in 2024, four times their 2023 spending[^3], while the narrative positioning AI as essential to national security competition with China proved especially potent in preventing regulatory constraints.

Trump Administration's Regulatory Reversal

President Trump's Executive Order "Ensuring a National Policy Framework for Artificial Intelligence" issued December 11, 2025[^16], represents a systematic reversal of previous AI safety initiatives. The order seeks to advance "a minimally burdensome national policy framework" for AI and tasks agencies with preparing legislative recommendations for a federal AI framework that would preempt state laws.

This approach reflects the broader pattern where industry-preferred minimal regulation gains political traction by framing comprehensive oversight as economically harmful. Trump's January 2025 Executive Order "Removing Barriers to American Leadership in Artificial Intelligence" focused on revoking directives perceived as restrictive[^17], demonstrating how political transitions can eliminate even modest regulatory progress.

International Treaty Failures: Great Power Competition Blocks Coordination

International efforts to establish binding AI governance agreements have consistently failed despite widespread recognition that AI's global nature requires coordinated responses. The UN Secretary-General's High-level Advisory Body on AI released its final report "Governing AI for Humanity" in September 2024[^18], recommending seven initiatives including an international scientific panel and global AI data framework, but no binding treaty or enforcement mechanism has emerged.

Great power competition, particularly US-China tensions[^19], represents the primary obstacle to international AI agreements. The US explicitly rejected "centralized control and global governance" of AI at UN discussions, while both superpowers view AI capabilities as strategic assets essential to military and economic competitiveness.

Verification challenges compound these political obstacles. Unlike nuclear or chemical weapons, AI capabilities are largely software-based and can be rapidly modified or concealed. International monitoring would require unprecedented access to corporate research facilities and source code, raising both security and intellectual property concerns that make comprehensive treaties practically infeasible.

The Paris AI Action Summit in February 2025 called for harmonised global standards[^20], but achieved no binding commitments. The EU AI Act's entry into force with first provisions becoming applicable on February 2, 2025[^21], represents the most significant international AI governance development, but remains a regional rather than global framework.

Systematic Failure Patterns

flowchart TD
  subgraph Proposal["Regulatory Proposal Introduced"]
      A[Comprehensive AI Bill]
  end

  subgraph Opposition["Industry Response"]
      B[Lobbying Mobilization<br/>648 companies, +41% YoY]
      C[Innovation Flight<br/>Narrative]
      D[National Security<br/>Competition Argument]
      E[Technical Complexity<br/>Claims]
  end

  subgraph Barriers["Structural Barriers"]
      F[Definitional<br/>Challenges]
      G[Jurisdictional<br/>Conflicts]
      H[Speed Mismatch:<br/>Tech vs Legislation]
      I[Partisan<br/>Divisions]
  end

  subgraph PublicSupport["97% Public Support"]
      J[Safety Rules<br/>Gallup 2025]
      K[More Regulation<br/>72% Americans]
  end

  subgraph Outcomes["Typical Outcomes"]
      L[Stalled in Committee<br/>150+ bills, 0 passed]
      M[Weakened/Diluted]
      N[Executive Veto<br/>SB 1047]
      O[Voluntary Framework<br/>Substitute]
  end

  A --> B
  B --> C & D & E
  C --> L
  D --> M
  E --> F
  A --> F & G & H & I
  F & G --> L
  H & I --> M
  J & K --> A
  L & M & N --> O

  style A fill:#e74c3c,color:#fff
  style B fill:#f39c12,color:#fff
  style J fill:#27ae60,color:#fff
  style K fill:#27ae60,color:#fff
  style O fill:#3498db,color:#fff

AI Industry Lobbying Evolution (2023-2025)

| Company/Organization | 2023 Spend | 2024 Spend | 2025 Q4 | YoY Change | Key Regulatory Targets |
| --- | --- | --- | --- | --- | --- |
| OpenAI | $260K | $1.76M | $890K | +577% | Safety testing, liability limits |
| Anthropic | $280K | $720K | | +157% | Constitutional AI frameworks |
| Cohere | $40K | $130K | | +225% | Enterprise AI regulation |
| Meta | $5.1M | $6.5M | | +27% | Content moderation, SB 1047 |
| Google/Alphabet | | $3.4M | | | Search regulation, AI Act |
| Amazon | | $4.6M | | | Cloud AI services |
| Nvidia | $950K | | $1.4M | -26% (Q4) | Export controls, compute access |
| AI Companies Total | $680K | $2.71M | | +299% | Comprehensive frameworks |
| Total AI Lobbyists | 458 | 648 | | +41% | All AI policy areas |
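The year-over-year percentages above can be sanity-checked directly from the reported figures. A minimal sketch, using only numbers that appear in this table (note that 458 to 648 organizations works out to a 41% increase):

```python
# Recompute year-over-year percentage changes from the figures
# reported in the lobbying table: (2023 value, 2024 value).
figures = {
    "OpenAI": (260_000, 1_760_000),
    "Anthropic": (280_000, 720_000),
    "Cohere": (40_000, 130_000),
    "Meta": (5_100_000, 6_500_000),
    "AI companies total": (680_000, 2_710_000),
    "Organizations lobbying on AI": (458, 648),  # counts, not dollars
}

for name, (y2023, y2024) in figures.items():
    change = (y2024 - y2023) / y2023 * 100
    print(f"{name}: {change:+.0f}%")
# OpenAI: +577%, Anthropic: +157%, Cohere: +225%, Meta: +27%,
# AI companies total: +299%, Organizations lobbying on AI: +41%
```

Each computed change matches the stated column, which is a useful consistency check given how often these figures are quoted with different baselines.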

The Regulatory Capture Pattern

Analysis of failed proposals reveals sophisticated industry strategies that create asymmetric influence over policy outcomes. The dramatic increase in lobbying expenditure—with AI companies spending four times their 2023 levels by 2024[^3]—represents rapid professionalization of government relations operations designed to shape regulatory conversations before formal proposals emerge.

Regulatory capture operates through multiple mechanisms beyond direct lobbying. Academic research on AI safety and regulatory capture published in 2025[^22] identifies how organizations with economic power use regulations to unjustly enrich themselves, while empirical studies examine how AI companies influence policy through agenda-setting, advocacy, and academic capture[^23].

The "innovation flight" narrative proves particularly effective at the state level, warning that strict regulation will drive development to more permissive jurisdictions. However, empirical evidence for this claim remains limited—financial services and pharmaceutical companies continue investing heavily in highly regulated jurisdictions when market opportunities justify compliance costs. The difference may be that AI companies face fewer location constraints than traditional industries, making the threat more credible.

Public Opinion Disconnect

The disconnect between public support for AI regulation and policy outcomes reveals systematic democratic representation failures. Gallup polling from May 2025 found 97% of Americans agree AI safety should be subject to rules and regulations[^4], with majority-level support across all key subgroups including 88% of Democrats and 79% of Republicans and independents[^4].

Public Citizen analysis shows consistent polling trends with 72% of Americans wanting more regulation of the AI industry—a 15-point increase from the prior year[^5]. Additionally, 76% of voters prefer candidates who support AI regulation, including 71% of Republicans[^5], while 77% of voters, including 75% of Republicans, say government should do more to regulate AI[^5].

This overwhelming public support creates political opportunities for AI regulation, but concentrated industry opposition and information asymmetries prevent these preferences from translating into policy outcomes. The pattern suggests that successful AI governance may require mobilizing diffuse public preferences into focused political pressure comparable to industry lobbying efforts.

State-Level Innovation and Federal Preemption

State governments have emerged as the primary venue for AI governance innovation: frontier model regulation has been reintroduced in California and New York, with similar bills in Rhode Island, Michigan, and Illinois[^13]. However, unlike 2024, when Colorado enacted broad AI regulation, 2025 saw a shift toward fewer governance requirements and more transparency measures[^13].

This evolution from comprehensive frameworks to transparency-focused approaches reflects lessons learned from SB 1047's failure. California's approach of enacting 18 AI-related laws in 2024[^10] demonstrates how incremental sectoral regulation can achieve passage where comprehensive frameworks fail.

Federal preemption arguments remain a significant challenge for ambitious state initiatives. Trump's executive order tasks agencies with preparing legislative recommendations for federal AI framework[^16] that would explicitly preempt state laws, creating pressure for state action before federal override occurs.

Successful Approaches and Lessons Learned

Regulatory Approach Success Rates by Type

| Approach | Examples | Success Rate | Industry Opposition Level | Key Success Factors |
| --- | --- | --- | --- | --- |
| Comprehensive Frameworks | SB 1047, Algorithmic Accountability Act | 5% | Very High ($10M+ lobbying) | Requires crisis catalyst |
| Sectoral/Use-Case | Colorado AI Act, AI in hiring laws | 35% | Moderate | Narrow scope, clear harms |
| Transparency/Disclosure | California AB 2013, AI labeling | 60% | Low-Moderate | No liability, information-focused |
| Executive Actions | Biden AI EO | 90% (within admin) | Low | Uses existing authority |
| Voluntary Standards | G7 Hiroshima Process, NIST AI RMF | 95% (adoption) | Minimal | Industry-friendly, non-binding |
| International Treaties | UN binding AI frameworks | 0% | N/A | Great power competition |

Success rates based on passage/implementation data from 2019-2025 policy initiatives
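The 150-bills-zero-passed outcome can be put in rough quantitative perspective. A back-of-the-envelope sketch (my own arithmetic, not a figure from the sources; it treats bills as independent, which real bills are not):

```python
# Probability that none of n bills pass when each has an independent
# per-bill passage probability p: (1 - p) ** n. An intuition pump,
# not a model of the legislative process.
n = 150
for p in (0.01, 0.02, 0.05):
    prob_zero = (1 - p) ** n
    print(f"p = {p:.0%}: P(0 of {n} pass) = {prob_zero:.1%}")
# Even at a 1% per-bill rate, zero passages occurs about 22% of the time.
```

At a 5% per-bill rate, 150 consecutive failures would be roughly a 1-in-2,000 event, which suggests the effective per-bill rate for the 118th Congress cohort was well below even the ~5% figure the table gives for comprehensive frameworks.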

Incremental Progress Models

The most successful AI governance initiatives build incrementally rather than attempting comprehensive solutions. California's evolution from SB 1047's veto to SB 53's passage in 2025[^10] demonstrates how transparency-focused approaches can succeed where liability-based frameworks fail. This pattern suggests that information-based interventions serve as necessary precursors to more substantive regulatory obligations.

Executive actions prove highly effective within their scope limitations. The Biden AI Executive Order of October 2023[^24] succeeded by building on existing regulatory authorities rather than creating new ones, while narrow sectoral regulations addressing specific applications (like AI in hiring decisions) face less systematic opposition than broad technology mandates.

Disclosure requirements consistently achieve higher passage rates than performance mandates or liability provisions. Requirements for algorithmic transparency or AI-generated content labeling generate less industry opposition than rules imposing legal responsibility for harmful outcomes, suggesting that procedural obligations face fewer barriers than substantive constraints.

Learning from Successful Technology Regulation

Analysis of successful technology regulation in other domains provides insights for AI governance approaches. The EU's GDPR serves as "the most comprehensive data protection regulation to date" and "arguably the first ambitious regulation specifically targeted at regulating the digital economy"[^25]. However, studies of GDPR's economic impact show initial compliance costs for Fortune Global 500 companies amounted to $7.8 billion[^26], while bureaucratically complex laws build barriers to entry and concentrate market share among large incumbents[^26].

The GDPR experience suggests both opportunities and risks for AI regulation. A 2021 study showed persistent decline in venture investment deals for EU technology firms compared to US[^27], while IT sector SMEs faced twice the profit hit compared to other industries[^26]. These outcomes warn about potential competitive disadvantages from complex regulatory frameworks, but also demonstrate that ambitious technology regulation can be implemented despite industry opposition.

The key difference may be timing and catalyzing events. GDPR emerged after major data breaches that shifted public opinion toward favoring privacy protection over industry concerns. AI regulation may require similar catalyzing events—significant AI-caused harms that overcome current industry opposition and political inertia.

Building Effective Coalitions

Successful AI governance initiatives build broad coalitions that include both technology industry participants and civil rights advocates, rather than relying solely on safety-focused arguments. Bipartisan framing that emphasizes economic competitiveness, national security, and innovation leadership alongside safety concerns proves more durable than approaches that position regulation as primarily constraining industry.

The percentage of companies providing board oversight disclosure increased by more than 84% year-over-year[^28], while shareholder proposals related to AI more than quadrupled compared to 2023[^28]. This growing corporate governance pressure creates potential allies within industry for reasonable regulatory requirements.

Technical communities within industry can serve as allies for safety-focused regulation when their concerns align with external advocacy. AI safety researchers within major technology companies often share external researchers' concerns about risks, though their ability to influence corporate positions remains limited. Building relationships with these internal allies provides valuable intelligence about industry positions and can potentially moderate opposition to reasonable proposals.

International Coordination Opportunities

International coordination faces continued challenges from great power competition, but technical cooperation offers pathways for progress. The UK AI Safety Institute was renamed the AI Security Institute[^21], while EU AI Act compliance became a reality as obligations for general-purpose AI models became applicable on August 2, 2025[^21].

The EU approach of implementing binding regulation creates market pressure that extends beyond European jurisdiction. Companies developing AI systems must comply with EU requirements to access European markets, effectively globalizing European standards through market mechanisms rather than diplomatic agreements. This "Brussels Effect" may prove more effective than treaty-based coordination that requires consensus among competing nations.

Limitations of Current Analysis

The analysis of failed AI policy proposals faces several important limitations that affect conclusions about systematic patterns and future trajectories:

Selection Bias in Failure Cases: The focus on failed and stalled proposals may overestimate the difficulty of AI governance by excluding successful but less visible initiatives. Regulatory achievements through executive actions, agency rulemaking, and incremental policy changes may create more governance progress than legislative failures suggest. The emphasis on high-profile comprehensive frameworks like SB 1047 may miss successful sectoral regulations that address specific AI applications without broad media attention.

Temporal Limitations and Policy Lag: Most analyzed failures occurred during 2019-2025, representing early-stage AI governance attempts when both technology capabilities and regulatory understanding were less mature. Policy cycles typically require multiple proposal iterations before achieving success, suggesting that current failures may represent normal early-stage policy development rather than permanent barriers to regulation. The time horizon may be insufficient to identify longer-term success patterns.

Industry Opposition Measurement Challenges: Lobbying expenditure data captures only reported influence activities, excluding informal relationships, revolving door appointments, and indirect influence through think tanks and academic institutions. The focus on monetary spending may underestimate industry influence exercised through information provision, technical expertise, and regulatory capture mechanisms that operate outside formal lobbying requirements.

Jurisdictional and Definitional Scope: The analysis primarily focuses on US federal and California state initiatives, potentially missing successful AI governance approaches in other jurisdictions that could provide different models. The definition of "AI policy" itself creates boundaries that may exclude relevant technology governance approaches in related domains like data protection, algorithmic accountability, or automated decision systems.

Public Opinion Polling Limitations: Survey data on AI regulation support may reflect general attitudes rather than preferences about specific policy mechanisms. Public support for "AI safety rules" in abstract may not translate to support for particular regulatory approaches once costs and tradeoffs become apparent. The disconnect between public opinion and policy outcomes may reflect normal democratic representation patterns rather than AI-specific governance failures.

Counterfactual Analysis Gaps: The analysis cannot definitively establish what would have happened under different conditions—whether failed proposals might have succeeded with different timing, framing, or coalition-building strategies. The identification of "systematic patterns" may reflect correlation rather than causation, where apparent failure mechanisms actually represent symptoms of deeper political or institutional challenges affecting technology governance generally.

Current State and Future Trajectory

2025-2026 Political Dynamics

The immediate trajectory of AI governance is being shaped by significant political transitions and evolving industry strategies. Trump's Executive Order promoting "a minimally burdensome national policy framework" for AI[^16] represents a systematic reversal of previous safety-focused initiatives, while expectations that Congress will continue considering AI legislation that preempts state laws[^16] creates pressure for state action before federal override.

State-level initiatives show signs of strategic adaptation. The shift from California's vetoed SB 1047 to transparency-focused SB 53 in 2025[^10] demonstrates how policy approaches evolve to overcome initial resistance. Multiple states following suit with frontier model regulation in New York, Rhode Island, Michigan, and Illinois[^13] suggests that regulatory experimentation continues despite federal preemption threats.

Industry positions show tactical evolution rather than fundamental change. While some AI companies acknowledge the inevitability of regulation and seek to influence its form, the pullback in Q4 lobbying spending with Nvidia dropping from $1.9M to $1.4M and OpenAI from $920K to $890K[^6] may reflect confidence that the new administration will pursue industry-favorable policies rather than reduced engagement with policy processes.

International Governance Evolution

International AI governance is proceeding along parallel tracks with limited coordination. The EU AI Act's implementation starting February 2, 2025, with obligations for general-purpose AI models becoming applicable on August 2, 2025[^21], creates the world's first comprehensive binding AI regulation framework. This "Brussels Effect" may prove more influential than treaty-based coordination by establishing market standards that companies must meet to access European markets.

The Paris AI Action Summit's call for harmonised global standards[^20] represents continued diplomatic engagement, but without binding commitments from major powers. The UK AI Safety Institute's renaming as the AI Security Institute[^21] demonstrates how technical cooperation can continue despite political disagreements about comprehensive governance frameworks.

US-China competition continues to block multilateral treaty approaches, but may create opportunities for bilateral technical cooperation on shared risks like critical infrastructure protection or financial system stability. The recognition that AI risks transcend national boundaries may eventually overcome sovereignty concerns, but likely requires demonstrated harms that create compelling incentives for coordination.

Technology Development and Governance Adaptation

The rapid evolution of AI capabilities continues to outpace regulatory responses, but may also create new political dynamics that favor governance. If AI systems begin causing significant economic disruption or safety incidents, public pressure for regulation could overcome current industry opposition. The 97% public support for AI safety rules[^4] provides a foundation for policy change if catalyzing events mobilize this latent support.

Technical developments in AI safety and interpretability may also reduce industry opposition to governance requirements. As safety evaluation methods improve and compliance costs decline, regulatory requirements may become more feasible and less economically burdensome. The development of technical standards through organizations like NIST provides foundations for future mandatory requirements while building consensus around best practices.

The emergence of more capable AI systems may also shift industry incentives toward supporting regulation as a competitive moat. If safety requirements become expensive to implement, established companies with resources to ensure compliance may favor regulation that creates barriers to entry for competitors. This dynamic has historical precedents in industries like finance and pharmaceuticals, where regulatory compliance creates competitive advantages for large incumbents.

Long-term Institutional Evolution

The systematic failure of traditional legislative approaches to AI governance may drive institutional innovation in democratic decision-making. The speed and technical complexity of AI development may exceed the capacity of traditional legislative deliberation, potentially requiring new hybrid institutions that combine democratic accountability with technical expertise and adaptive capacity.

Regulatory sandboxes, algorithmic audit requirements, and adaptive governance frameworks represent potential alternatives to traditional rule-making processes. These approaches may allow more flexible responses to technological change while maintaining democratic oversight. However, such institutional innovations face their own political obstacles and may require constitutional changes in some jurisdictions.

The relationship between private industry and public governance may also evolve toward new forms of collaboration or oversight. Public-private partnerships, industry self-regulatory organizations with government backstops, and hybrid governance institutions represent potential alternatives to traditional adversarial regulatory relationships. The effectiveness of such approaches depends on designing accountability mechanisms that prevent industry capture while enabling efficient coordination.

flowchart LR
  subgraph Current["Current State 2025"]
      A[97% Public Support<br/>for AI Safety Rules]
      B[Federal Legislative<br/>Gridlock: 150+ Bills, 0 Laws]
      C[Industry Opposition:<br/>$2.71M AI Lobbying]
      D[State Innovation:<br/>18 CA Laws, SB 53 Success]
  end

  subgraph Scenarios["Future Scenarios"]
      E[Catalyzing Event<br/>Pathway]
      F[Incremental Progress<br/>Pathway]
      G[Industry Capture<br/>Pathway]
      H[International Pressure<br/>Pathway]
  end

  subgraph Outcomes["Potential Outcomes"]
      I[Comprehensive<br/>US Federal Regulation]
      J[Sectoral/State<br/>Patchwork]
      K[Voluntary Industry<br/>Self-Regulation]
      L[EU Standards<br/>Global Adoption]
  end

  A --> E
  B --> F & G
  C --> G & K
  D --> F & J
  
  E --> I
  F --> J
  G --> K
  H --> L

  style E fill:#e74c3c,color:#fff
  style F fill:#f39c12,color:#fff
  style G fill:#95a5a6,color:#fff
  style H fill:#3498db,color:#fff

The ultimate success or failure of AI governance may depend on whether democratic institutions can adapt to govern rapidly evolving technologies while preserving accountability and human agency. The pattern of current failures suggests that existing approaches are inadequate, but whether better alternatives will emerge before they become urgently necessary remains the critical uncertainty shaping humanity's relationship with artificial intelligence.


References

1. Algorithmic Accountability Act · US Congress · Government

The Algorithmic Accountability Act of 2023 is a U.S. Senate bill requiring companies to assess automated decision systems for bias, accuracy, and privacy risks before deployment. It mandates impact assessments for high-risk AI systems and grants the FTC authority to enforce compliance. The bill represents a significant legislative effort to establish federal oversight of consequential algorithmic systems.

★★★★★

This ISS STOXX analysis examines the rapid growth of corporate board oversight of AI among S&P 500 companies, finding an 84% year-over-year increase in AI oversight disclosures and a quadrupling of AI-related shareholder proposals in 2024. The report projects intensifying scrutiny in 2025 as investors demand greater transparency on AI ethics, risk management, and return on investment.

Claims (1)
The [percentage of companies providing board oversight disclosure increased by more than 84% year-over-year](https://corpgov.law.harvard.edu/2025/04/02/ai-in-focus-in-2025-boards-and-shareholders-set-their-sights-on-ai/), while [shareholder proposals related to AI more than quadrupled compared to 2023](https://corpgov.law.harvard.edu/2025/04/02/ai-in-focus-in-2025-boards-and-shareholders-set-their-sights-on-ai/).
Accurate · 100% · Feb 22, 2026
The percentage of companies providing some disclosure of board oversight increased by more than 84% year over year and more than 150% since 2022.

Issue One analysis reveals that six major tech companies (Meta, Alphabet, Microsoft, ByteDance, X, and Snap) spent a record $61.5 million on lobbying in 2024, a 13% increase from 2023, while simultaneously cultivating relationships with the incoming Trump administration through inaugural donations. These companies deployed roughly 300 lobbyists as Congress debated child safety, data privacy, and foreign adversary legislation. The report highlights how concentrated corporate influence shapes the regulatory environment for AI and technology governance.

California Governor Gavin Newsom vetoed SB 1047 in September 2024, a landmark AI safety bill that would have imposed extensive safety protocols on large AI systems. Newsom cited concerns about the bill's narrow focus on large models while ignoring smaller system risks, though lobbying by major tech firms and opposition from congressional leaders also played a role. Newsom did sign more targeted AI bills covering training data disclosure and AI-generated content watermarking.

★★★★☆

The UN Secretary-General's High-level Advisory Body on AI released 'Governing AI for Humanity' in September 2024, proposing a globally inclusive and distributed architecture for AI governance. The report includes seven recommendations to address gaps in current AI governance, calls for international cooperation on AI risks and opportunities, and is based on extensive global consultations involving over 2,000 participants across all regions.

★★★★☆

This is the official U.S. Senate roll call vote record for the confirmation of Jeannette A. Vargas to serve as U.S. District Judge for the Southern District of New York, voted on September 10, 2024. The nomination was confirmed 51-43 along largely partisan lines. This vote has no direct relevance to AI safety or related topics.

Claims (1)
The House attempted to include a [10-year moratorium on state and local AI laws in budget reconciliation](https://www.congress.gov/bill/118th-congress/house-bill/815), which was [stripped by a near-unanimous 99-1 Senate vote](https://www.senate.gov/legislative/LIS/roll_call_votes/vote1182/vote_118_2_00234.htm).

The Hiroshima Process International Code of Conduct, developed under the G7 Hiroshima AI Process, provides voluntary guidance for organizations developing advanced AI systems including foundation models and generative AI. It outlines specific actions organizations should take across the AI lifecycle—covering risk identification, mitigation, transparency, and accountability—grounded in a risk-based approach and building on OECD AI Principles.

The G7's Hiroshima AI Process produced a voluntary International Code of Conduct for organizations developing advanced AI systems, including foundation models and generative AI. Building on OECD AI Principles, it provides a non-exhaustive, risk-based framework covering the full AI lifecycle from design through deployment. The document is intended as a living framework updated through multistakeholder consultations.

★★★★☆
9. SB 1047 Veto Message · gov.ca.gov · Government

Governor Newsom vetoed California's SB 1047, which would have imposed safety requirements on large AI model developers based on computational thresholds. He argued the bill's size-based regulatory approach is flawed because smaller specialized models can pose equal risks, and that effective AI regulation must be risk-based, contextually aware of deployment environments, and empirically grounded rather than relying on model scale as a proxy for danger.

Claims (1)
[State versus federal tensions proved particularly problematic for SB 1047](https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf), where federal preemption arguments assumed uniform national standards are preferable to regulatory experimentation, but federal inaction left this assumption untested.

CSIS analysts examine the UN's September 2025 launch of the Global Dialogue on AI Governance, which establishes an annual multilateral platform for discussing AI safety, developing-country capacity gaps, governance interoperability, and socioeconomic implications. The analysis situates this initiative within broader geopolitical shifts and competing visions for global AI governance.

★★★★☆

Analysis of the Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law — the first legally binding international AI governance treaty, opened for signature in September 2024. The article examines whether this treaty can bridge global AI governance gaps, drawing parallels to the Budapest Convention on Cybercrime and highlighting challenges of achieving broad international adherence, particularly from Global South nations and major powers like China and Russia.

This CEPA analysis examines Western democracies' concerns about UN Secretary-General Guterres' High-Level AI Advisory Board, arguing that the UN's opaque governance process and inclusive framing could enable China and autocratic states to shape global AI policy. The article critiques the UN's rushed consultations and proposed new bureaucratic structures as potentially undermining democratic AI governance frameworks already established by the G7, EU, and OECD.

AI companies significantly increased federal lobbying expenditures in 2024, with 648 companies spending on AI lobbying versus 458 in 2023. Anthropic more than doubled its spending from $280,000 to $720,000, while major tech firms like Microsoft, Google, and OpenAI also ramped up engagement. This reflects growing industry effort to shape U.S. AI policy amid regulatory uncertainty.

★★★☆☆
Claims (1)
Meanwhile, industry opposition has intensified dramatically, with [648 companies lobbying on AI in 2024 versus 458 in 2023](https://www.opensecrets.org/federal-lobbying/issues/summary?cycle=2024&id=AIS)—a 141% year-over-year increase—while [AI companies collectively spent \$2.71 million on lobbying in 2024, four times their 2023 spending](https://techcrunch.com/2025/01/24/ai-companies-upped-their-federal-lobbying-spend-in-2024-amid-regulatory-uncertainty/).
Minor issues · 90% · Feb 22, 2026
According to data compiled by OpenSecrets, 648 companies spent on AI lobbying in 2024 versus 458 in 2023, representing a 141% year-over-year increase.

The claim states that AI companies collectively spent $2.71 million on lobbying in 2024, while the source specifies that OpenAI, Anthropic, and Cohere spent this amount combined. The claim also states that AI companies spent four times their 2023 spending, while the source specifies that OpenAI, Anthropic, and Cohere spent more than four times the total that the three AI labs spent in 2023.

California Governor Gavin Newsom vetoed SB 1047 on September 29, 2024, a landmark AI safety bill authored by State Senator Scott Wiener that would have required companies developing large AI models to implement safety protocols to prevent 'critical harms.' The veto halted what would have been one of the most significant state-level AI regulatory frameworks in the US, applying liability to developers of frontier AI models above certain compute thresholds.

★★★☆☆

This American Progress article examines a proposed federal legislative provision that would impose a 10-year moratorium on state and local governments enacting or enforcing their own AI regulations. The piece analyzes the policy implications, arguing this would preempt a patchwork of state-level AI safety laws and centralize regulatory authority—or create a regulatory vacuum—at the federal level.

The ASSESS AI Act is a 2023 Senate bill directing the President to establish an 18-month task force to evaluate privacy, civil rights, and civil liberties implications of federal AI use. The task force would identify regulatory and policy gaps and recommend legislative reforms to ensure AI systems comply with constitutional protections. It represents a congressional effort to proactively address governance gaps before AI-related harms proliferate in government applications.

★★★★★

This White House fact sheet outlines concrete AI governance actions taken by the Biden-Harris administration following President Biden's commitments to AI governance at the United Nations General Assembly in September 2024. It details executive branch initiatives spanning safety standards, international coordination, federal agency AI use, and risk mitigation measures. The document represents a significant policy milestone in U.S. government efforts to operationalize responsible AI development and deployment.

★★★★☆
Claims (1)
[Great power competition, particularly US-China tensions](https://www.whitehouse.gov/briefing-room/statements-releases/2024/09/24/fact-sheet-biden-harris-administration-announces-key-ai-actions-following-president-bidens-commitment-to-ai-governance-at-the-united-nations/), represents the primary obstacle to international AI agreements.

This Brookings Institution analysis examines the surge in state-level AI legislation in 2024, with nearly 700 AI-related bills introduced across U.S. states, and argues that a federal moratorium on state AI laws could undermine this regulatory momentum. The piece highlights the diversity of state approaches to AI governance and the tension between federal preemption and state innovation in AI oversight.

★★★★☆
19. Protect Elections from Deceptive AI Act · Brennan Center for Justice

The Brennan Center for Justice maintains a tracker of artificial intelligence legislation across the United States, with particular focus on laws addressing deceptive AI use in elections. It serves as a reference resource for tracking the evolving policy landscape around AI regulation, including the Protect Elections from Deceptive AI Act.

★★★★☆

OpenSecrets reports that lobbying activity related to artificial intelligence reached record levels in 2024, with a surge in companies and organizations seeking to influence AI-related legislation and regulation. The data highlights the growing political economy around AI governance, as tech firms, industry groups, and other stakeholders invest heavily in shaping federal AI policy.

★★★★☆

Representative Ro Khanna led a letter from California's congressional delegation to Governor Gavin Newsom addressing AI policy and governance matters in California. The letter likely concerns state-level AI legislation, safety standards, or regulatory frameworks relevant to the AI industry concentrated in California.

Claims (1)
[Meta, OpenAI, and House Speaker Nancy Pelosi opposed the bill](https://www.politico.com/news/2024/08/22/pelosi-california-ai-regulation-00175510), with [Members of California's congressional delegation—including Speaker Emerita Pelosi, Representatives Ro Khanna, Anna Eschoo, Zoe Lofgren, and Jay Obernolte—publicly urging the Governor to reject it](https://khanna.house.gov/media/press-releases/release-khanna-leads-california-congressional-delegation-letter-governor-newsom).

OpenAI dramatically expanded its lobbying expenditures in 2024, increasing spending nearly sevenfold compared to the prior year. This reflects the company's growing efforts to shape AI-related legislation and regulatory frameworks in Washington, D.C., as policymakers increasingly focus on governing advanced AI systems.

★★★★☆

This legal analysis from Brownstein Hyatt Farber Schreck examines the Senate's near-unanimous 99-1 vote stripping a federal AI preemption provision from a bill, preserving states' authority to regulate artificial intelligence. The piece explains the implications for the patchwork of state-level AI laws and what it means for companies navigating compliance across jurisdictions.

The Algorithmic Accountability Act of 2023 is U.S. House legislation requiring companies to assess and audit automated decision systems for accuracy, fairness, bias, privacy, and security impacts. It mandates impact assessments for high-risk AI systems used in critical decisions affecting consumers, with reporting requirements to the FTC. The bill represents a significant legislative effort to impose accountability requirements on AI deployment in consequential domains.

★★★★★
Claims (1)
Major failed proposals included the [Algorithmic Accountability Act](https://www.congress.gov/bill/118th-congress/house-bill/5628), reintroduced for the third time by Senators Ron Wyden and Cory Booker and Rep.
Inaccurate · 75% · Feb 22, 2026
H.R.5628 - Algorithmic Accountability Act of 2023

The bill was introduced by Rep. Clarke, Yvette D., not Senators Ron Wyden and Cory Booker and Rep.

H.R.815 is an emergency supplemental appropriations bill for fiscal year 2024 that became Public Law 118-50 on April 24, 2024. While primarily a broad federal spending bill, it is notable in the AI governance context as it included provisions related to national security and technology funding. The bill passed with bipartisan support in the Senate (70-29).

★★★★★
Claims (1)
The House attempted to include a [10-year moratorium on state and local AI laws in budget reconciliation](https://www.congress.gov/bill/118th-congress/house-bill/815), which was [stripped by a near-unanimous 99-1 Senate vote](https://www.senate.gov/legislative/LIS/roll_call_votes/vote1182/vote_118_2_00234.htm).

This article analyzes the landscape of US federal AI-related legislation, examining why over 150 bills have been introduced in Congress without any successfully passing into law. It explores the regulatory gaps, political dynamics, and challenges in governing rapidly advancing AI technology at the federal level.

A national Gallup survey finds that 80% of Americans prioritize AI safety regulations over rapid AI development, with strong bipartisan support for government oversight and independent testing of AI systems. The poll reflects broad public concern about AI risks and desire for structured governance frameworks. Results suggest significant public backing for safety-focused policy approaches.

★★★★☆
Claims (1)
[Gallup polling from May 2025 found 97% of Americans agree AI safety should be subject to rules and regulations](https://news.gallup.com/poll/694685/americans-prioritize-safety-data-security.aspx), while [Public Citizen analysis shows 72% of Americans want more regulation of the AI industry—a 15-point increase from the prior year](https://www.citizen.org/article/years-of-polling-show-overwhelming-voter-support-for-a-crackdown-on-ai/).
Minor issues · 90% · Feb 22, 2026
Almost all Americans (97%) agree that AI safety and security should be subject to rules and regulations, but views diverge on who should be responsible for creating them.

The Gallup poll was conducted April 25-May 5, 2025, not just May 2025. The Gallup poll found 80% of U.S. adults believe the government should maintain rules for AI safety and data security, even if it means developing AI capabilities more slowly. The 97% figure refers to the percentage of Americans who agree that AI safety and security should be subject to rules and regulations.

Former House Speaker Nancy Pelosi publicly opposed California's SB 1047 AI safety bill, calling it 'more harmful than helpful' and adding significant political pressure against the legislation. Her opposition joined a growing chorus of tech industry and political figures pushing back against the bill, which would have imposed safety requirements on large AI model developers. The bill was ultimately vetoed by Governor Newsom in September 2024.

★★★☆☆
Claims (1)
[Meta, OpenAI, and House Speaker Nancy Pelosi opposed the bill](https://www.politico.com/news/2024/08/22/pelosi-california-ai-regulation-00175510), with [Members of California's congressional delegation—including Speaker Emerita Pelosi, Representatives Ro Khanna, Anna Eschoo, Zoe Lofgren, and Jay Obernolte—publicly urging the Governor to reject it](https://khanna.house.gov/media/press-releases/release-khanna-leads-california-congressional-delegation-letter-governor-newsom).
29. Analysis in AI &amp; Society · Springer (peer-reviewed) · Paper

This paper argues that AI safety regulation is particularly vulnerable to regulatory capture, where powerful incumbents exploit safety rules for economic or political advantage. It details the specific harms and injustices that captured AI safety regulations could produce, and critically reviews existing proposals to mitigate this risk, cautioning that well-intentioned safety frameworks may be weaponized by dominant industry players.

★★★★☆
Claims (1)
[Academic research on AI safety and regulatory capture published in 2025](https://link.springer.com/article/10.1007/s00146-025-02534-0) identifies how organizations with economic power use regulations to unjustly enrich themselves, while [empirical studies examine how AI companies influence policy through agenda-setting, advocacy, and academic capture](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4931927).
Inaccurate · 50% · Feb 22, 2026
Researchers, politicians, and the general public support safety regulations on the production and use of AI technology. Yet regulations on new technology are susceptible to the harmful phenomenon of regulatory capture, in which organizations and institutions with economic or political power exert that power to use regulations to unjustly enrich themselves.

The claim mentions 'empirical studies examine how AI companies influence policy through agenda-setting, advocacy, and academic capture', but this information is not found in the provided source text.

SB 1047 is California's 2024 landmark legislation requiring frontier AI model developers to implement safety protocols, maintain shutdown capabilities, and produce detailed safety documentation before training covered models. It establishes oversight through the California Department of Technology and creates liability frameworks for developers whose models cause specified harms. Though ultimately vetoed by Governor Newsom, it represents one of the most significant state-level AI regulatory efforts and shaped subsequent AI governance debates.

31. Colorado AI Act (SB 24-205) · leg.colorado.gov · Government

Colorado's SB 24-205 is a landmark state-level AI governance law establishing consumer protection duties for high-risk AI systems, effective February 1, 2026. It creates obligations for both AI developers and deployers to prevent algorithmic discrimination, conduct impact assessments, and provide consumers with transparency and appeal mechanisms for consequential AI-driven decisions.

Executive Order 14110, signed by President Biden on October 30, 2023, established comprehensive federal directives for AI safety, security, and governance in the United States. It required safety testing and reporting for frontier AI models, directed agencies to address AI risks across sectors including national security and civil rights, and aimed to position the US as a global leader in responsible AI development. The page content is currently unavailable, but the order is a landmark AI governance document.

★★★★☆
Claims (1)
The [Biden AI Executive Order of October 2023](https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/) succeeded by building on existing regulatory authorities rather than creating new ones, while narrow sectoral regulations addressing specific applications (like AI in hiring decisions) face less systematic opposition than broad technology mandates.

The NIST AI RMF is a voluntary, consensus-driven framework released in January 2023 to help organizations identify, assess, and manage risks associated with AI systems while promoting trustworthiness across design, development, deployment, and evaluation. It provides structured guidance organized around core functions and is accompanied by a Playbook, Roadmap, and a Generative AI Profile (2024) addressing risks specific to generative AI systems.

★★★★★
Citation verification: 7 verified, 3 flagged, 12 unchecked of 28 total

Related Wiki Pages

Top Related Pages

Risks

AI Surveillance and US Democratic Erosion

Approaches

AI-Assisted Deliberation

Analysis

US State AI Legislation Landscape

Authoritarian Tools Diffusion Model

Other

Scott Wiener

Concepts

International Compute Regimes

Historical

International AI Safety Summit Series

Key Debates

AI Structural Risk Cruxes

Government Regulation vs Industry Self-Governance

Policy

China AI Regulatory Framework