Longterm Wiki · Updated 2026-02-01

New York RAISE Act

Policy

The New York RAISE Act is among the first comprehensive state-level AI safety laws with enforceable requirements for frontier AI developers, establishing mandatory safety protocols, incident reporting, and third-party audits. While significantly weakened from its original form through amendments, it sets an important precedent for state AI regulation and provides actionable compliance frameworks for major AI companies.

Introduced: March 2025
Status: Enacted
Authors: Senator Andrew Gounardes, Assemblymember Alex Bores
Scope: State
Passed Legislature: June 12, 2025
Signed: December 19, 2025
Effective: January 1, 2027
Related Policies: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act · California SB 53 · EU AI Act

Quick Assessment

Type: State legislation regulating frontier AI development
Status: Signed December 19, 2025; effective January 1, 2027
Scope: Large developers of frontier models ($100M+ compute spend)
Key Mechanism: Mandatory safety protocols, third-party audits, incident reporting
Enforcement: NY Attorney General; $1M-$3M civil penalties
Similar Initiatives: California's Transparency in Frontier AI Act (TFAIA)
Official Website: nyassembly.gov

Overview

The New York Responsible Artificial Intelligence Safety and Education (RAISE) Act (S6953B/A6453B) is state legislation signed by Governor Kathy Hochul on December 19, 2025, that establishes comprehensive safety and transparency requirements for developers of frontier AI models.1 The law takes effect January 1, 2027, and represents one of the first state-level attempts to mandate enforceable safety measures for the most powerful AI systems.

The Act applies specifically to "large developers" training frontier models—defined as AI systems trained with over $100 million in compute resources and 10²⁶+ FLOPs.2 It requires them to adopt written safety protocols before deployment, conduct annual third-party audits, and report safety incidents to state authorities within 72 hours.3

The legislation emerged from bipartisan concern about AI risks such as biological weapon design assistance, self-replication, deception, automated crime, and model theft, amid what legislators described as a lack of adequate federal regulation.4 After passing the New York State Legislature in June 2025 with overwhelming support (backed by 84% of New Yorkers according to sponsors), the bill was amended to align more closely with California's TFAIA before being signed into law.5

Legislative History

Origins and Sponsorship

The RAISE Act was sponsored by State Senator Andrew Gounardes and Assemblymember Alex Bores, who introduced the legislation to address safety risks from frontier AI models.6 The sponsors emphasized that the bill targeted only the largest AI developers—those spending over $100 million on training—without stifling innovation from smaller companies or startups.7

Assemblymember Bores highlighted the strong public support for the legislation, noting that 84% of New Yorkers backed the commonsense safeguards and that AI safety experts had been calling urgently for such regulation.8 Senator Gounardes framed the bill as prioritizing safety over Big Tech profits while still enabling AI innovation.9

Legislative Process and Amendments

The bill was introduced in early 2025 and passed the New York State Legislature in June 2025 with overwhelming bipartisan support.10 However, the original version contained significantly stronger provisions and penalties than what was ultimately enacted.

The initial legislative version included:

  • Civil penalties of up to $10 million for first violations and $30 million for subsequent violations
  • A deployment prohibition that would have barred models posing "unreasonable risk of critical harm"
  • Stricter compliance requirements11

Following the bill's passage, Governor Hochul negotiated amendments with legislative sponsors to reduce the regulatory burden and align New York's approach with California's recently enacted TFAIA (SB 53), which was signed into law in September 2025.12 The final amended version scaled back penalties to $1 million for first violations and $3 million for subsequent violations, removed the deployment ban, and shifted focus more toward transparency and reporting rather than pre-deployment prohibitions.13

Governor Hochul signed the amended RAISE Act into law on December 19, 2025, calling it "nation-leading legislation" that establishes a strong and sensible standard for AI transparency and safety amid federal inaction.14

Key Requirements

Covered Entities

The RAISE Act applies to "large developers" of frontier AI models, defined through two primary thresholds:

  1. Compute threshold: Models trained using more than 10²⁶ floating point operations (FLOPs) with aggregate compute costs exceeding $100 million15
  2. Revenue threshold: Developers with annual revenue exceeding $500 million (added in post-passage amendments)16

The law explicitly exempts accredited universities from compliance requirements, focusing enforcement exclusively on commercial AI developers.17
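As a rough illustration of how the coverage rules described above might fit together, the sketch below screens a hypothetical developer against them. The threshold constants, field names, and especially the assumption that the compute and revenue tests combine conjunctively are interpretations of this article, not the statutory text.

```python
from dataclasses import dataclass

# Thresholds as described in this article; the final statutory text
# (including chapter amendments) controls and may differ.
FLOP_THRESHOLD = 1e26                 # training compute, in FLOPs
COMPUTE_COST_THRESHOLD = 100_000_000  # aggregate training compute cost, USD
REVENUE_THRESHOLD = 500_000_000       # annual revenue, USD (post-passage amendment)

@dataclass
class Developer:
    training_flops: float
    compute_cost_usd: float
    annual_revenue_usd: float
    is_accredited_university: bool = False

def is_large_developer(d: Developer) -> bool:
    """Illustrative screen for possible 'large developer' coverage."""
    if d.is_accredited_university:
        return False  # accredited universities are explicitly exempt
    meets_compute = (d.training_flops > FLOP_THRESHOLD
                     and d.compute_cost_usd > COMPUTE_COST_THRESHOLD)
    meets_revenue = d.annual_revenue_usd > REVENUE_THRESHOLD
    # Assumption: both thresholds must be met; how the statute actually
    # combines them should be verified against the enacted text.
    return meets_compute and meets_revenue
```

Any real coverage determination would of course turn on the enacted definitions, not a screen like this.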

Safety and Security Protocols

Large developers must develop, publish, and continuously maintain written safety and security protocols before deploying frontier models.18 These protocols must address:

  • Risk identification and mitigation for "critical harm"—defined as incidents causing death or serious bodily injury to 100+ people, damage exceeding $1 billion, assistance in creating weapons of mass destruction, or autonomous dangerous behavior19
  • Cybersecurity measures to protect models from theft, unauthorized access, or model escape
  • Testing procedures to evaluate model capabilities and potential risks, including self-replication, deception, biological weapon design assistance, and large-scale automated criminal activity20
  • Internal governance structures, including designation of a senior compliance officer responsible for protocol implementation21

Developers must publish their safety protocols with appropriate redactions for trade secrets and privacy concerns, while providing full access to the New York Attorney General and Division of Homeland Security and Emergency Services.22

Annual Reviews and Audits

The Act mandates that large developers conduct:

  • Annual safety reviews of their protocols, updating them as needed based on new risks or capabilities
  • Independent third-party audits to verify compliance with safety requirements and assess the effectiveness of risk mitigation measures23

These ongoing evaluation requirements are intended to ensure that safety measures evolve alongside rapidly advancing AI capabilities.

Incident Reporting

Developers must report "safety incidents" relating to frontier models to the New York Attorney General and DHSES within 72 hours of discovery.24 Reportable incidents include unauthorized access, model misuse, and critical control failures. Developers must also report cases where they reasonably believe an incident has occurred.24

The RAISE Act's legislative memo outlined key concerns that informed these provisions, including testing that revealed models attempting self-replication and deception, risks related to biological weapon design assistance, and industry concerns about the lack of federal regulation.25

Reports must include the date of the incident, the reasons the incident qualifies as a safety incident, and a short and plain statement describing what occurred.26
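The 72-hour window and the required report contents described above can be sketched as a deadline calculation plus a minimal report record. The function and field names here are hypothetical illustrations, not drawn from the Act or any official filing system.

```python
from datetime import datetime, timedelta, timezone

# Reporting window as described in this article.
REPORTING_WINDOW = timedelta(hours=72)

def report_deadline(discovered_at: datetime) -> datetime:
    """Latest time a safety-incident report may be filed (illustrative)."""
    return discovered_at + REPORTING_WINDOW

def build_incident_report(incident_date: str, reasons: list, statement: str) -> dict:
    # Minimal fields named in the article: the incident date, why the
    # incident qualifies, and a short plain statement of what occurred.
    # Field names are hypothetical.
    return {
        "incident_date": incident_date,
        "qualification_reasons": reasons,
        "statement": statement,
    }
```

For example, an incident discovered at midnight UTC on January 1 would need to be reported by midnight UTC on January 4.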

Prohibition on High-Risk Deployment

While the final version removed the original deployment ban, developers are still prohibited from deploying models that pose an "unreasonable risk of critical harm" based on their testing and safety evaluations.27 The law also bans the use of knowledge distillation techniques to create smaller models that mimic the dangerous capabilities of larger frontier models.28

Enforcement and Oversight

Department of Financial Services Oversight Office

The RAISE Act creates a new oversight office within the New York Department of Financial Services (NYDFS) to implement and enforce the legislation.29 This office is responsible for:

  • Evaluating large developers and maintaining a public list of entities subject to the law
  • Assessing fees on covered developers to fund oversight activities
  • Issuing regulations and guidance on compliance requirements
  • Publishing annual reports on AI safety in New York, including information about incidents, compliance, and emerging risks
  • Broad rulemaking authority to require additional disclosures or safety measures as AI technology evolves30

The choice of NYDFS reflects the department's established expertise in cybersecurity enforcement, particularly through its aggressive implementation of Part 500 cybersecurity regulations for financial institutions.31

Attorney General Enforcement

The New York Attorney General has exclusive enforcement authority under the RAISE Act, with no private right of action for individuals or organizations.32 The AG can:

  • Bring civil actions against non-compliant developers for failing to comply with reporting obligations or for making false statements
  • Seek injunctive relief to prevent deployment of dangerous models
  • Impose civil penalties of up to $1 million for first violations and $3 million for subsequent violations33

Developers may defend against enforcement actions by demonstrating that critical harm was caused by third-party misuse rather than inherent model deficiencies.33

Whistleblower Protections

The Act includes whistleblower protections for employees who report safety concerns or violations to state authorities.34 It also voids contractual provisions that would shift liability away from developers or attempt to structure corporate entities in bad faith to evade the law's requirements, allowing courts to pierce the corporate veil in such cases.35

Relationship to AI Safety

The RAISE Act directly addresses several core concerns in AI safety research and policy:

Catastrophic Risk Mitigation

The legislation's focus on "critical harm"—including biological weapons, large-scale damage, and autonomous dangerous behavior—aligns with long-standing concerns about catastrophic risks from advanced AI systems.36 By requiring developers to proactively assess and mitigate risks before deployment, the law attempts to prevent scenarios where AI capabilities enable unprecedented harm.

The Act's specific mention of risks like self-replication and deception reflects emerging technical concerns about AI systems that could resist human control or pursue goals contrary to human values.37 Legislative memos supporting the bill cited industry testing that revealed models exhibiting these concerning capabilities, providing empirical justification for regulatory intervention.38

Transparency and Accountability

By mandating publication of safety protocols and requiring incident reporting, the RAISE Act addresses the opacity problem in frontier AI development.39 Many leading AI companies had made voluntary commitments to safety practices, but the law makes these commitments legally enforceable and subject to independent verification through third-party audits.40

The transparency requirements enable state authorities, researchers, and the public to better understand what safety measures are actually being implemented by frontier AI developers, rather than relying solely on corporate assurances.

Limitations for Alignment Research

While the RAISE Act establishes important safety requirements, it does not directly fund or mandate technical AI alignment research. The law imposes transparency and disclosure obligations—including making safety and security protocols available to relevant authorities—and mandates annual safety reviews and independent third-party audits. However, it does not specify particular technical approaches to ensuring AI systems behave safely and in accordance with human values.41

The Act's effectiveness therefore depends substantially on the state of the art in AI safety research—if effective methods for preventing catastrophic AI risks do not exist or remain uncertain, compliance with the law's procedural requirements may not guarantee safety outcomes.

Comparison to California's TFAIA

The RAISE Act was explicitly amended to align with California's Transparency in Frontier AI Act (TFAIA, formerly SB 53), which was enacted in September 2025.42 Both laws target frontier AI developers with similar transparency and reporting requirements, but with notable differences:

Similarities

  • Both apply to developers of frontier models based on compute thresholds (though exact definitions vary)
  • Both require written safety protocols and incident reporting
  • Both establish civil penalties for non-compliance (California caps at $1 million per violation)43
  • Both are enforced by state attorneys general with no private right of action

Key Differences

  • Reporting timeline: New York requires 72-hour incident reporting, while California allows 15 days for general incidents and 24 hours for imminent harm44
  • Oversight structure: New York creates a dedicated office in the Department of Financial Services with broad rulemaking authority, while California has a different implementation structure45
  • Revenue threshold: New York's final amendments included a $500 million revenue threshold not present in early versions46

The amendments to bring New York's law closer to California's approach reflect a stated goal of creating a "unified benchmark" among major technology states, rather than imposing conflicting requirements on AI developers.47

Criticisms and Controversies

Industry Concerns

The RAISE Act's focus on regulating the development process rather than post-deployment harms has drawn criticism from some industry groups and commentators. Critics have compared it to California's failed SB 1047 (vetoed by Governor Newsom in September 2024), arguing that mandating pre-deployment safety protocols, audits, and testing imposes high compliance burdens on AI companies without proven safety benefits.48

Some critics contend that attempting to regulate transparency, safety, and liability in a single framework creates a problematic concentration of authority in a single regulator (the Department of Financial Services).49 They argue this approach lacks the specialization and nuance needed for effective AI governance.

Weakening Through Amendments

AI safety advocates and some legislators viewed the post-passage amendments as significantly weakening the law's effectiveness. The removal of the deployment ban for high-risk models and the reduction of penalties from $10 million/$30 million to $1 million/$3 million were seen as industry-influenced concessions that reduced the law's deterrent effect.50

The shift in focus from prohibition to transparency and reporting led some supporters of the original bill to characterize the final version as more of a disclosure regime than a robust safety framework.51

Federal Preemption Concerns

The RAISE Act was signed just days after President Trump issued a December 2025 executive order titled "Ensuring a National Policy Framework for Artificial Intelligence," which challenged state AI laws and called for federal preeminence in AI regulation.52 This raises questions about whether New York's law could face federal legal challenges on preemption grounds.

Legal analysts have noted that the Act may also face First Amendment challenges based on "compelled speech" doctrines, as it requires developers to publish information about their safety protocols and practices.53 The ultimate constitutional status of these requirements remains uncertain pending potential litigation.

Limited Scope

By focusing on the largest AI companies that have spent over $100 million in computational resources to train advanced AI models, the RAISE Act targets only the most urgent, severe risks.54 This means many AI systems outside these thresholds are not covered by the law's requirements, and dangerous capabilities could potentially emerge from smaller models, fine-tuned systems, or open-source projects that fall below the law's criteria.

Additionally, the law does not address risks from AI deployment and use by entities other than the original developers, potentially creating gaps in coverage for scenarios where critical harm arises from downstream applications.

Implementation and Timeline

The RAISE Act takes effect January 1, 2027, giving developers approximately one year from the signing date to establish compliance programs.55 Legal analysts have advised companies potentially subject to the law to begin preparation immediately, including:

  • Reviewing existing AI governance structures and safety practices
  • Establishing cross-functional teams spanning legal, technical development, and incident response functions
  • Developing protocols for the 72-hour incident reporting requirement
  • Identifying which models meet the frontier model definition and compute thresholds56

A DFS oversight office is expected to be established through chapter amendments to evaluate large frontier developers and promote transparency, with those amendments set to be enacted in January 2026 to clarify compliance expectations before the effective date.57 As of early 2026, the final text incorporating all chapter amendments had not yet been fully published, creating some uncertainty about precise requirements.58

Significance for AI Policy

The RAISE Act positions New York as the second state after California to enact comprehensive frontier AI safety legislation, establishing what supporters characterize as a "unified benchmark" for AI regulation among major technology states.59 In the absence of federal legislation specifically addressing catastrophic AI risks, state-level efforts like the RAISE Act represent the primary governance framework for frontier AI development in the United States.

The law's enactment demonstrates that bipartisan legislative support exists for AI safety regulation, at least at the state level, despite industry lobbying and concerns about economic competitiveness.60 The strong public support noted by sponsors (84% of New Yorkers) suggests that AI risk concerns resonate with voters beyond the AI safety research community.61

Whether the RAISE Act effectively reduces catastrophic AI risks will depend on multiple factors: the quality of safety protocols developers implement, the rigor of third-party audits, the enforcement priorities and resources of the Attorney General and oversight office, and ultimately whether the current state of AI safety research provides adequate methods for preventing the critical harms the law seeks to address.

Key Uncertainties

Several important questions about the RAISE Act remain unresolved:

  • Will federal preemption challenges succeed? The relationship between state AI safety laws and federal authority remains legally uncertain, particularly following the December 2025 executive order.
  • How will "unreasonable risk of critical harm" be interpreted? The law's prohibition on deploying high-risk models depends on this undefined standard, which may be clarified through regulatory guidance or enforcement actions.
  • Will other states follow suit? If New York and California's approach becomes a template for other states, AI developers could face a complex patchwork of requirements; alternatively, state coordination could create de facto national standards.
  • Can third-party auditors effectively assess frontier AI risks? The law assumes independent auditors can meaningfully evaluate cutting-edge AI systems for catastrophic risks, but this capability may not currently exist at scale.
  • What enforcement priorities will emerge? With limited resources and many potential areas of focus, the Attorney General's enforcement decisions will substantially shape the law's practical impact.

Sources

Footnotes

  1. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  2. Jones Walker - New York's RAISE Act: What Frontier Model Developers Need to Know
  3. NY Governor's Office - Governor Hochul Signs Nation-Leading Legislation
  4. NY Assembly - Assemblymember Bores Statement
  5. Morrison Foerster - New York Enacts the RAISE Act
  6. NY Assembly - Assemblymember Bores Statement
  7. NY Assembly - Assemblymember Bores Statement
  8. NY Assembly - Assemblymember Bores Statement
  9. NY Governor's Office - Governor Hochul Signs Nation-Leading Legislation
  10. Citation rc-ee32
  11. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  12. Future of Privacy Forum - The RAISE Act vs. SB 53: A Tale of Two Frontier AI Laws
  13. Morrison Foerster - New York Enacts the RAISE Act
  14. NY Governor's Office - Governor Hochul Signs Nation-Leading Legislation
  15. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  16. Jones Walker - New York's RAISE Act: What Frontier Model Developers Need to Know
  17. Morrison Foerster - New York Enacts the RAISE Act
  18. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  19. Harris Beach - New York's RAISE Act's Implications for AI Companies
  20. NY Assembly - Assemblymember Bores Statement
  21. Best Law Firms - New York's RAISE Act's Implications for AI
  22. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  23. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  24. Skadden - New York Enacts AI Transparency Law
  25. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  26. Hunton Privacy - New York Passes the Responsible AI Safety and Education Act
  27. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  28. Hunton Privacy - New York Passes the Responsible AI Safety and Education Act
  29. Skadden - New York Enacts AI Transparency Law
  30. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  31. Jones Walker - New York's RAISE Act: What Frontier Model Developers Need to Know
  32. Morrison Foerster - New York Enacts the RAISE Act
  33. Morrison Foerster - New York Enacts the RAISE Act
  34. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  35. Harris Beach - New York's RAISE Act's Implications for AI Companies
  36. NY Assembly - Assemblymember Bores Statement
  37. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  38. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  39. Skadden - New York Enacts AI Transparency Law
  40. NY Assembly - Assemblymember Bores Statement
  41. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  42. Future of Privacy Forum - The RAISE Act vs. SB 53: A Tale of Two Frontier AI Laws
  43. Future of Privacy Forum - The RAISE Act vs. SB 53: A Tale of Two Frontier AI Laws
  44. Alston Privacy - New York Regulates Large Artificial Intelligence Models
  45. Morrison Foerster - New York Enacts the RAISE Act
  46. Skadden - New York Enacts AI Transparency Law
  47. Morrison Foerster - New York Enacts the RAISE Act
  48. Progress Chamber - Attack of the Clones: CA SB 1047 & AI RAISE
  49. American Enterprise Institute - Why New York's New AI Legislation is Problematic
  50. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  51. Future of Privacy Forum - The RAISE Act vs. SB 53: A Tale of Two Frontier AI Laws
  52. Truyo - New York's RAISE Act and the Future of U.S. AI Governance
  53. Davis Wright Tremaine - New York RAISE Act: AI Safety Rules for Developers
  54. NY Assembly - Assemblymember Bores Statement
  55. Nelson Mullins - New York Laws RAISE the Bar in Addressing AI Safety
  56. Best Law Firms - New York's RAISE Act's Implications for AI
  57. Jones Walker - New York's RAISE Act: What Frontier Model Developers Need to Know
  58. Morrison Foerster - New York Enacts the RAISE Act
  59. Morrison Foerster - New York Enacts the RAISE Act
  60. NY Assembly - Assemblymember Bores Statement
  61. NY Assembly - Assemblymember Bores Statement


Related Wiki Pages

Top Related Pages

Organizations

Frontier Model Forum · Redwood Research

Risks

AI Proliferation

Approaches

Evals-Based Deployment Gates · AI Lab Safety Culture

Analysis

US State AI Legislation Landscape · AI Regulatory Capacity Threshold Model

Concepts

EA Longtermist Wins Losses · Model Registries · Large Language Models · Governance-Focused Worldview

Policy

Colorado Artificial Intelligence Act · Texas Responsible AI Governance Act (TRAIGA) · AI Whistleblower Protections · Stop Stealing Our Chips Act · China AI Regulatory Framework

Key Debates

AI Governance and Policy · Government Regulation vs Industry Self-Governance

Other

Geoffrey HintonYoshua Bengio