AI LEAD Act (AI Liability Standards) | Bipartisan bill introduced September 29, 2025, establishing product liability standards for AI systems. Sponsored by Senators Durbin (D-IL) and Hawley (R-MO). Referred to the Committee on the Judiciary. | United States | Sep 29, 2025 | proposed |
AI Risk Evaluation Act | Bipartisan bill introduced September 29, 2025 by Senators Hawley (R-MO) and Blumenthal (D-CT). Would require DOE to establish a mandatory AI evaluation program with adversarial testing. Notable for its $1M/day non-compliance penalty and focus on dangerous capabilities. 7-year program duration. Referred to the Commerce Committee. | United States | Sep 29, 2025 | proposed |
AI Safety Institutes (AISIs) | Government-run institutions dedicated to evaluating frontier AI systems for dangerous capabilities and safety properties. Pioneered by the UK AISI in 2023, with analogues in the US (USAISI), EU, Japan, and others. Play a key role in pre-deployment evaluations and responsible scaling policy thresholds. | International | Nov 2023 | in-effect |
AI Whistleblower Protections | Legal and institutional frameworks for protecting AI researchers and employees who report safety concerns. The bipartisan AI Whistleblower Protection Act (S.1792) introduced May 2025 addresses critical gaps in current law, while EU AI Act Article 87 provides protections from August 2026. | United States | May 2025 | proposed |
Arkansas AI Laws (HB 1958 & HB 1876) | Two AI laws signed by Governor Sarah Huckabee Sanders. HB 1958 requires public entities to develop AI use policies. HB 1876 establishes that individuals who provide input to generative AI own the output (subject to copyright). Notable for being one of the first state laws addressing AI-generated content ownership. | Arkansas | 2025 | enacted |
Artificial Intelligence and Data Act (AIDA) | The Artificial Intelligence and Data Act (AIDA) was Canada's proposed federal AI legislation, introduced as Part 3 of Bill C-27 (the Digital Charter Implementation Act, 2022). Despite years of debate and amendment, the bill died on the order paper when Parliament was dissolved in January 2025. | Canada | Jun 16, 2022 | failed |
Bletchley Declaration | The first international agreement on AI safety, signed by 28 countries at the November 2023 AI Safety Summit at Bletchley Park, committing signatories to cooperation on frontier AI risks. Follow-up summits in Seoul (May 2024) and Paris (February 2025) expanded commitments. | International | Nov 1, 2023 | in-effect |
California SB 243: Companion Chatbot Regulation | Regulates AI companion chatbots in California. Requires operators to disclose when users interact with AI rather than humans. Applies to systems designed to meet users' social needs. Part of California's continued leadership in state AI regulation despite federal preemption pressure. | California | 2025 | enacted |
California SB 53 | California's Transparency in Frontier Artificial Intelligence Act (TFAIA), the first U.S. state law regulating frontier AI models. Requires large developers (>$500M revenue, models >10^26 FLOPs) to publish annual safety frameworks, report critical safety incidents within 15 days (24 hours for imminent danger), and protects whistleblowers. Also creates CalCompute, a public computing cluster for AI research. Signed September 29, 2025 by Governor Newsom after he vetoed the more ambitious SB 1047 a year earlier. Effective January 1, 2026. Civil penalties up to $1M per violation, enforced by the California Attorney General. | California | 2025 | enacted |
China AI Regulatory Framework | China has developed one of the world's most comprehensive AI regulatory frameworks through a series of targeted regulations addressing specific AI applications and risks. Unlike the EU's comprehensive AI Act, China's approach is iterative and sector-specific, with new rules issued as technologies emerge. | China | 2021 | in-effect |
Chip Security Act | Bipartisan legislation mandating location verification and security mechanisms on export-controlled AI chips. Requires the Secretary of Commerce to issue standards for "chip security mechanisms" within 180 days of enactment. Uses ping-based delay measurement (not GPS) to verify chip locations. Companies must notify BIS if chips are found in unauthorized locations. Explicitly prohibits kill switches and geofencing that would hinder chip functionality. Motivated by an estimated 140,000 chips worth $5-7 billion smuggled to China in 2024. | United States | May 8, 2025 | proposed |
Colorado Artificial Intelligence Act | The Colorado AI Act (SB 24-205) is the first comprehensive AI regulation enacted by a US state, taking a risk-based approach to regulating high-risk AI systems, in contrast to California's SB 1047, which used a size-based threshold for frontier models. Introduced April 10, 2024 by Senator Robert Rodriguez and signed by Governor Jared Polis on May 17, 2024, the law covers AI systems making consequential decisions across eight domains including employment, healthcare, and lending. Polis signed with a public letter expressing concerns that the law was too broad and could stifle innovation. Enforcement was originally set for February 1, 2026 but was delayed to June 30, 2026 after tech industry lobbying during an August 2025 special session (SB 25B-004). The Colorado Attorney General has exclusive enforcement authority with penalties up to $20,000 per violation. President Trump's December 2025 executive order challenging state AI regulations adds uncertainty to the law's implementation. | Colorado | Apr 10, 2024 | enacted |
Council of Europe Framework Convention on Artificial Intelligence | The Council of Europe's AI Framework Convention is the first legally binding international AI treaty, establishing human rights-focused governance principles across 57+ countries. While historically significant, it excludes national security applications and its practical enforcement mechanisms remain weak. | International | 2024 | enacted |
DEFIANCE Act | Bipartisan bill creating a federal civil cause of action for victims of nonconsensual sexually explicit deepfakes. Sponsored by Senators Durbin (D-IL) and Graham (R-SC). Passed Senate unanimously January 13, 2026. Held at desk in the House; not yet law. Complements the TAKE IT DOWN Act (criminal penalties) with civil remedies. | United States | 2025 | pending |
EO: Ensuring a National Policy Framework for AI (State Preemption) | Executive order signed December 11, 2025 directing federal agencies to challenge state AI laws deemed inconsistent with federal policy. Does NOT itself overturn state law — only Congress or courts can do that. Established DOJ AI Litigation Task Force (Jan 9, 2026). Triggered bipartisan opposition: 36 state AGs formed coalition, governors of CA/CO/NY issued defiant statements, Congress had twice rejected preemption provisions. | United States | Dec 11, 2025 | enacted |
EU AI Act | The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. Approved by the European Parliament in March 2024 (523 in favor, 46 against, 49 abstentions), it establishes a risk-based approach with four tiers: unacceptable (banned), high-risk (strict obligations), limited risk (transparency), and minimal risk (voluntary codes). The Act introduces a two-tier framework for general-purpose AI (GPAI) models, with enhanced requirements for models posing systemic risk (trained with >10^25 FLOPs). Implementation is phased: prohibitions took effect February 2025, GPAI rules in August 2025, and high-risk obligations apply from August 2026. Fines up to 7% of global revenue for violations. | European Union | Apr 2021 | enacted |
Executive Order 14179: Removing Barriers to American Leadership in AI | First major AI executive order of the Trump second term, signed January 23, 2025. Revokes Biden's comprehensive AI executive order (EO 14110) and replaces it with a deregulatory, pro-innovation framework. Mandated a 180-day AI Action Plan and review of all existing AI policies. The resulting Action Plan ("Winning the Race") was released July 23, 2025, accompanied by three additional executive orders on data center permitting, unbiased AI principles, and scientific AI acceleration. | United States | Jan 23, 2025 | enacted |
Illinois HB 3773: AI Employment Discrimination | Amends the Illinois Human Rights Act to prohibit employer use of AI systems that discriminate against protected classes in employment decisions. One of the first state laws specifically targeting AI-driven employment discrimination. | Illinois | 2025 | enacted |
Indiana AI Healthcare Disclosure | Requires healthcare professionals and insurers to disclose AI use in healthcare decisions and patient communications. One of the first state laws targeting AI transparency in healthcare specifically. | Indiana | 2025 | enacted |
Kentucky SB 4: AI Governance | Directs Kentucky's Commonwealth Office of Technology to develop policy standards for AI use across state government. Part of the wave of state-level AI governance laws in 2025. | Kentucky | 2025 | enacted |
National AI Legislative Framework (White House, March 2026) | Legislative blueprint released March 20, 2026. Seven sections covering child safety, energy/infrastructure, intellectual property, free speech, innovation, workforce, and federal preemption. Urges Congress to preempt state AI laws and adopt "light-touch" regulatory approach. Leaves AI/copyright fair-use questions to courts. Not law itself — a framework for congressional action. | United States | Mar 20, 2026 | proposed |
New York RAISE Act | New York's Responsible AI Safety and Education (RAISE) Act, often described as a successor to California's vetoed SB 1047, requires safety protocols, incident reporting, and transparency from developers of frontier AI models. Passed both chambers of the New York Legislature on June 12, 2025 with strong bipartisan support (58-1 in the Senate, 119-22 in the Assembly). Signed by Governor Hochul on December 19, 2025 after chapter amendments reduced civil penalties from $10M/$30M to $1M/$3M. Effective January 1, 2027. | New York | Mar 2025 | enacted |
NIST AI Risk Management Framework (AI RMF) | The NIST AI Risk Management Framework (AI RMF) is a voluntary guidance document developed by the National Institute of Standards and Technology to help organizations manage risks associated with AI systems. Version 1.0, released January 2023, organizes risk management into four core functions: Govern, Map, Measure, and Manage. | United States | Jan 2023 | in-effect |
Paris AI Action Summit (February 2025) | International AI summit held February 10-11, 2025 at the Grand Palais, Paris. 61 countries signed a declaration on inclusive AI. Notable for US and UK refusal to sign. Launched the Current AI foundation with a $400M endowment. Anthropic CEO Dario Amodei called it a "missed opportunity" for AI safety. | International | Feb 10, 2025 | in-effect |
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act | SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was California state legislation that would have required safety testing and liability measures for developers of the most powerful AI models. Passed the legislature with strong majorities (Senate 32-1, Assembly 48-16) but was vetoed by Governor Newsom in September 2024, who cited concerns about regulating based on model size rather than risk. | California | Feb 2024 | vetoed |
Seoul Declaration on AI Safety | The Seoul AI Safety Summit (May 21-22, 2024) was the second in a series of international AI safety summits, following the Bletchley Park Summit in November 2023. It produced the Seoul Declaration on safe, innovative, and inclusive AI, and secured voluntary Frontier AI Safety Commitments from 16 leading AI companies. | International | May 22, 2024 | in-effect |
Singapore Consensus on AI Safety Research Priorities | Consensus document from the 2025 Singapore Conference on AI (SCAI), authored by 88 researchers from 11 countries, organizing AI safety research into a defence-in-depth framework across three areas: Assessment, Development, and Control. | International | 2025 | in-effect |
Stop Stealing Our Chips Act | Creates a whistleblower incentive program at the Bureau of Industry and Security (BIS) modeled after the SEC Whistleblower Incentive Program. Whistleblowers receive 10-30% of collected fines for original information leading to penalties exceeding $1 million. Anonymous reporting permitted with retaliation protections. Funded via an "Export Compliance Accountability Fund" financed by collected fines (no new appropriations). Aims to deter export control violations through market-powered enforcement. | United States | Apr 10, 2025 | proposed |
TAKE IT DOWN Act | First federal law specifically addressing AI-generated deepfakes, signed May 19, 2025. Criminalizes nonconsensual intimate images (including AI deepfakes) and requires platforms to remove them within 48 hours. Passed with overwhelming bipartisan support (House 409-2, Senate unanimous). | United States | Jan 2025 | enacted |
Tennessee AI Mental Health Professionals Ban | Prohibits AI systems from representing themselves as qualified mental health professionals. Passed Tennessee Senate unanimously. Responds to concerns about AI companion chatbots providing mental health advice without qualification. | Tennessee | 2025 | enacted |
Texas Responsible AI Governance Act (TRAIGA) | Texas TRAIGA is a state-level AI regulation focused on intent-based liability for harmful AI practices rather than comprehensive safety requirements. Passed with near-unanimous support (House 146-3, Senate unanimous). Creates enforcement mechanisms and a regulatory sandbox but avoids prescriptive technical safety standards. | Texas | Dec 2024 | enacted |
TRUMP AMERICA AI Act (Blackburn Discussion Draft) | Discussion draft released March 18, 2026 by Senator Blackburn (R-TN). Ambitious attempt at a comprehensive federal AI framework combining children's safety (KOSA), digital likeness (NO FAKES), copyright (anti-fair-use for training), and broad federal preemption of state laws. Not yet formally introduced as a bill. Released two days before the White House National AI Legislative Framework. | United States | Mar 18, 2026 | proposed |
UN General Assembly AI Governance Resolution (August 2025) | UN General Assembly resolution adopted August 26, 2025 establishing two new AI governance bodies: the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance. The US explicitly rejected "centralized control and global governance" of AI at the UN Security Council. China supported the framework, aligned with developing countries. | International | Aug 26, 2025 | enacted |
US AI Chip Export Controls | The United States has implemented unprecedented export controls on advanced semiconductors and semiconductor manufacturing equipment, primarily targeting China. These controls represent one of the most significant attempts to constrain AI development through hardware governance. | United States | Oct 2022 | in-effect |
US Executive Order on Safe, Secure, and Trustworthy AI | Executive Order 14110 on Safe, Secure, and Trustworthy AI, signed by President Biden on October 30, 2023, was the most comprehensive US government action on AI governance. The 111-page directive established mandatory compute-based reporting thresholds (10^26 FLOP, the same threshold later used in California's SB 1047), created the US AI Safety Institute at NIST, and placed 150+ requirements on 50+ federal entities. By the time President Trump revoked it on January 20, 2025, approximately 85% of its requirements had been completed. | United States | Oct 30, 2023 | revoked |
Utah SB 226: AI Consumer Protection Amendments | Signed March 2025, effective May 7, 2025. Refines Utah's AI transparency framework by narrowing disclosure requirements to explicit consumer requests and high-risk interactions. Represents a lighter-touch approach compared to Colorado and California AI regulation. | Utah | Jan 2025 | enacted |
Voluntary AI Safety Commitments | In July 2023, the White House secured voluntary commitments from leading AI companies on safety, security, and trust. These commitments represent the first coordinated industry-wide AI safety pledges, establishing baseline practices for frontier AI development. | United States | Jul 2023 | in-effect |
Washington AI Chatbot Safety Bills (SB 5984 / HB 2225) | Companion bills advancing through both chambers of the Washington legislature. HB 2225 passed House 69-28; SB 5984 passed Senate and advanced through House committee. Both address AI chatbot safety with disclosure requirements. | Washington | 2025 | pending |