Longterm Wiki

National Conference of State Legislatures

web

A regularly updated tracker from NCSL, useful for monitoring the fragmented U.S. state-level AI regulatory landscape; relevant for governance researchers studying how policy develops outside federal action.

Metadata

Importance: 55/100 · organizational report · reference

Summary

The National Conference of State Legislatures tracks AI-related legislation across all 50 U.S. states and territories in 2025, documenting approximately 100 enacted measures covering IP ownership of AI content, critical infrastructure safety, worker protections, automated decision-making transparency, and restrictions on harmful AI uses. This resource provides a comprehensive overview of the fragmented but rapidly expanding state-level regulatory landscape for AI governance.

Key Points

  • All 50 states plus U.S. territories introduced AI legislation in 2025; 38 states enacted ~100 measures as of July 2025.
  • Key legislative themes: IP ownership of AI-generated content, critical infrastructure risk management, worker displacement protections, and automated decision transparency.
  • New York requires state agencies to publish inventories of automated decision-making tools and prohibits AI from overriding collective bargaining agreements.
  • Montana's 'Right to Compute' law mandates AI risk management policies for critical infrastructure aligned with NIST AI RMF standards.
  • State laws address harmful AI uses including AI-powered stalking/harassment (North Dakota) and AI impersonating licensed medical professionals (Oregon).

Cited by 1 page

Page                              | Type     | Quality
US State AI Legislation Landscape | Analysis | 70.0

Cached Content Preview

HTTP 200 · Fetched Mar 15, 2026 · 98 KB
Artificial Intelligence 2025 Legislation

Updated July 10, 2025

Related Topic: Technology

 AI—the use of computer systems to perform tasks that normally require human intelligence, such as learning and decision-making—has the potential to spur innovation and transform industry and government. As AI advances and adoption of these tools increases, government, business and the public are exploring the risks and benefits of using these systems for different applications.

 State governments across the country are beginning to use, or examining how to use, AI to improve government services, such as enhancing customer service, improving health care facility inspections, and improving roadway safety. Legislators, industry and other stakeholders have engaged in robust discussions about concerns over potential misuse or unintended consequences of AI.

 States have introduced growing numbers of AI-related bills over the last few years. In the 2025 legislative session, all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C., introduced legislation on the topic. Thirty-eight states adopted or enacted around 100 measures. Examples of those actions include:

 
 
 Arkansas enacted legislation that clarifies who owns AI-generated content: the person who provides the data or input to train a generative AI model, or the employer if the content is generated as part of employment duties. The new law specifies that the generated content must not infringe on existing copyright or intellectual property rights.

 
 Montana's new "Right to Compute" law sets requirements for critical infrastructure controlled by an AI system, such as requiring the deployer to develop a risk management policy that considers guidance from specified standards, including the latest version of the AI Risk Management Framework from the National Institute of Standards and Technology. The new law also specifies that the government cannot restrict the ability to privately own or use computational resources for lawful purposes unless the restriction is necessary to fulfill a compelling government interest.

 
 New Jersey adopted a resolution urging generative AI companies to make voluntary commitments regarding employee whistleblower protections.

 
 New York ena

... (truncated, 98 KB total)
Resource ID: 1e8d4f5f6cea8c36 | Stable ID: OTBmZmY5OG