Back
Colorado AI Act (SB 24-205)
government · leg.colorado.gov · leg.colorado.gov/bills/sb24-205
This is the official legislative text for Colorado's SB 24-205, one of the first US state laws to regulate high-risk AI systems; highly relevant for AI governance researchers and practitioners navigating emerging domestic AI compliance requirements.
Metadata
Importance: 68/100 · legislation · primary source
Summary
Colorado's SB 24-205 is a landmark state-level AI governance law establishing consumer protection duties for high-risk AI systems, effective February 1, 2026. It creates obligations for both AI developers and deployers to prevent algorithmic discrimination, conduct impact assessments, and provide consumers with transparency and appeal mechanisms for consequential AI-driven decisions.
Key Points
- Applies to any business operating in Colorado that deploys high-risk AI systems, covering both developers and deployers in the AI supply chain.
- Requires deployers to conduct algorithmic impact assessments, implement risk management policies, and notify consumers when AI makes consequential decisions.
- Mandates human-review appeal mechanisms, giving consumers recourse against adverse AI-driven outcomes in high-stakes contexts.
- Developers must disclose system information, provide impact assessment documentation, and report known risks to Colorado's attorney general.
- Represents one of the first comprehensive US state-level AI laws, modeled loosely on the EU AI Act's risk-based framework.
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Colorado Artificial Intelligence Act | Policy | 53.0 |
| Failed and Stalled AI Proposals | Analysis | 63.0 |
Cached Content Preview
HTTP 200 · Fetched Mar 15, 2026 · 14 KB
SB24-205 Consumer Protections for Artificial Intelligence | Colorado General Assembly
SB24-205
Consumer Protections for Artificial Intelligence
Type
Bill
Session
2024 Regular Session
Subjects
Business & Economic Development
Labor & Employment
Telecommunications & Information Technology
Concerning consumer protections in interactions with artificial intelligence systems.
Recent Bill (PDF)
Recent Fiscal Note (PDF)
Bill Summary:
On and after February 1, 2026, the act requires a developer of a high-risk artificial intelligence system (high-risk system) to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination in the high-risk system. There is a rebuttable presumption that a developer used reasonable care if the developer complied with specified provisions in the act, including:
- Making available to a deployer of the high-risk system a statement disclosing specified information about the high-risk system;
- Making available to a deployer of the high-risk system information and documentation necessary to complete an impact assessment of the high-risk system;
- Making a publicly available statement summarizing the types of high-risk systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer, and how the developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of each of these high-risk systems; and
- Disclosing to the attorney general and known deployers or other developers of the high-risk system any known or reasonably foreseeable risks of algorithmic discrimination, within 90 days after the discovery or receipt of a credible report from the deployer, that the high-risk system has caused or is reasonably likely to have caused.
The act also, on and after February 1, 2026, requires a deployer of a high-risk system to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination in the high-risk system. There is a rebuttable presumption that a deployer used reasonable care if the deployer complied with specified provisions in the act, including:
- Implementing a risk management policy and program for the high-risk system;
- Completing an impact assessment of the high-risk system;
- Annually reviewing the deployment of each high-risk system deployed by the deployer to ensure that the high-risk system is not causing algorithmic discrimination;
- Notifying a consumer of specified items if the high-risk system makes, or will be a substantial factor in making, a consequential decision concerning the consumer;
- Providing a consumer with an opportunity to correct an
... (truncated, 14 KB total)
Resource ID: 886f3fed50ae776d | Stable ID: MjRiNDAzMj