Longterm Wiki

Practical compliance guide for the Colorado AI Act, relevant to organizations deploying high-risk AI systems in the US; useful context for AI governance and regulatory accountability discussions.

Metadata

Importance: 42/100 · guidance document · reference

Summary

This TrustArc article explains the Colorado AI Act (effective February 1, 2026), which imposes obligations on developers and deployers of high-risk AI systems to prevent algorithmic discrimination in consequential decisions. It details transparency, notification, and impact assessment requirements for organizations operating AI in sectors like finance, healthcare, and education.

Key Points

  • The Colorado AI Act targets algorithmic discrimination in consequential decisions affecting employment, housing, healthcare, and financial services.
  • Both developers and deployers must take reasonable care to protect consumers, including providing documentation on training data, risks, and evaluation methods.
  • Developers and deployers must publish public website statements by February 1, 2026 describing high-risk AI systems deployed and how risks are managed.
  • Impact assessments are required, covering intended use cases, foreseeable limitations, data governance, and potential harms.
  • Non-compliance may constitute unfair or deceptive trade practices under Colorado law.

Cited by 1 page

| Page | Type | Quality |
|------|------|---------|
| Colorado Artificial Intelligence Act | Policy | 53.0 |

Cached Content Preview

HTTP 200 · Fetched Mar 20, 2026 · 13 KB

**Articles**

# Colorado AI Act: New Obligations for High-Risk AI Systems

[**Obehi Okonofua**, Privacy Knowledge Lead, Controls Library, TrustArc](https://trustarc.com/people/obehi-okonofua/)

As the use of AI grows in sectors such as [finance](https://trustarc.com/resource/privacy-challenges-fintech/), healthcare, and education, the potential for algorithmic discrimination has increased. With this growth comes the responsibility to ensure that these technologies operate in a fair and equitable manner.

One of the laws designed to accomplish this is the Colorado AI Act, which aims to protect consumers from algorithmic discrimination, particularly when AI is used in consequential decision-making, and outlines the obligations of developers and deployers of high-risk AI systems. The [Colorado AI Act](https://leg.colorado.gov/sites/default/files/2024a_205_signed.pdf) emphasizes the importance of mitigating these risks, especially when decisions made by AI can significantly affect someone's life, such as in hiring processes, loan approvals, or access to essential services.

Central to the Colorado AI Act are the concepts of algorithmic discrimination and consequential decisions.

**Algorithmic discrimination** occurs when an AI system leads to unjust or illegal treatment of individuals or groups based on various characteristics such as age, race, gender, or disability.

**Consequential decisions**, on the other hand, are decisions that have a material legal or similarly significant effect on the denial or provision of services and opportunities to consumers, such as access to education, employment opportunities, housing, healthcare, financial or insurance services, government services, or legal services.

## How can organizations ensure compliance with the Colorado AI Act?

The Colorado AI Act will become effective on February 1, 2026, and organizations must align their practices with the principles contained in the Act by this date to avoid engaging in unfair or deceptive trade practices.

Both developers and deployers have a duty to take reasonable care to protect consumers from algorithmic discrimination arising from the intended or contracted use of the high-risk AI system. To this end, deployers and developers must comply with disclosure and notification requirements, and an impact assessment must be conducted where required.
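The statute lists the elements an impact assessment must cover (intended use cases, foreseeable limitations, data governance, potential harms) but does not prescribe a format. As a minimal sketch of how a deployer might track those required elements internally, the following Python dataclass uses field names invented for this example; the Act itself defines no such schema:

```python
from dataclasses import dataclass, field


@dataclass
class ImpactAssessment:
    """Illustrative internal record of impact-assessment elements.

    Field names are this sketch's own invention; the Colorado AI Act
    lists required content but does not define a data schema.
    """
    system_name: str
    intended_use_cases: list[str]
    foreseeable_limitations: list[str]
    data_governance_measures: list[str]
    potential_harms: list[str]
    risk_mitigations: list[str] = field(default_factory=list)

    def missing_elements(self) -> list[str]:
        """Return the names of required elements still left empty."""
        required = {
            "intended_use_cases": self.intended_use_cases,
            "foreseeable_limitations": self.foreseeable_limitations,
            "data_governance_measures": self.data_governance_measures,
            "potential_harms": self.potential_harms,
        }
        return [name for name, value in required.items() if not value]


# Hypothetical usage: flag an assessment that is not yet complete.
draft = ImpactAssessment(
    system_name="resume-screening model",
    intended_use_cases=["initial screening of job applications"],
    foreseeable_limitations=[],
    data_governance_measures=["access controls on training data"],
    potential_harms=[],
)
print(draft.missing_elements())
```

A checklist like this does not itself satisfy the Act; it only illustrates that the required assessment content is enumerable and can be audited for completeness before the February 1, 2026 deadline.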

### What are the transparency and notification requirements?

#### Documentation to be provided to deployers

Developers of a high-risk AI system are required to provide deployers of the system with the following:

- a general statement describing the expected uses and potential harmful or inappropriate uses of the high-risk artificial intelligence system;
- documentation disclosing high-level summaries of the training data and known or foreseeable risks and benefits;
- documentation descri

... (truncated, 13 KB total)
Resource ID: 4144deb9ed0c51f4 | Stable ID: YjQ2M2U5OW