Effective June 30, 2026

Colorado AI Act

Senate Bill 24-205 - Concerning Consumer Protections in Interactions with Artificial Intelligence Systems

Colorado's pioneering AI regulation requires comprehensive risk management, impact assessments, and consumer protections for high-risk artificial intelligence systems. It is the first comprehensive state-level AI law in the US and is serving as a model for other states.

Understanding the Colorado AI Act

America's First Comprehensive State AI Law

Colorado's SB 24-205, signed into law in May 2024, represents a landmark moment in US AI regulation. Originally set to take effect on February 1, 2026, its effective date was later delayed to June 30, 2026. As the first comprehensive state-level AI law, it establishes requirements that extend far beyond employment to cover any "high-risk AI system" that makes or substantially assists in consequential decisions affecting consumers.

The law defines high-risk AI systems as those that, when deployed, make or are substantial factors in making consequential decisions. Consequential decisions include those that have material legal or similarly significant effects on provision or denial of financial services, housing, education, employment opportunities, healthcare, insurance, or legal services. The breadth of this definition means organizations across virtually every sector must evaluate whether their AI systems fall under the Colorado AI Act's requirements.

Key compliance obligations under the Act include implementing a comprehensive risk management policy and program, conducting detailed algorithmic impact assessments before deployment, providing clear notice to consumers when high-risk AI is used in decisions affecting them, establishing mechanisms for consumers to correct inaccurate information and appeal automated decisions, taking reasonable care to protect consumers from algorithmic discrimination, and maintaining detailed documentation of all compliance efforts.

Colorado AI Act vs. NYC Local Law 144

Colorado's law is broader in scope but similar in spirit to NYC's regulation. Both aim to prevent algorithmic discrimination and require transparency, but Colorado's framework applies to all high-risk AI systems while NYC focuses specifically on employment decision tools. Colorado requires comprehensive impact assessments and risk management programs; NYC requires statistical bias audits.

For employers using AI in hiring: If you operate in both jurisdictions, start with NYC Local Law 144 compliance - it's already in effect. The processes you build for NYC compliance will provide a strong foundation for meeting Colorado's June 2026 requirements.

Get NYC AEDT Compliant First

Impact Assessments and Risk Management

The Colorado AI Act introduces the concept of "algorithmic impact assessments," comprehensive evaluations that organizations must conduct before deploying high-risk AI systems. These assessments go well beyond traditional bias testing to examine the system's purpose and intended benefits, its known limitations and potential harms, the categories of data processed and their relevance to the decision, transparent disclosure measures, data governance procedures, performance metrics and monitoring processes, and post-deployment evaluation mechanisms.
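To make the assessment components above concrete, here is a minimal sketch of how a compliance team might track them internally. This is purely illustrative: the field names and completeness check are our own invention, not statutory language or a required format.

```python
from dataclasses import dataclass

# Illustrative record mirroring the assessment components listed above.
# Field names are hypothetical shorthand, not terms from SB 24-205.
@dataclass
class ImpactAssessment:
    system_purpose: str                    # purpose and intended benefits
    known_limitations: list[str]           # limitations and potential harms
    data_categories: list[str]             # data processed and its relevance
    disclosure_measures: str               # transparency disclosures to consumers
    data_governance: str                   # data governance procedures
    performance_metrics: dict[str, float]  # metrics and monitoring processes
    post_deployment_review: str            # post-deployment evaluation mechanisms

    def missing_sections(self) -> list[str]:
        """Return the names of any sections left empty - a simple completeness check."""
        return [name for name, value in vars(self).items() if not value]


# Example: a draft assessment with two sections still unfinished.
draft = ImpactAssessment(
    system_purpose="Resume screening for entry-level roles",
    known_limitations=[],  # not yet documented
    data_categories=["education", "work history"],
    disclosure_measures="Notice shown on the application page",
    data_governance="",  # not yet documented
    performance_metrics={"selection_rate_gap": 0.08},
    post_deployment_review="Quarterly disparity review",
)
print(draft.missing_sections())
```

A simple completeness gate like this could block deployment until every section is documented, which aligns with the law's requirement that assessments be completed before a high-risk system goes live.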

Risk management requirements are equally thorough. Organizations must establish formal risk management policies that identify and mitigate foreseeable risks of algorithmic discrimination. This includes implementing appropriate governance structures, conducting regular performance testing, maintaining human oversight mechanisms, and documenting all risk mitigation efforts. These are not merely paper exercises: the law requires organizations to demonstrate they are taking "reasonable care" to protect consumers from algorithmic discrimination.

Unlike some regulations that prescribe specific technical measures, Colorado's law is principles-based. It focuses on outcomes (preventing discrimination and ensuring transparency) rather than dictating exact methodologies. This flexibility allows organizations to tailor their compliance approaches but also creates responsibility to make thoughtful, defensible choices about how to meet the law's objectives.

Setting a Precedent for Other States

Colorado's pioneering status means other states are watching closely. Early indications suggest several states are considering similar legislation, and many are looking to Colorado's framework as a model. For multistate organizations, this creates both challenges and opportunities. The challenge is potential regulatory fragmentation, complying with different requirements across jurisdictions. The opportunity is that building robust AI governance now, based on Colorado's comprehensive approach, positions organizations well for whatever regulatory landscape emerges.

The law also includes notable enforcement mechanisms. Colorado's Attorney General has exclusive authority to investigate potential violations and bring enforcement actions; the Act does not create a private right of action for consumers. It does, however, provide an affirmative defense: organizations that discover and cure a violation, and that otherwise maintain a compliant risk management program, have an opportunity to remedy problems before facing penalties.

For organizations currently subject only to narrower regulations like NYC's employment AI law, Colorado's Act provides a preview of what comprehensive AI regulation looks like. Even if not immediately required to comply, studying Colorado's framework helps organizations understand the direction of AI regulation and prepare for likely future requirements in other jurisdictions.

Ready to Get AEDT Compliant?

While Colorado's AI Act takes effect on June 30, 2026, employers hiring in New York City face immediate compliance obligations under Local Law 144. Book a 15-minute consultation to discuss our full compliance package.