
Colorado AI Group Reveals Its Approved Policy Guidelines

The Colorado AI Policy Work Group has developed, and now approved, a framework for changes to the state’s landmark legislation establishing consumer protections related to AI. Transparency is a priority.

(Image: a human figure weighs a 1 and a 0 on a scale. Shutterstock)
The Colorado AI Policy Work Group has now developed and approved a framework that aims to protect consumers from negative impacts of AI while helping the state navigate changes to its disputed AI regulations.

The Colorado legislature passed the Consumer Protections for Artificial Intelligence bill, Senate Bill 24-205, also known as the Colorado AI Act, in 2024. The law was initially scheduled to take effect Feb. 1 but was amended through SB 25B-004, the AI Sunshine Act, to delay enforcement until June 30. Once in effect, the law will require AI developers to protect consumers from “reasonably foreseeable risks of algorithmic discrimination.”

More than 50 local and national civil society organizations supported the AI Sunshine Act, including the Center for Democracy and Technology, the American Civil Liberties Union of Colorado, the Colorado Cross-Disability Coalition, and the National Employment Law Project. Opponents, however, argue that the state should not regulate AI at all and should instead wait for a federal framework.

Amid the federal government’s attempts to rein in states’ authority to regulate AI, the Colorado AI Act has been a topic of discussion. President Donald Trump’s executive order on the topic specifically called out Colorado, indicating the state has a law in place that requires the alteration of truthful AI outputs, although no such language exists in any current Colorado laws.

Colorado CIO David Edinger previously told Government Technology that any changes to the law are ultimately “in the legislators’ hands.”

The Colorado AI Policy Work Group launched in October and, on Tuesday, gave its unanimous support to a policy framework addressing the use of AI and automated decision-making technology (ADMT) in decisions that impact consumers. The proposal requires up-front notice to Coloradans when AI or ADMT is used in “consequential” decisions — those affecting opportunities in education, employment, housing, insurance, finance, health care, public benefits or government services.

If the decision made by such a system is adverse, the deployer will have 30 calendar days to provide the consumer with a plain-language description of the decision and the role that ADMT played in making it, instructions to request additional information, information on how to request personal data and correct inaccuracies, and information on how to request human review or reconsideration “if available.”

The language does not appear to guarantee that human review will always be available. The wording also implies that consumers experiencing adverse outcomes may request human review and reconsideration only “to the extent commercially reasonable.”

The framework is intended to build on the state legislature’s previous work. The group that created it is made up of organizations representing consumers, hospitals, school districts, technology users and companies.

“I look forward to supporting the recommended framework as legislation moves through the process, and commend the Colorado AI Policy Work Group for their efforts to get us here,” Gov. Jared Polis said in a statement about the working group’s agreement.