After his election, experts predicted President Trump would enact policy changes, and indeed, in January he rescinded an executive order (EO) from the previous administration that established AI risk management guidance. In July, he released an AI Action Plan, but stakeholders raised concerns at the time about its impact on states’ regulatory authority. After two failed attempts by Congress to rein in state AI regulation, Trump did so by EO in December.
So, what does this mean for states? One concern is that the new order withholds congressionally allocated funding for state broadband deployment from states that enact what it vaguely dubs “onerous AI laws.” States will feel the order’s effects differently, as they have taken different approaches to AI regulation.
For example, Idaho CIO Alberto Gonzalez said that because the state has taken an approach of guidance over governance, the state’s strategy is in alignment with the federal directive: “So, it doesn’t impact us, as far as I’m concerned.”
Some lawmakers, however, argue that limiting states’ regulatory authority infringes on states’ rights.
Colorado was called out by name in Trump’s EO, which alleges the state has a law requiring the alteration of truthful AI outputs, though no such language exists in any current Colorado law. The state has, however, enacted the Colorado AI Act — also known as Senate Bill 24-205 (SB 24-205) — which implements a risk management system addressing algorithmic discrimination, something one state lawmaker said consumers have requested.
Colorado has an AI working group, formed by Gov. Jared Polis, of which CIO David Edinger is a member. The CIO said that the working group is likely to make a recommendation to legislators soon to inform their re-evaluation of SB 24-205 during the next session, which begins in January: “That’s in the legislators’ hands.”
One state legislator, Colorado Rep. Brianna Titone, emphasized to Governing* that the executive order’s attempt to limit state policymaking is not legal, “and it will be challenged in court.”
From a global perspective, Trump’s goal of positioning the U.S. at the front of the AI race is expected to shift international norms toward tech nationalism, according to an October report on 2026 technology predictions from Forrester Research.
Some experts believe that an “AI bubble” — inflated by significant investment and ambitious goals — will pop in 2026. If that happens, it would likely mark a shift toward AI uses that can provide more tangible returns on investment.
Several states, including Georgia and North Dakota, have already heightened their focus on AI implementations that can deliver quick wins.
Another expectation is that 2026 will see increased focus on agentic AI, the next evolution of the technology, which involves autonomous decision-making.
State leaders are already exploring how to integrate agentic AI tools. Alaska officials are exploring the tech for its digital government portal. Indiana officials are looking to make it a part of the notary licensing process, and Virginia officials want to adopt agentic AI tools to support a regulatory review process.
Utah CIO Alan Fuller told Government Technology that he is “very bullish” on agentic AI, which he believes will help the state government drive more productivity among employees.
Still, questions remain about states’ ability to enact laws related to AI.
“An executive order cannot create law and cannot preempt state authority,” U.S. Rep. Ted W. Lieu said in a statement.
And this sentiment is not partisan; Florida’s Republican Gov. Ron DeSantis emphasized during a roundtable that an EO cannot block states, and indicated plans to advance a legislative proposal to create an AI Bill of Rights.
The American Civil Liberties Union has deemed this EO “unconstitutional.” And according to the Center for Democracy and Technology, states should continue “pursuing AI legislation as they see fit.”
Uncertainty remains, however, as to exactly what state-level policymaking on AI may look like in 2026.
“Amidst unpredictable federal action around AI regulation, state and local leaders have an urgent responsibility to model people-first AI governance and deployment,” stated a December AI road map from the NewDEAL Forum.
For now, states plan to continue regulating AI as they see fit. In fact, New York’s governor on Friday signed a new AI law creating safety requirements and an oversight office.
*Governing is part of e.Republic, Government Technology's parent company.