For the past few years, AI in government has been discussed mostly in the abstract. Ethics principles. Responsible AI frameworks. Committees and working groups. Thoughtful documents that articulate values most people already agree on. That phase mattered. It created a shared language and bought institutions time to orient themselves.
But time is up.
What is changing is not only the law, though that matters. What is changing is that AI has crossed a threshold where it no longer behaves like a discrete technology an agency adopts and instead behaves like infrastructure. It blends into workflows, incentives and edge cases. It shows up in places no one labeled as AI. And in 2026, these realities will collide with regulation, procurement, audits and public scrutiny in ways policy alone cannot absorb.
AI IS NO LONGER WHERE YOU EXPECT IT
One reason policy is no longer enough is simple: AI is no longer confined to systems agencies deliberately deploy. It appears in browser tools staff use to summarize emails or draft reports. It is embedded in vendor products marketed as analytics, automation or optimization. It powers internal triage, routing and detection systems purchased years ago, before anyone asked whether they counted as AI at all.
Increasingly, AI is also reshaping the environment agencies operate in, even when the agency itself is not the one using it.
Public records are an early example. Third parties are beginning to automate Freedom of Information Act (FOIA) and public records requests using AI, generating large volumes of technically valid, narrowly tailored requests at almost no cost. Teams sized for human-scale demand suddenly find themselves overwhelmed, not because transparency rules changed, but because the economics of request generation did.
Procurement is another example. The time and effort required for vendors to produce proposals has dropped sharply with AI. Agencies are seeing two or even three times as many responses to the same RFP, without any increase in staff or evaluation time. Procurement teams now face a qualification and review problem they did not design for, created by AI adoption outside the agency.
None of this violates an AI policy. None of it triggers an ethics review. All of it is operationally real.
2026 IS WHEN THEORY MEETS OPERATIONS
New laws in places like Colorado and Texas matter not because they are perfect, but because they force specificity. They introduce concepts that sound abstract until agencies must operationalize them: AI inventories, high-risk systems, impact assessments, bias monitoring and ongoing risk management.
In theory, these requirements are reasonable. In practice, they expose how much AI governance has lived at the level of intent rather than execution.
An AI policy may say the agency will ensure fairness and transparency. A regulator, auditor or contract amendment will ask where that happens, for which systems, with what evidence and how often.
That is the shift underway.
GOVERNANCE MOVES FROM STATEMENTS TO SYSTEMS
Even agencies not directly subject to a specific state law will feel this pressure. Vendors operate across jurisdictions. Federal procurement standards influence the market. Contract language travels. Expectations converge. In 2026, agencies will increasingly be asked not whether they believe in responsible AI, but whether they can demonstrate control over the AI already operating in their environment.
THE VISIBILITY PROBLEM
Most agencies do not have an AI adoption problem. They have an AI visibility problem.
They cannot say with confidence where AI is being used, what decisions it influences, what data it touches or how it changes over time. Not because of negligence, but because AI no longer announces itself. It is bundled. It is updated remotely. It is turned on quietly. Models drift. Vendors change dependencies. Staff experiment.
Without an active inventory, governance becomes reactive. Agencies discover AI after it has already shaped outcomes, or after someone asks an uncomfortable question.
This is why inventory matters more than policy in the next phase. Not as a static spreadsheet, but as an ongoing capability to surface AI usage, classify risk and decide where controls are required.
Some agencies are already learning this firsthand. In Aurora, efforts to map AI usage surfaced tools in use that leadership was not aware of, including AI features embedded in vendor products that had never been explicitly disclosed. The exercise was not about blame. It was about reality. Once visibility improved, decisions became easier and risk conversations more grounded.
HIGH-RISK AI IS NOT A LABEL
High-risk AI is often misunderstood as a category agencies either fall into or avoid. In practice, it is a signal that some systems deserve more discipline than others.
Anything that materially affects access to services, employment, safety or individual rights requires a higher bar. That bar is not about intent. It is about documentation, testing, human oversight and the ability to detect when a system stops behaving as expected.
This is where model drift and bias move from theory to operations. A system that was acceptable last year may degrade quietly. A vendor update can change outcomes. New data can skew results. None of this requires bad actors. It is simply how probabilistic systems behave over time.
In 2026, agencies will be expected to show not only that they evaluated AI at launch, but that they manage it continuously.
POLICY IS NOT A CONTROL
This is the uncomfortable but necessary point: Policy is not a control.
Policies articulate values. Controls shape behavior.
A principle does not log decisions. A framework does not enforce procurement gates. A committee does not monitor drift. Controls do.
Real governance shows up in unglamorous places: intake forms, contract clauses, review workflows, monitoring dashboards and escalation paths. It is boring by design, which is precisely why it works.
Some agencies already treat AI governance the way they treat security or safety. CapMetro in Austin, Texas, is one example. Rather than relying on infrequent committee meetings, they established a regular operational rhythm to review AI use, risks and mitigations. The result is not bureaucracy. It is calm. Decisions happen faster because the rails are already in place.
WHAT WILL SEPARATE AGENCIES IN 2026
In 2026, the gap will not be between agencies that care about AI and those that do not. It will be between agencies that built operational capacity and those that stayed at the level of aspiration.
Agencies that navigate this well will have done a few unglamorous things early. They built and maintained a real AI inventory, including embedded and shadow AI. They defined what high risk means for their mission and tied that definition to action. They made procurement the front door, not an afterthought. They assigned clear ownership for AI risk. They assumed AI would change continuously and built monitoring to match that reality.
As a result, they will feel calmer, not slower. They will be able to say yes faster because they know where the guardrails are.
THE REAL QUESTION
The question for agency leaders in 2026 is not whether they have an AI policy. It is not even whether they believe they are compliant.
It is whether they could answer confidently, if asked today, which AI systems they use, which ones are high risk and how they are managed, and whether that answer would still be true next month.
AI policy was a necessary first step. But policy was the easy part. The harder and more important work is building the capability to govern AI as it actually exists: everywhere, evolving and already shaping outcomes.
That is the work 2026 will measure.
Author Bio
Noam Maital is the CEO and co-founder of Darwin AI, where he works with state and local governments to help operationalize responsible AI. His work focuses on moving AI governance beyond policy by establishing practical controls for visibility, risk management, procurement, and ongoing oversight as AI becomes embedded across government operations.