
When the Government Should Say ‘No’ to an AI Use Case

As state officials move forward with various testing environments for artificial intelligence, IT leaders remain focused on ensuring that partners’ data practices meet government standards.

Colorado CIO David Edinger (Government Technology/David Kidd)
States across the nation are creating "sandboxes" and otherwise encouraging experimentation with AI that enables more effective and efficient operations. Call it, perhaps, AI with a purpose. But advancing innovation in government comes with risk.

In Colorado, CIO David Edinger said his office has so far reviewed about 120 ideas for potential uses of AI in state government. Below, he explains how the state vets agency proposals to use AI. Of the ideas classified as "high" risk under the NIST framework, most of those the state rejects have something in common: data practices that don't meet the state's data privacy requirements.


Colorado is not alone in keeping the data practices of potential AI partners at the forefront of its decision-making.

In a conversation with Government Technology at last month's National Association of State Chief Information Officers (NASCIO) Midyear Conference, California Chief Technology Officer Jonathan Porat explained that the state evaluates prospective AI use cases on three main components: whether the use case itself is appropriate for state government, the track record of the technology, and the data involved in the proposal.

“Are the data that we’re using appropriate for a GenAI system?” Porat said. “Are they properly being governed and secure?”

Video transcript: I would say we've reviewed maybe 120 proposals so far across every agency for all possible uses, and we follow the NIST framework for that. So it's medium, high or prohibited. If it's prohibited, we prohibit it. If it's medium, we just deploy it. If it's high, we evaluate it more thoroughly. And when we do evaluate it and we say no, it's almost always not because of how it was intended to be used, but because of data sharing and what data we're then sharing with whoever that provider is, per their standard contract, that we usually can't share by state law. So it's PII or HIPAA or CJIS or something like that, and we have to say it's not because of how you want to use the tool, it's because you're giving away the data in a way that we can't accept. And that's really the crux of it, and that was another surprise: it's not how people are trying to use it. It's what's going on with the privacy of the data.