In Colorado, CIO David Edinger said his office has so far reviewed about 120 ideas for potential uses of AI in state government. Below, he explains how his team vets agency proposals to use AI. Among ideas classified as "high" risk under the NIST framework, the proposals they reject almost always share the same problem: data-sharing practices that don't meet the state's data privacy requirements.
In a conversation with Government Technology at last month's National Association of State Chief Information Officers (NASCIO) Midyear Conference, California Chief Technology Officer Jonathan Porat explained that the state evaluates prospective uses of artificial intelligence against three main considerations. Aside from the appropriateness of the use case itself for state government, officials also weigh the technology's track record. Third, they dig into the data involved in the proposal.
“Are the data that we’re using appropriate for a GenAI system?” Porat said. “Are they properly being governed and secure?”
Video transcript: I would say we've reviewed maybe 120 proposals so far, across every agency, for all possible uses, and we follow the NIST framework for that. So it's medium, high or prohibited. If it's prohibited, we prohibit it. If it's medium, we just deploy it. If it's high, we evaluate it more thoroughly. And when we do evaluate it and we say no, it's almost always not because of how it was intended to be used, but because of data sharing: what data we're then sharing with whoever that provider is, per their standard contract, that we usually can't share under state law. So it's PII or HIPAA or CJIS or something like that, and we have to say it's not because of how you want to use the tool, it's because you're giving away the data in a way that we can't accept. And that's really the crux of it. That was another surprise: it's not how people are trying to use it, it's what's going on with the privacy of the data.