Report Offers Guidance for Government on AI Adoption

The National Academies of Sciences, Engineering, and Medicine's new release cautions public-sector agencies against acquiring AI-powered tools without adequate vetting and governance.

Artificial intelligence tools should go through a careful vetting process that considers the problem they will help solve and the people who will use them, according to a new strategy report for government.

“Know your people. Know your organization. And be hesitant to implement artificial intelligence,” Nathan McNeese, founding director of the Clemson University Center for Human-AI Interaction, Collaboration and Teaming, said.

McNeese, an author of the new report, Strategies for Integrating AI into State and Local Government Decision Making: Rapid Expert Consultation by the National Academies of Sciences, Engineering, and Medicine, discussed it during a webinar Aug. 21.

“Really know what the purpose is for the implementation, and think very, very deeply about how it is going to be perceived, how it is going to be utilized by your people,” he said, echoing the report. “Because at the end of the day, artificial intelligence is a technology, but it is turning into being the most forward-facing technology that we’ve ever seen.”

Report co-author Suresh Venkatasubramanian, a computer science professor at Brown University, where he directs the Center for Technological Responsibility, Reimagination and Redesign, described the report as offering “constructive guidance”: it advises public-sector agencies to adopt internal governance policies around data and the vetting of technology, along with systems to measure the effectiveness of AI tools.

Venkatasubramanian offered a cautionary framing: view AI systems as “potentially useful and quite often, unreliable.”

“There’s a lot of pressure, often coming from the top down, to introduce AI systems. I think the most encouraging things have been brought about by people on the ground, working with problems, trying to solve problems for people, and thinking carefully about how to incorporate the use,” Venkatasubramanian said during the webinar.

Being able to measure, monitor and evaluate is critical, he said.

“This is where the ethos of experimentation comes back into play,” he said. “We need to try things out. We need to measure. We need to understand what the measurements are telling us. And we need to adapt and refocus as needed.”
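As a thumbnail of that measure-and-adapt loop, here is a minimal sketch in which a deployed tool’s outputs are sampled for human grading and a drop in measured quality triggers a course correction. The sample rate, metric and threshold are illustrative assumptions for the example, not figures from the report.

```python
# Minimal sketch of the "try, measure, adapt" ethos: randomly sample a
# deployed AI tool's outputs, have human reviewers grade them, and decide
# whether to keep going. The 5% sample rate and 90% accuracy threshold
# are illustrative assumptions, not values prescribed by the report.
import random

def sample_outputs(outputs, rate=0.05):
    """Randomly select roughly 5% of the tool's outputs for human review."""
    return [o for o in outputs if random.random() < rate]

def measure(graded_sample):
    """graded_sample: booleans from human reviewers (output correct or not)."""
    return sum(graded_sample) / len(graded_sample) if graded_sample else None

def decide(accuracy, threshold=0.90):
    """Adapt and refocus when measured quality falls below the threshold."""
    if accuracy is None:
        return "no data yet: keep sampling"
    if accuracy < threshold:
        return "below threshold: adapt or refocus"
    return "within tolerance: continue monitoring"

# Example: reviewers graded 40 sampled outputs and marked 34 correct.
grades = [True] * 34 + [False] * 6
print(decide(measure(grades)))  # -> "below threshold: adapt or refocus"
```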

That deliberative ethos guides the thinking in cities like San Jose, where AI tools are evaluated against factors such as risk and how they can serve the public good, said Leila Doty, a privacy and AI analyst with the city.

San Jose has introduced an AI-powered language translation tool at City Council meetings, which has augmented the human interpretation service.

“It allows us to provide real-time translation in dozens more languages than we normally would have,” Doty said during the panel. “And the result of that is providing more accessibility to our residents.”

A language translation tool is seen as a relatively low-risk aid because it processes data that is already released to the public. By contrast, an AI tool such as police body-worn camera analytics would likely receive a higher risk rating and increased vetting.

“These are things that we’re seeing in the field right now, that we know agencies are interested in using,” Doty said, indicating San Jose conducts a “triage for risk” when a city department wants to deploy an AI tool.

“We’re trying to assess if this system is low, medium or high risk,” Doty said, walking through some of the factors the city technology team considers, such as the tech tool’s purpose, potential privacy impacts, the type of data it will require, and how much human oversight the system will have.
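To make that triage concrete, here is a minimal sketch of how such a rubric might be encoded. The factors mirror the ones Doty describes; the weights and tier cutoffs are hypothetical illustrations, not San Jose’s actual criteria.

```python
# Illustrative sketch of an AI risk-triage rubric built from the factors
# described above: purpose, privacy impact, the data involved and human
# oversight. Weights and cutoffs are hypothetical, not San Jose's rubric.
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    affects_rights_or_safety: bool   # does the tool's purpose touch rights or safety?
    processes_personal_data: bool    # potential privacy impact
    data_already_public: bool        # e.g., council audio vs. body-cam footage
    human_reviews_output: bool       # degree of human oversight

def triage(tool: AIToolAssessment) -> str:
    """Return a coarse low / medium / high risk tier."""
    score = 0
    score += 3 if tool.affects_rights_or_safety else 0
    score += 2 if tool.processes_personal_data else 0
    score += 0 if tool.data_already_public else 2
    score += 0 if tool.human_reviews_output else 2
    if score >= 5:
        return "high"
    return "medium" if score >= 2 else "low"

# A council-meeting translation tool: public data, human interpreters in the loop.
print(triage(AIToolAssessment(False, False, True, True)))   # -> "low"

# Body-worn-camera analytics: personal data, not public, rights implications.
print(triage(AIToolAssessment(True, True, False, False)))   # -> "high"
```

In practice a score like this would route a proposal into a deeper review rather than replace one; the point is simply that the factors Doty lists can be written down and applied consistently.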

This is the kind of guidance offered by the National Academies report, and by groups like the GovAI Coalition, a network of some 850 government agencies and 2,500 members, including public- and private-sector workers.

San Jose was one of the Coalition’s founding agencies when it was established in 2023 to help the public sector develop policies and procurement practices for effectively vetting and deploying such tools. The Coalition has created a large suite of AI governance resources, ranging from high-level AI policy and governance processes to use cases and implementation guides.

“We felt that it made a lot of sense to come together as a collective,” Doty said, “and demand more accountability from the AI vendors that we work with.”
Skip Descant writes about smart cities, the Internet of Things, transportation and other areas. He spent more than 12 years reporting for daily newspapers in Mississippi, Arkansas, Louisiana and California. He lives in downtown Yreka, Calif.