How Does AI Predict Governments Will Use AI? It Depends Where You Live

Government Technology wanted to know what a generative AI model thought was the best AI use case for governments in each state — and if those uses were ethical or feasible. Google AI’s Gemini answered in surprising ways.

Task forces are popping up nationwide to answer a burning question: What is the best way for governments to use generative AI responsibly and ethically?

While Government Technology would typically interview industry experts and thought leaders about this topic, we decided to try something different for this piece: We turned to generative AI itself.

It's critical to mention before we go any further that large language models (LLMs), like the one we used for this story, can hallucinate and present inaccurate information as factual. Therefore, this piece is meant to spark conversation and explore potential applications, not provide definitive predictions.


To receive specific, data-driven responses from an LLM that is up to date on current events and government data, we asked Gemini, the generative artificial intelligence chatbot designed by Google, to be our voice of AI for this story. The LLM has access to real-time news articles and data through Google Search.

According to Google, “Gemini 1.0 was trained to recognize and understand text, images, audio and more at the same time, so it better understands nuanced information and can answer questions relating to complicated topics. This makes it especially good at explaining reasoning in complex subjects like math and physics.”

Because LLMs rely on well-defined prompts to deliver the most accurate and relevant responses, we crafted a single universal prompt and ran it separately for each of the 50 states. The prompt instructed Gemini to do the following:

“Gather information from official government websites, the Bureau of Labor Statistics, and news APIs to deliver one realistic idea for how state and local governments in [insert state] could use the power of AI now to enhance a government process. Provide names of specific agencies, details about how AI could be used and the risks and considerations necessary to ethically use AI.” 
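For readers curious how a prompt like this could be run at scale, here is a minimal sketch of one way to automate it, assuming Google's google-generativeai Python library; the API key, model name and abbreviated state list are placeholders for illustration, not details from this story's process.

# Minimal sketch, assuming the google-generativeai Python library is installed
# and an API key is available. Model name and state list are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")

PROMPT = (
    "Gather information from official government websites, the Bureau of Labor "
    "Statistics, and news APIs to deliver one realistic idea for how state and "
    "local governments in {state} could use the power of AI now to enhance a "
    "government process. Provide names of specific agencies, details about how "
    "AI could be used and the risks and considerations necessary to ethically use AI."
)

STATES = ["Alabama", "Alaska", "Arizona"]  # continue through all 50 states

responses = {}
for state in STATES:
    # Run the single universal prompt once per state and keep the text reply.
    result = model.generate_content(PROMPT.format(state=state))
    responses[state] = result.text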


Gemini delivered 50 detailed responses, one for each state, and no two were the same. The solutions targeted a broad spectrum of government functions, aiming to improve outcomes for the public and the environment.


After we plotted the response data on a map, some regional trends appeared. Gemini generally recommended using AI to address environmental issues in the West and Midwest, while most of its recommendations in the Northeast and South revolved around public services or social and economic development.
To determine which government functions drew the most AI use cases, we then grouped each of Gemini's recommendations into categories. Clear recurring pain points emerged: workforce, wildfires, student performance and agriculture.


In some cases, governments have already put ideas like Gemini's into practice. For example, Gemini suggested that Arizona: “Implement an AI-powered virtual assistant to assist with answering frequently asked questions and scheduling appointments for the Arizona Department of Motor Vehicles (ADOT MVD).”

Arizona launched a chatbot on the MVD website nearly four years ago; it was trained to answer frequently asked questions and provide details for people who want to make an appointment at an MVD office. In the chatbot's first week of operation in 2020, the agency announced it had logged 26,000 interactions with users.

In the Pacific Northwest, Gemini suggested various AI-powered platforms for wildfire risk prediction, prevention, assessment and resource management. A year ago, the Washington Department of Natural Resources launched a pilot program using advanced AI technology for early detection and monitoring of wildfires, allowing for a safer, faster and more effective response.

"With wildfire every second counts," said Hilary Franz, commissioner of public lands, in a press release. "Increased early detection through deploying technology like Pano AI means we can respond faster while fires are still small — saving lives and property while reducing costs — which is crucial as wildfire seasons get longer and more challenging."


A common suggestion from Gemini was that governments could create chatbots to help serve their constituents better.

That happens to mirror the real-life interest governments have in using chatbots to improve their processes. According to the Center for Digital Government’s* Digital Counties, Digital States and Digital Cities 2022 surveys, chatbots have secured a foothold at all levels of state and local government, with a majority of agencies either already using them or planning to within the next 12 to 18 months.

Gemini suggested bots could be used for everything from chatting with farmers about crop production in North Dakota to telling people in Missouri whether they live in an area at high risk of flooding. It also suggested New York create a chatbot assistant to support emergency preparedness and response.

Benjamin Palacio, a senior IT analyst for Placer County, Calif., and a public-sector chatbot expert, has seen how effective a chatbot can be. During the River Fire in Placer County in 2021, the county's “Ask Bob” chatbot was used as an emergency response tool.

“The capability of supplying this information to residents in a state of emergency and mobile is huge, not to mention the relief it can provide,” said Palacio in an email to Government Technology.

Palacio also sees potential for chatbots focused on weather or climate data.

“AI and chatbots share a similar constraint,” said Palacio. “They both rely on context streams to support good responses. These context streams feed the AI and bots information on how to query data and use models, historic or trained, to provide the best possible responses. Based on this, historical analysis of various models, weather patterns for example, would seem more accurate and become feasible.”

But in Utah and Georgia, Gemini’s chatbot suggestions dove deeper: It suggested chatbots there could offer emotional support and information on mental health conditions and treatment options. For that use, Palacio has doubts.

“I have experienced some chatbots that are extremely frustrating, with poor conversation flow or use of context,” Palacio said. “I could see this becoming a concern with a chatbot used to offer emotional support or mental health feedback. A big part of this problem, sentiment value, is hard to establish from a text-based conversation — meaning, understanding the emotional state of the constituent.”


We asked Gemini to specify the risks and ethical considerations governments would need to weigh before implementing any of its proposed use cases.
The most common risk, identified for use cases in 30 different states, was algorithmic bias. Gemini stipulated that for a proposed project to be effective and ethical, the AI would need to be trained on diverse data sets and employ safeguards to prevent bias. Because few state governments have developed mature data governance, it would be difficult for many agencies to obtain the clean, complete data sets needed to train models ethically.

Data privacy and security, transparency and explainability, and overreliance on AI were also among the top risks Gemini identified. Interestingly, job displacement was cited as a risk or concern in only one use case.


Finally, we wanted to know what Gemini really thought about its own predictions. We asked it for one final prediction about whether and when governments would really use AI in the ways it suggested.

“While the exact timeframe is uncertain, widespread implementation of these AI solutions in state governments is likely within the next 5-10 years," responded Gemini. "Progress hinges on collaboration between researchers, policymakers and the public to ensure responsible development and address potential privacy concerns. This isn't science fiction — it's the future of how technology can empower our governments to serve citizens more effectively."

*The Center for Digital Government is part of e.Republic, Government Technology’s parent company.
Nikki Davidson is a data reporter for Government Technology. She’s covered government and technology news as a video, newspaper, magazine and digital journalist for media outlets across the country. She’s based in Monterey, Calif.