Dr. Mila Gasco Hernandez, research director for the center, and Tzuhao Chen, a Ph.D. student at UAlbany, co-authored a study titled "The Adoption and Implementation of Artificial Intelligence Chatbots in Public Organizations: Evidence from U.S. State Governments." The study is based on 22 interviews with local and state governments across the U.S. and explores the various use cases and implementation challenges facing agencies.
“In the timeframe our chatbot study began, there was a lot of discussion going on regarding the use of AI in governments and its potential impacts on both organizations and society,” Chen shared. “We found that most of the discussion remained at the conceptual level, and we wanted to explore more about the real use cases in the U.S.”
The research team identified several factors that shape how government chatbots are implemented across the country. Chen explained that most deployments centered on providing information, such as how to file unemployment insurance claims or access education services.
“We didn’t find many cases where chatbots are used for service application, so we believe that area is still developing,” Chen added.
One initial hurdle in the implementation process was identifying what data an agency needed for its chatbot tools and how to acquire that data. Chen said the agencies they spoke to had to conduct an in-depth analysis to pinpoint which types of questions to include in the AI-driven chatbot software.
“In some situations, we found that agencies had to work with their colleagues at the call center to discover what the top 10 questions they received were in order to accurately identify which pieces of information to build into the chatbot,” Chen said.
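The kind of analysis Chen describes can start with something as simple as tallying which topics come up most often in call-center logs. The study does not describe the agencies' actual tooling, but as a rough, hypothetical sketch (the file name and column name are assumptions for illustration), a short script could surface the candidate questions to build into a chatbot's knowledge base:

```python
# Hypothetical illustration only: tally the most frequent call-center topics
# so the top questions can be built into a chatbot's knowledge base.
# "call_center_log.csv" and its "topic" column are assumed, not from the study.
import csv
from collections import Counter

def top_questions(log_path, n=10):
    """Return the n most common question topics found in a call-log CSV."""
    counts = Counter()
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            topic = row.get("topic", "").strip().lower()
            if topic:
                counts[topic] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for topic, count in top_questions("call_center_log.csv"):
        print(f"{count:5d}  {topic}")
```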
Another set of challenges uncovered during the study had to do with human and financial resources — or lack thereof. Gasco Hernandez noted that many of the chatbot systems were created through third parties, which can add complexity to the management process.
“The fact that many chatbots and AI-based systems are contracted out is also a challenge because then we are talking about potentially missing those public values that are so important when delivering any kind of public services,” Gasco Hernandez said.
She noted that for the process to work, IT vendors must take responsibility for building the chatbots' databases and continuously enhancing their content.
“It is like a dance where you are constantly engaging with a partner, and you have to be in step collectively — in unison — for the tools to perform well and address questions properly,” she added.
During the study, the researchers also had the opportunity to engage with the public employees and managers who helped implement the chatbots, and they encountered some surprising takeaways — namely, a lack of agreement on the technology's overall impact and level of acceptance.
“Some interviewees shared that because the chatbot addressed many information demands, they could use their time doing something else. On the other side of the issue, interviewees shared that because they had to consistently check to verify if the chatbot was distributing the correct information, it increased their workload, and they weren't happy,” Gasco Hernandez said.
Given the mixed feedback, the researchers explained that chatbot implementation is not a one-time effort but an ongoing process. Agencies and their partners need to continually maintain both the chatbots' content and the underlying infrastructure, and support from upper management can make or break a deployment.
It’s also important to embody the three P’s — plan, performance and partnerships — when implementing any AI-based system. Each will be vital as agencies deploy chatbots across government, and each will shape whatever impact the tools have on public engagement and service delivery.
As those discussions surrounding the evolution of AI-based tools continue to unfold, the UAlbany team predicts that, to mitigate legal complications, companies will stick to the basics in some form or another when programming chatbots.
“I think generative AI won’t replace the current use of the retrieval-based chatbots because the organizations want to prioritize consistency in terms of the answers and responses the chatbots relay to the public to mitigate legal risks,” Chen said.
He believes the key to successful implementation and getting maximum value from generative AI lies in “leveraging it for analyzing trends in information demands to continually enhance knowledge bases, creating a more conversational and human-like interaction with chatbots.”
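To make the distinction concrete, a retrieval-based chatbot only ever returns pre-approved answers matched from a fixed knowledge base, which is what gives agencies the consistency Chen describes; generative AI would then sit alongside it, mining question logs to decide which new entries the knowledge base needs. The sketch below is purely illustrative; the entries and the word-overlap matching are assumptions, not any agency's actual system:

```python
# Purely illustrative retrieval-based FAQ bot: it can only return vetted
# answers from a fixed knowledge base, which is why consistency-minded
# agencies favor this design. Entries and scoring are hypothetical.

KNOWLEDGE_BASE = {
    "how do i file an unemployment insurance claim":
        "You can file a claim online through the state labor department's portal.",
    "what documents do i need to apply for benefits":
        "Typically a government-issued ID and recent pay stubs are required.",
}

def _overlap(a, b):
    """Score two questions by the share of words they have in common."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def answer(user_question, threshold=0.3):
    """Return the vetted answer whose stored question best matches, or a fallback."""
    best = max(KNOWLEDGE_BASE, key=lambda q: _overlap(q, user_question))
    if _overlap(best, user_question) >= threshold:
        return KNOWLEDGE_BASE[best]
    return "I'm not sure. Let me connect you with a representative."

print(answer("How can I file an unemployment claim?"))
```

In that framing, the generative model never answers residents directly; it only informs which vetted entries the knowledge base should gain next, keeping the public-facing responses consistent.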