Study: Government Should Think Carefully About Those Big Plans for Artificial Intelligence

Looking critically at AI could be the difference between streamlining citizen-centric government services and bogging down the processes and people it is meant to help, according to a new white paper.

Government is always being asked to do more with less: less money, less staff, just all around less. That makes the idea of artificial intelligence (AI) a pretty attractive proposition. If a piece of technology could reduce staff workload or walk citizens through a routine process or form, you could effectively multiply a workforce without ever actually adding new people.

But for every good idea, there are caveats, limitations, pitfalls and the desire to push the envelope. While innovating anything in tech is generally a good thing, when it comes to AI in government, there is a fine line to walk between improving a process and potentially making it more convoluted.

Outside of a few key government functions, a new white paper from the Harvard Ash Center for Democratic Governance and Innovation finds that AI could actually increase the burden of government and muddy up the functions it is so desperately trying to improve.

Hila Mehr, a Technology and Democracy fellow at the center, explained that there are five key types of government problems that AI could reasonably help address: resource allocation, large data sets, expert shortages, predictable scenarios, and procedural and diverse data.

And governments have already started moving into these areas. In Arkansas and North Carolina, chatbots are helping those states connect with citizens through Facebook. In Utah and Mississippi, Amazon Alexa skills have been introduced to better connect constituents to the information and services they need.

Unlike Hollywood’s depictions of AI, Mehr said, the real applications for artificial intelligence in a government organization are generally far from “sexy.” The administrative aspects of governing are where tools like these will excel.

When it comes to things like expert shortages, she said she sees AI as a means to support existing staff. In a situation where doctors are struggling to meet the needs of all of their patients, AI could act as a research tool. The same is true of lawyers dealing with thousands of pages of case law, where AI could serve as a research assistant.

“If you’re talking about government offices that are limited in staff and experts," Mehr said, "that’s where AI trained on niche issues could come in.”

But, she warned, AI is not without its problems, chief among them ensuring that it does not perpetuate human bias, whether written in during the programming process or played out through the data it is fed. Rather than rely on AI to make critical decisions, she argues that algorithms, and any decisions made with or as a result of AI, should retain a human component.

“We can’t rely on them to make decisions, so we need that check. The way we have checks in our democracy, we need to have checks on these systems as well, and that’s where the human group or panel of individuals comes in,” Mehr said. “The way that these systems are trained, you can’t always know why they are making the decision they are making, which is why it’s important to not let that be the final decision, because it can be a black box depending on how it is trained, and you want to make sure that it is not running on its own.”
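Her point about keeping a person as the final check lends itself to a simple illustration. The short Python sketch below is not from the white paper; the routing function, the confidence threshold and the queue names are all assumptions, meant only to show one way an agency could treat an algorithm’s output as a suggestion rather than a final decision.

REVIEW_THRESHOLD = 0.90  # assumed cutoff below which a case must get a full human review

def route_decision(suggested_label: str, confidence: float) -> dict:
    """Wrap a model's recommendation so it is never applied directly."""
    needs_full_review = confidence < REVIEW_THRESHOLD
    return {
        "suggested_label": suggested_label,  # the algorithm's recommendation only
        "confidence": confidence,
        "queue": "human_review" if needs_full_review else "human_spot_check",
        "final": False,                      # only a human reviewer can change this
    }

# Example: a hypothetical eligibility model recommends "deny" at 0.72 confidence,
# so the case lands in the full human-review queue instead of being auto-applied.
print(route_decision("deny", 0.72))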

But beyond the worry that the technology might disproportionately impact certain citizens or somehow complicate the larger process, there is the somewhat legitimate fear that implementing AI will mean lost jobs. Mehr said it’s a thought that even she has had.

“On the employee side, I think a lot of people view this, rightly so, as something that could replace them," she added. "I worry about that in my own career, but I know that it is even worse for people who might have administrative roles. But I think early studies have shown that you’re using AI to help people in their work so that they are spending less time doing repetitive tasks and more time doing the actual work that requires a human touch.”

In both her white paper and on the phone, Mehr is careful to advise against going whole hog into AI with the expectation that it can replace costly personnel. Instead, she advocates for the technology as a tool to support and supplement the team that already exists.

As for where the technology could run afoul of human jobs, Mehr advises that government organizations and businesses alike start considering labor practices in advance.

“Inevitably, it will replace some jobs,” she said. “People need to be looking at fair labor practices now, so that they can anticipate these changes to the market and be prepared for them.”

With any blossoming technology, there are barriers to entry and hurdles that must be overcome before a useful tool is in the hands of those best suited to use it. And as with anything, money and resources present a significant challenge, but Mehr said large amounts of data are also needed to get AI, especially learning systems, off the ground successfully.

“If you are talking about simple automation or [answering] a basic set of questions, it shouldn’t take that long. If you are talking about really training an AI system with machine learning, you need a big data set, a very big data set, and you need to train it, not just feed the system data and then it’s ready to go,” she said. “The biggest barriers are time and resources, both in the sense of data and trained individuals to do that work.”
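The distinction Mehr draws between simple automation and a trained learning system is easy to see side by side. The Python sketch below is purely illustrative and assumes the scikit-learn library; the sample questions, service-request categories and canned answers are invented, and a real system would need far more labeled data than this.

# Simple automation: a fixed lookup answers a basic set of questions with no training data.
FAQ = {
    "office hours": "Our offices are open 8 a.m. to 5 p.m., Monday through Friday.",
    "renew license": "License renewals can be completed through the state's online portal.",
}

def faq_answer(question: str) -> str:
    for keyword, answer in FAQ.items():
        if keyword in question.lower():
            return answer
    return "Sorry, I can't help with that yet."

# A learning system: even a toy classifier needs labeled examples and an explicit
# training step before it can route new requests.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

requests = ["my trash wasn't picked up", "pothole on main street",
            "streetlight is out", "missed garbage collection"]
labels = ["sanitation", "roads", "utilities", "sanitation"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(requests, labels)  # the training step that simple automation skips entirely

print(faq_answer("When are your office hours?"))
print(model.predict(["there is a huge pothole near my house"]))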

Eyragon Eidam is the web editor for Government Technology magazine, after previously serving as assistant news editor and covering such topics as legislation, social media and public safety. He can be reached at eeidam@erepublic.com.