Making AI Work for Government: It All Comes Down to Trust

Experts say safe and effective use of artificial intelligence requires transparency, explainability and auditability. Users of the tech also have to trust the people who made it.

If a fool were to look at a car and say, “The purpose of this is to propel you forward,” perhaps they would come to the conclusion that the brakes were unimportant — after all, they play no part in moving the vehicle.

Lest you find yourself desperately driving into a ditch, consider that pieces of a system outside its primary function can nonetheless be crucial to its utility.

This is how I think about artificial intelligence. As we collectively begin to plunge toward ubiquitous AI, let’s not allow ourselves to become so transfixed by the technology’s promise that we forget it also needs brakes in order to be worth anything.

I began this article thinking that the main “brake” AI needed was transparency. But in researching and speaking with several experts, I found two more. And together they even come with a catchy acronym: TEA — transparency, explainability and auditability.

The goal of these three components, and the reason AI won't work without them, is trust. On both sides of the technology, for the person using it and for the person affected by it, success is only achieved through trust. A person using AI in their daily work needs to understand, to varying degrees depending on the task, how the AI arrives at its outputs in order to trust them. A person whose life is affected by AI (imagine an algorithm weighing in on a food stamp application) must be able to trust that the algorithm is running correctly and with minimal bias.

To quickly define each:

Transparency is making it clear when AI is being used and how it’s being used. This might include “model cards” or “fact sheets” outlining basics such as where it pulls data from and its performance metrics.

Explainability is providing narratives, statistics and other tools to help users understand how an algorithm works, especially for specific outputs.

Auditability means the AI tool is designed so users can monitor it for key indicators of success and failure, such as bias and accuracy. This might involve providing data provenance and lineage, allowing one to trace an output back to the data it was based on.

These may be baked into specific solutions in one way or another, or they might be provided via a platform or enterprise tool that works across various algorithms.
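To make the transparency piece concrete, here is a minimal sketch in Python of the kind of fact sheet a team might publish alongside a model. The structure, field names, model name and numbers are illustrative assumptions, not drawn from any official model card standard.

```python
# A minimal, illustrative fact-sheet structure; field names and values are
# hypothetical and do not follow any particular model card standard.
from dataclasses import dataclass, field

@dataclass
class ModelFactSheet:
    name: str
    intended_use: str
    data_sources: list[str]                 # where the model pulls data from
    performance: dict[str, float] = field(default_factory=dict)  # key metrics
    last_audited: str = "never"             # date of the most recent review

card = ModelFactSheet(
    name="benefits-screening-assistant",    # hypothetical model
    intended_use="Flag applications for human review; never auto-deny.",
    data_sources=["2019-2023 application records", "published eligibility rules"],
    performance={"overall_accuracy": 0.91, "false_flag_rate": 0.04},
    last_audited="2024-01-15",
)
print(card)
```

Even a structure this small answers the basic questions the definitions above raise: what the tool is for, what data it draws on and how well it performs.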

Imagine a government employee examining expense reports, using an AI tool to quickly summarize an entire agency’s data for a year. One of the returned numbers doesn’t look right, so now that employee needs to investigate further.

At this point, if the AI is transparent, explainable and auditable, the employee could follow the program’s process of generating the number back to the original data set to find that there were several duplicate expenses that may have been misreported. If it’s a “black box” AI, the employee is stuck with a number they don’t understand and don’t believe they can pass on to anybody else. Worse, they don’t know where to begin looking to find the problem. They might begin thinking that if they had simply performed the task manually, they would have seen the issue earlier and corrected it. In this person’s mind, the AI tool is now a step backward in efficiency.
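To illustrate what auditable might mean in that scenario, here is a minimal sketch, assuming each expense record carries an ID: the summary keeps lineage pointers back to its source rows, so a suspicious total can be traced to the duplicate entries behind it. The records and field layout are invented for the example.

```python
# Lineage-aware summarization sketch: every aggregate remembers which
# source records it came from, so a suspicious number can be traced back.
from collections import defaultdict

expenses = [  # hypothetical records: (record_id, category, amount)
    ("r1", "travel", 420.00),
    ("r2", "travel", 420.00),   # duplicate of r1, possibly misreported
    ("r3", "supplies", 89.50),
]

totals = defaultdict(float)     # category -> summed amount
lineage = defaultdict(list)     # category -> IDs of contributing records
for rec_id, category, amount in expenses:
    totals[category] += amount
    lineage[category].append(rec_id)

# The travel total looks too high, so pull the rows it was built from.
travel_rows = [r for r in expenses if r[0] in lineage["travel"]]
print(totals["travel"])         # 840.0
print(travel_rows)              # r1 and r2 are identical: investigate duplicates
```

With a black-box tool, that second step, pulling up the rows a number was built from, is exactly what the employee cannot do.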

Yet it’s not necessarily true that full, robust tools to achieve transparency, explainability and auditability are crucial in all situations. Some use cases will be relatively low stakes, with obvious and limited data sources.

“Let’s say, for example, you want to come up with a more descriptive paragraph for something. In that particular situation, data lineage and provenance and auditability may not be that important. You’re using it to do something like creative writing,” said Phaedra Boinodiris, IBM Consulting’s global leader for trustworthy AI. “If, however, you want to ask it a question about … how much medicine should I use, it is really important to know, where is this trusted source of data that is being used in order to come up with this determination?”

Nor are transparency, explainability and auditability going to automatically make people trust AI in government. They will help, but as Guy Pearce, a digital transformation veteran currently working as a lead consultant with Alinea International, points out, government has a larger trust problem. The 2024 Edelman Trust Barometer found that, globally, people generally trust business and non-governmental organizations above government — and the U.S. ranks relatively low compared with other countries on trust.

“If the government itself is not deemed trustworthy, then any attempts at transparency, explainability and auditability would be viewed suspiciously,” Pearce wrote in an email. “In other words, any organization presenting these three factors as a basis for trust needs to itself be credible.”

Rather, TEA will empower people to do what they’ve always done with government in the U.S. — examine its work, seek to understand its actions, judge its effectiveness and push it to do better. Indeed, these concepts are already finding their way into policy: President Biden’s 2023 executive order on AI pushed the federal government to examine and report on its algorithms, while New York City’s AI Action Plan called on municipal leaders to consider transparency and explainability when assessing and procuring such tools.

Regulation will continue to advance. Courts or lawmakers may eventually require public agencies to put on a kettle of TEA for their AI tools, but trust will require more than compliance.

Tamara Kneese, director of the Algorithmic Impact Methods Lab at the nonprofit Data & Society, has thrown herself at this problem. Her current work is to figure out how best to conduct algorithmic impact assessments, which seek to understand how algorithms work and how they’re affecting people — including downstream harms that the people using AI might never expect.

She believes transparency, explainability and auditability are crucial for showing whether AI tools actually work, and where they might create new work of their own. But beyond that, an organization seeking to monitor and understand its use of AI needs to be prepared to do something about what it finds.

“Transparency is great. Knowing how the algorithm is enacting forms of discrimination and other harms is really important,” she said. “But then the question also becomes — what do you do about it?”
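Doing something about it presupposes that the harm has been measured in the first place. As a minimal sketch of one measurement an impact assessment might include, the following compares outcome rates across groups; the group labels, numbers and threshold are all illustrative assumptions, and a real audit would need far more context than this.

```python
# Illustrative disparity check: compare approval rates across groups.
# Group labels, outcomes and the 0.1 threshold are hypothetical.
decisions = [  # (group, approved) outcomes from a hypothetical AI screener
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = {}
for group in {g for g, _ in decisions}:
    outcomes = [approved for g, approved in decisions if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

gap = max(rates.values()) - min(rates.values())
print(rates, round(gap, 2))
if gap > 0.1:   # illustrative threshold, not a regulatory standard
    print("Approval-rate gap exceeds threshold; route for human review.")
```

The measurement itself is the easier part; as Kneese notes, the harder question is what the organization does with the result.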

Successful deployment of AI, then, relies on culture shift. It’s not just about using a new tool, and it’s not just about monitoring it for efficacy — it’s about committing to improving it.

Romelia Flores, distinguished engineer and master inventor with IBM Client Engineering, argues that organizations adopting AI should work their way up to bigger and better technologies over time, getting people used to the idea of supplementing their own skills.

“If a system starts helping me make these decisions, I still want to be able to explain things, and … it’s a culture shift, it’s a mind shift of saying, ‘I’m not just going to depend on my brain. I’m going to depend on my brain, but I’m going to depend on the system to help me analyze things and help me make these decisions more effectively,’” Flores said.

Similarly, Boinodiris said, managers need to think about the expectations they’re setting around AI to align the organization’s needs and values with the work.

“If … call center operators are incentivized to strictly be more efficient with their time, like, ‘Get through as many of these as you possibly can, because you’ve got an AI model helping you,’ they’re going to be far less incentivized to scrutinize the outputs of the model and make corrections simply because they’re not being told that that is their measure of success,” she said.

That’s going to take structural change as well as education, followed by real-world experience that results in tangible improvements.

“We have an interesting road ahead of us,” Boinodiris said. “But it all starts with literacy.”

This story originally appeared in the March 2024 issue of Government Technology magazine.
Ben Miller is the associate editor of data and business for Government Technology. His reporting experience includes breaking news, business, community features and technical subjects. He holds a bachelor's degree in journalism from the Reynolds School of Journalism at the University of Nevada, Reno, and lives in Sacramento, Calif.