The paper, "Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees' Practices, Challenges, and Needs," was published in April by the University of Pittsburgh's Pitt Cyber and the Carnegie Mellon University Block Center.
Interviews with 19 local government employees in procurement and governance roles informed the report, painting an on-the-ground picture of procurement and outlining the three key challenges cities must address to improve their practices.
The analysis begins with the premise that most of the public-sector AI systems in use today are developed and acquired from external vendors through public procurement processes; in 2023, more than half of AI tools used by federal agencies were purchased from commercial vendors.
When creating AI tools and systems, private-sector developers may overlook certain risks, such as the potential for bias, which are often hidden. However, there are intentional steps developers can take to make AI systems inclusive and mitigate bias.
The narrow focus on improving efficiency or performance through AI systems can result in adverse effects that disproportionately impact marginalized communities. Instances of bias and privacy violations shed light on issues in the way AI is acquired, used and governed in the public sector, the paper argues. However, it suggests there are ways of reimagining AI procurement to help governments anticipate — and prevent — AI harms. To do so, cities’ AI procurement practices must first be better understood, which is the focus of this analysis.
In some cases, cities' purchasing practices hinder their ability to address AI harms, the paper indicates. For example, cost thresholds allow employees to acquire low- or no-cost AI solutions while circumventing the accountability measures government purchasing traditionally entails.
The paper also examines the lack of standardization in how AI acquisitions move through traditional procurement processes. In some cases, city councils' approval was required; in others, it was not. Contracts were often negotiated and signed before purchasers could access the AI systems, and were rarely renegotiated afterward; in several cases, employees indicated that algorithmic harms only came to light post-deployment. Cities also used separate governance processes for different types of AI procurement, and different cities prioritized values like data privacy to varying degrees.
ADDRESSING THE CHALLENGES
Three challenges must be addressed by cities to improve AI acquisition processes, the paper argues. First, cities must address information gaps between governments and the vendors from which they are purchasing.
“I don’t really know what the risks are to working with AI,” one procurement specialist, whose name wasn’t included, said in the paper. “If I can’t protect us from those risks comfortably, then I’m not doing my job.”
Second, cities will require support in requesting more from AI vendors.
The paper found AI vendors frequently withheld critical information about their systems. Government employees indicated that they lacked leverage in advocating for their city when making agreements with vendors. One employee even cited a contract in which they were limited to only five support calls with the vendor.
And third, cities will require support in sharing and taking on ongoing AI governance responsibilities.
The AI governance process does not end at development or even deployment, the paper argues. As such, clearer guidance may be required on how ongoing AI governance responsibility should be shared between the public and private sectors. And while cities may have limited capacity to take on this governance role, the paper argues that governance activities are best performed by government employees, making capacity-building critical to improving this process.
Some of the city officials interviewed for this paper indicated that their existing procurement practices have already undergone changes to account for AI acquisitions, such as establishing an AI review process.
There is a trend in which "many AI vendors are not cooperating with employees' efforts to understand and mitigate AI harms," the paper concludes. This raises questions about what reforms to city purchasing processes stakeholders might advocate for, how accountability can be increased in these processes, and whether more direct regulation of AI vendors can help protect the public interest.