Ethical AI Procurement Requires Collaboration, Accountability

As state and local governments cautiously pursue AI, they must prioritize ethics, transparency and accountability in procurement to protect public interests and deliver on the technology's potential.

Like many private-sector organizations, government is an end user of AI. However, government agencies serve the entire population and are tasked not just with efficiency or cost savings but also with the protection of important public values. This balance necessitates extra caution when interacting with AI and the vendors behind the technology, which can dramatically improve effectiveness but can also impinge on equity and privacy.

In the U.S., governments have been paying close attention to AI development and its use in the public sector for several years. Following two executive orders in 2019 and 2020 focused on bolstering AI ethics and implementation in the public sector, the Biden administration launched a website to keep the public informed about federal AI initiatives. The National Institute of Standards and Technology further detailed this commitment by releasing the AI Risk Management Framework and related resources. Notably, as a report on AI and governments by legal scholars from Stanford and NYU points out, these guidelines predominantly focus on AI techniques for data analysis, such as conventional machine learning or deep learning with natural language and image data. Addressing biases, enhancing accountability, and prioritizing privacy and cybersecurity remain at the forefront of these efforts.

The emergence of ChatGPT has transformed the relationship between the public sector and AI by accelerating the technology's integration into large-scale organizations like government. In a recent conversation with chief data officers from major U.S. cities, Luis Videgaray, director of the AI Policy for the World Project at the MIT Sloan School of Management, highlighted that as the ecosystem of AI applications grows, so will the complexity of how government interacts with vendors who use generative AI in their products.

Behind this AI ecosystem lies a complex supply chain involving multiple third-party vendors. Boosting algorithmic accountability compels government agencies to improve understanding of back-end systems in order to ensure cybersecurity and privacy protection. Cities will also need to expand their definitions of and approaches to transparency and engagement; Cary Coglianese, professor of law at the University of Pennsylvania, and law clerk Erik Lampmann pointed out in a recent article that public agencies will also need to learn how to conduct periodic audits. Addressing all these issues sequentially can be challenging. Government faces the daunting task of distinguishing specific contributions from different vendors, selecting products that align with public interests and avoiding over-reliance on a particular enterprise solution. With more outsourced technical work, cities will require their own set of outside advisers in order to maintain control and accountability.

To address these challenges, public procurement becomes one of the most critical steps in ensuring the appropriate use of AI. One constructive perspective is to shift from the traditional IT purchasing philosophy to an agile strategy that foresees adaptive testing and fosters long-term collaborations between governments and vendors. Establishing an iterative process for ongoing development between governments and private service providers to tailor products can ensure alignment with public interests.

A prime example of a collaborative approach is the Progressive Delivery Model, a contracting method that prioritizes in-depth collaboration between the contractor and the project owner from the design phase, even before finalizing the project's price and schedule. This model falls under the umbrella term of "progressive contracting," designed to encourage a cooperative environment in a project's early stages. A primary strength of the model is its inherent flexibility: recognizing that products or services may encounter shifting circumstances, it is crafted to address and adapt to these changes effectively. This not only enables the public agency to make well-informed decisions before committing to a long-term agreement, but also heightens transparency around system life cycle costs.

Predicting the future of AI and its societal impacts is a daunting task. An approach that bolsters governments' technical capacity through informed vendor collaborations, including auditing, transparency and the protection of important public values, will provide a foundation for advancement.

Juncheng (Tony) Yang, a doctoral candidate at the Harvard Graduate School of Design and researcher for Data-Smart City Solutions, co-authored this column.

This story appears in the October/November issue of Government Technology magazine.
Stephen Goldsmith is the Derek Bok Professor of the Practice of Urban Policy at Harvard Kennedy School and director of Data-Smart City Solutions at the Bloomberg Center for Cities at Harvard University. He previously served as Deputy Mayor of New York and Mayor of Indianapolis, where he earned a reputation as one of the country's leaders in public-private partnerships, competition and privatization. Stephen was also the chief domestic policy advisor to the George W. Bush campaign in 2000, the Chair of the Corporation for National and Community Service, and the district attorney for Marion County, Indiana from 1979 to 1990. He has written The Power of Social Innovation; Governing by Network: The New Shape of the Public Sector; Putting Faith in Neighborhoods: Making Cities Work through Grassroots Citizenship; The Twenty-First Century City: Resurrecting Urban America; The Responsive City: Engaging Communities through Data-Smart Governance; and A New City O/S.