Could ChatGPT Help Cities Better Vet Potential Vendors?

As government grapples with how to make practical use of generative AI, one avenue for the new technology could be helping cities ensure regulatory compliance from companies bidding for new construction contracts.

(Image: A hand holds up a smartphone with the ChatGPT app open; the ChatGPT welcome screen appears in the background. Shutterstock)
Several years ago, when I was deputy mayor of New York City, the finance commissioner brought me an important proposition: We should ensure that contractors who owe the city money are not allowed to draw building permits for new developments until they resolve their obligations. It seemed to me that this logic should apply to any vendor competing for city business, yet when I proposed it to the commissioner of buildings and the head of procurement, they both expressed reservations, not about the idea but about its practicality. How would they make these judgments without sacrificing their core responsibilities, especially when determined marginal actors might create new LLCs or other vehicles under new names? We made some modifications, but we recognized that the thorough research required exceeded the capacity of departments set up to administer core programs, not to take on the complex new work of tracking down the individuals, lawyers and others who might have a nexus with multiple problematic entities.

These challenges will only grow as new federal infrastructure and environmental funding flows into cities, dramatically expanding opportunities for contractors building roads, railways, bridges, EV charging stations, solar installations and retrofitted buildings. So how will local leaders award contracts and distribute funds efficiently without being taken advantage of by bad actors?

The idea is simple, but the implementation is difficult. Individuals and companies that owe back taxes or have material regulatory infractions should be flagged until the issues are resolved. Why should taxpayers spend money with entities that owe money, or that have been repeatedly cited for code violations or unsafe building practices? Yet it is impossible for public employees to manually comb through every company and owner to identify the entities that should be barred, including those operating under new, related family or company names.
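
To make the idea concrete, here is a minimal sketch of the straightforward part of that flagging logic, assuming hypothetical exports of bid and outstanding-debt records; the vendor names and amounts are invented for illustration. Exact matching like this is the easy part, and it is precisely what renamed or restructured entities slip past.

```python
# A minimal sketch of basic debt-based flagging, using made-up vendor records.
# Exact name matching catches the simple cases; it misses entities that
# re-register under new names, which is the harder problem discussed below.
import pandas as pd

bids = pd.DataFrame({
    "vendor": ["Acme Builders LLC", "Riverside Paving Inc", "JD Development Group LLC"],
    "bid_amount": [1_200_000, 850_000, 2_400_000],
})
debts = pd.DataFrame({
    "vendor": ["Acme Builders LLC", "Harbor Electric Co"],
    "amount_owed": [45_000, 12_500],
})

# Flag any bidder whose exact name appears in the outstanding-debt records.
flagged = bids.merge(debts, on="vendor", how="left")
flagged["hold_permit"] = flagged["amount_owed"].notna()
print(flagged[["vendor", "hold_permit", "amount_owed"]])
```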

Generative AI tools like ChatGPT, if given access to data from departments such as finance, buildings, permitting, sanitation and transportation, could identify bad actors trying to hide behind layers of false names, shell ownership or frequent company changes. In 2021 the U.K.’s National Archives tested several AI tools to see if they could help employees with their duty of selecting “records of enduring value for permanent preservation at the National Archives.” The employees themselves were overwhelmed by the sheer “volume, diversity, complexity and distributed nature of departmental digital records,” but the pilot found that the AI tools could successfully assist with the work, suggesting that similar tools could be employed to spot bad actors buried in massive volumes of documents.
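
As an illustration only, the sketch below shows how an analyst might ask a general-purpose model to compare a bidding applicant against flagged records pulled from departmental systems. The records, prompt and model name are assumptions, not an actual city workflow, and any connection the model suggests would still need human verification.

```python
# Hypothetical sketch: ask an LLM to look for likely connections (shared officers,
# addresses, successor companies) between a new applicant and flagged entities.
# The records and the model name are illustrative; output is advisory only.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

flagged_entities = [  # stand-ins for records from finance and buildings systems
    {"name": "Acme Builders LLC", "officer": "J. Doe", "address": "12 Main St",
     "issue": "unpaid property tax"},
    {"name": "Main Street Construction Corp", "officer": "Jane Doe", "address": "12 Main St",
     "issue": "repeated scaffold violations"},
]
applicant = {"name": "JD Development Group LLC", "officer": "Jane Doe",
             "address": "12 Main Street"}

prompt = (
    "You are helping a city procurement office. Compare the bidding applicant to the "
    "flagged entities and explain any likely connections, such as shared officers, "
    "shared addresses or successor companies.\n"
    f"Flagged entities: {json.dumps(flagged_entities)}\n"
    f"Applicant: {json.dumps(applicant)}"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)  # a human reviewer verifies any suggested flag
```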

In 2018, OpenAI introduced “Improving Language Understanding by Generative Pre-Training,” describing an advanced natural language processing (NLP) model based on the transformer architecture. Compared to previous NLP models, which relied mostly on supervised learning from manually labeled data, the OpenAI approach used an unsupervised generative “pre-training” stage to set the model’s initial parameters and a supervised stage to adapt those parameters to a target task. This significantly reduced the need for human oversight and time-consuming manual labeling.
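
The sketch below illustrates that two-stage idea with an open-source transformer: the unsupervised pre-training has already been done on unlabeled text, and a brief supervised step adapts the model to a toy flagging task. The model choice and the two labeled examples are illustrative assumptions, not a recommendation.

```python
# Two-stage idea in miniature: start from a transformer pre-trained on unlabeled
# text (stage 1), then adapt it with a small supervised step (stage 2).
# "distilgpt2" and the toy labels are illustrative choices only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-style tokenizers lack a pad token

model = AutoModelForSequenceClassification.from_pretrained(
    "distilgpt2", num_labels=2, pad_token_id=tokenizer.pad_token_id
)

# Tiny labeled dataset standing in for the supervised adaptation stage.
texts = ["Permit revoked after repeated safety violations.",
         "Contract completed on schedule with no citations."]
labels = torch.tensor([1, 0])  # 1 = flag for review, 0 = no issue

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for _ in range(3):  # a few gradient steps on the labeled examples
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```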

In just the few months since its launch in November 2022, ChatGPT has reshaped the public’s understanding of generative AI tools. Various AI-backed tools are emerging to answer queries in specific fields. For instance, Census GPT enables individuals to search the Census database with user-friendly queries, such as finding the neighborhoods with the highest average household income and displaying the percentage of each race in those areas.
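
A rough sketch of that query pattern, with an assumed table and model rather than Census GPT’s actual setup, might look like the following: a model translates the plain-English question into SQL, which is then run against a local database.

```python
# Hypothetical natural-language-to-SQL sketch. The table, rows and model name are
# made up for illustration; this is not how Census GPT is actually built.
import sqlite3
from openai import OpenAI

# Build a tiny in-memory table standing in for a census extract.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE neighborhood_stats (neighborhood TEXT, avg_household_income REAL)")
conn.executemany("INSERT INTO neighborhood_stats VALUES (?, ?)",
                 [("Riverdale", 91_000), ("Harborview", 68_500), ("Midtown East", 120_300)])

question = "Which neighborhood has the highest average household income?"
schema = "neighborhood_stats(neighborhood TEXT, avg_household_income REAL)"

client = OpenAI()  # expects OPENAI_API_KEY in the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Given the SQLite table {schema}, write a single SQL query "
                          f"that answers: {question}. Return only the SQL, no explanation."}],
)

# Light cleanup in case the model wraps its answer in a code fence.
sql = resp.choices[0].message.content.strip().strip("`")
if sql.lower().startswith("sql"):
    sql = sql[3:]

print(sql)
print(conn.execute(sql).fetchall())
```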

Generative AI can help city departments ensure regulatory compliance across a wide range of activities, including building permits and construction contracts. Cities have a duty to award bids to quality companies rather than to ones whose shortcuts may produce a cheaper bid but a far worse result. Another, less tangible benefit is an increase in trust: By employing generative AI to seek out red flags and weed out poor-performing contractors, cities can show residents that they value safety and are putting better safeguards in place against bad actors.

This story originally appeared in the June issue of Government Technology magazine. Click here to view the full digital edition online.
Stephen Goldsmith is the Derek Bok Professor of the Practice of Urban Policy at Harvard Kennedy School and director of Data-Smart City Solutions at the Bloomberg Center for Cities at Harvard University. He previously served as Deputy Mayor of New York and Mayor of Indianapolis, where he earned a reputation as one of the country's leaders in public-private partnerships, competition and privatization. Stephen was also the chief domestic policy advisor to the George W. Bush campaign in 2000, the Chair of the Corporation for National and Community Service, and the district attorney for Marion County, Indiana from 1979 to 1990. He has written The Power of Social Innovation; Governing by Network: The New Shape of the Public Sector; Putting Faith in Neighborhoods: Making Cities Work through Grassroots Citizenship; The Twenty-First Century City: Resurrecting Urban America; The Responsive City: Engaging Communities through Data-Smart Governance; and A New City O/S.