Where to Start With AI? Cities and States Offer Use Cases

Building an AI program is a daunting proposition, but government has to start somewhere. From strengthening cybersecurity to improving 311, a handful of early adopters are finding safe and practical uses.

Speaking in November at a global event on AI held in London, Vice President Kamala Harris made the importance of the present moment very clear. “The actions we take today will lay the groundwork for how AI will be used in the years to come.”

It is, to be sure, a pivotal moment. With the rise of generative AI (GenAI), government agencies — local, state and federal — are striving to harness the tech in safe and productive ways, trying to establish guidance and policies while almost simultaneously finding new applications for the evolving tools.

But for some, both in and out of the public sector, AI can feel like a complex and intimidating subject, with high stakes and no clear set of best practices. There are, however, already some good examples of how to proceed, and at all levels of government.

President Joe Biden signed an executive order in October to regulate AI, setting new standards focused on security and privacy. The long-anticipated action is the most sweeping yet from the White House, building on earlier nonbinding guidance like the Blueprint for an AI Bill of Rights. But because the order did not arrive until relatively late in 2023, many smaller government entities, not wanting to be left behind, had already charted their own paths forward.

STATES


A handful of states, including Pennsylvania, Wisconsin and Virginia, issued their own executive orders on AI. Other states, like Utah, Washington and Kansas, created GenAI policies, while New Jersey did both.

As New Jersey Chief AI Strategist Beth Noveck said via email, Gov. Phil Murphy has taken a forward-thinking approach to GenAI because it offers "tremendous potential" for public-sector service delivery.

Noveck also noted that the state is using AI-based text-to-speech features for some automated call center transactions, and GenAI tools have been used by staff to help simplify language on unemployment insurance claimant communications.
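As a rough sketch of how a plain-language workflow like the one Noveck describes might be wired up (New Jersey has not published its implementation, so the model name, prompt and use of the OpenAI Python client below are assumptions for illustration), a small staff tool could wrap a single LLM call:

# simplify_notice.py -- hypothetical sketch, not New Jersey's actual tooling.
# Rewrites an unemployment insurance notice in plainer language; a person
# still reviews the draft before anything is sent to a claimant.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify(notice_text: str) -> str:
    """Ask the model to rewrite agency text at roughly an 8th-grade level."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": ("Rewrite government notices in plain language at "
                         "about an 8th-grade reading level. Keep every date, "
                         "dollar amount and deadline exactly as written.")},
            {"role": "user", "content": notice_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(simplify("Your claim has been adjudicated and benefits are payable "
                   "contingent upon continued certification of eligibility."))

In a sketch like this, the model output is only a draft; staff review remains part of the workflow before any text reaches a claimant.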

“First: listen, learn and experiment,” Noveck advised other states looking to adopt AI, underlining the importance of balancing the use of GenAI with key principles of existing privacy and security policies.

In another early-adopter state, Utah, CIO Alan Fuller said that although the "hype" around the tools is currently at a peak, AI has been around for years. In fact, in 2018, his state created a Center of Excellence in AI, nearly five years before GenAI tools became widely available to the public. By the end of 2023, the state had enacted an enterprise GenAI policy to govern responsible use of the technology.

“I think, this year in 2024, we’ll see a lot more adoption as people figure out how to really make it useful,” Fuller said.

Utah's approach to generative AI
One goal in Utah was to start by outlining prohibited actions, ensuring that state proprietary data and personally identifiable information would not be entered into public GenAI models. The state's policy was created early to address those risks. Looking forward, Fuller said the state wants to put out a positive framework outlining best practices for safe use.

The use of AI in the state has already resulted in tangible savings of employee time, he said. And while much of the conversation around GenAI currently involves a mix of benefits and major risks, Fuller feels confident that Utah’s approach will maximize benefits while minimizing risks. And, he noted, by exploring this technology in a safe environment, state government there is likely to find new use cases.

The state also uses AI in threat detection. Utah CISO Phil Bates said that as an early adopter of Google's Chronicle security operations platform, the state saw an improvement in the quality and actionability of its alerts. With 2 terabytes of logged data to scan each day, AI helps the state sort through substantial quantities of information and take proactive steps to mitigate risk. The next step, according to Bates, is to integrate the technology with Utah's other systems so it can take automated actions in response, such as blocking an IP address, which is currently a manual process.
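The integration Bates describes is, in the abstract, a small service that receives an alert from the security platform and applies a firewall rule when confidence is high enough. The sketch below is a generic, hypothetical illustration of that pattern only; it does not use Chronicle's actual API, and the alert fields, threshold and firewall rule are assumptions.

# auto_block.py -- hypothetical sketch of automating the manual step Bates
# mentions: blocking an IP address when an alert is scored as high confidence.
# The alert schema, threshold and firewall command are illustrative assumptions.
import json
import subprocess

CONFIDENCE_THRESHOLD = 0.9  # only act automatically on high-confidence alerts

def block_ip(ip: str) -> None:
    """Drop traffic from the offending address (assumes an nftables
    'inet filter' table with an 'input' chain already exists)."""
    subprocess.run(
        ["nft", "add", "rule", "inet", "filter", "input",
         "ip", "saddr", ip, "drop"],
        check=True,
    )
    print(f"Blocked {ip}")

def handle_alert(raw_alert: str) -> None:
    alert = json.loads(raw_alert)
    source_ip = alert.get("source_ip")
    confidence = alert.get("confidence", 0.0)
    if source_ip and confidence >= CONFIDENCE_THRESHOLD:
        block_ip(source_ip)
    else:
        # anything below the threshold stays in the analysts' review queue
        print(f"Queued for manual review: {alert.get('id')}")

if __name__ == "__main__":
    handle_alert('{"id": "demo-1", "source_ip": "203.0.113.7", "confidence": 0.95}')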

“If you don’t get ahead of it and provide policy guidance and direction, [agencies are] going to go there anyway,” Bates said. “If you don’t guide them, they’ll get there on their own, but they’ll make mistakes doing it.”

LOCAL GOVERNMENT


At the local level, city and county governments are already advancing AI work as well.

Santa Cruz County, Calif., is one example of a local government that has been an early and effective adopter of an AI policy. According to county Supervisor Zach Friend, officials there saw an opportunity to do at the local level something that was lacking at the federal and state levels. Now, Friend hopes the work done in Santa Cruz can serve as a model for other governments.

In fact, the county’s work has already proven to be an influential model, as Friend recently visited the White House for the release of the federal executive order. During this visit, he shared his perspective on how the federal government can work with counties to help shape the AI regulatory landscape.

Friend is part of the National Association of Counties’ AI Exploratory Committee, which he said has helped amplify the county voice at the federal level. Friend added that he has engaged with the federal government throughout the process of creating a county policy to ensure plans were aligned.

When developing the county's regulatory framework, the initial focus was on a set of general values that could hold up as AI continued to evolve rapidly. The goal was to harness the tools' potential rather than ban them, which ultimately led to the county's policy.

In the process of creating this policy, Friend said it became clear that AI tools were already being used in the county in daily workflows — in fact, in a period of several months, the county determined there were over 36,000 unique use cases. He said that the question then became how that usage could be elevated while implementing guardrails to secure protected information.

“And it does seem like that’s where the tide is trying to go: How do you harness the innovation without stifling the innovation through overregulation?” Friend said.

For other government agencies looking to adopt AI and regulate use within their jurisdiction, Friend suggested looking to governments that have had success. He suggested “operating from an understanding that your employees are already using it, your community’s already using it … and it’s only going to become more prevalent.”

Friend stressed that accepting that this technology isn’t going anywhere gives local governments an opportunity to shape the debate, both in their state and more broadly.

Boston's approach to AI
Boston has taken a similar approach to policy. The city was another early adopter of GenAI guidance among U.S. localities.

Boston CIO Santiago Garces said there were a few reasons for this. First, it was clear that GenAI tools were becoming increasingly widespread. Second, the value of the technology was evident. Third, the risks needed to be addressed so that the tools were not used in potentially adversarial ways.

He also credited the support of city leadership, including Mayor Michelle Wu, who wanted to embrace innovation while protecting constituents served by the city. For Boston, this meant striking a balance between the value of AI and the importance of human oversight — which Garces described as an example of the historically complicated relationship between art and technology.

“Ultimately, how and when the output of these tools gets used is the responsibility of people that have been trusted by the public to be stewards of the public good,” Garces said.

The city has already begun using GenAI, too. So far, Boston has primarily used it to draft job descriptions and similar documents. Now, the city is experimenting with new ways the tools could be used, including improving the experience of accessing constituent services and analyzing 311 requests. In addition, Garces noted that a lot of the city's data is in a raw form that is not easily consumed by members of the public; the city is now exploring how it can use GenAI to summarize that data for easier consumption. For example, GenAI could help summarize the issues the City Council has voted on.
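A first pass at the 311 summarization Garces describes could be as simple as aggregating raw service requests and asking a model for a plain-language recap. The sketch below is hypothetical; the column names, model choice and prompt are assumptions, not Boston's actual pipeline.

# summarize_311.py -- hypothetical sketch of turning raw 311 records into a
# short plain-language summary for residents. Column names, model choice and
# prompt are illustrative assumptions.
import csv
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_311(csv_path: str) -> str:
    # Aggregate first, so the model works from totals rather than raw rows.
    with open(csv_path, newline="") as f:
        counts = Counter(row["category"] for row in csv.DictReader(f))
    top = ", ".join(f"{cat}: {n}" for cat, n in counts.most_common(10))

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": ("Write a short, plain-language summary of this "
                        "month's 311 activity for city residents. "
                        f"Request counts by category: {top}"),
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_311("311_requests.csv"))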

The city is evaluating the impact of AI on a case-by-case basis, too, focusing on three primary lenses: value, risk and cost. Garces said experimentation has enabled a greater understanding of the difference between perceived risk and real risk. And he noted that because of the public-serving role of government, it is important to be transparent and intentional to maintain trust.

Other cities looking to adopt AI, he advised, would do well to remain grounded in human values: focus on what will benefit the people in city government and the people they serve.

SMALL CITIES


While much of that advice applies to smaller cities as well, there are some evolving examples unique to those jurisdictions, too.

One example of a small city that has embraced AI is Wentzville, Mo. Although the city has a population under 60,000, Strategic Communications Officer Kara Roberson noted that it is the fastest-growing city in the state — and for that reason, the city tends to be an early adopter of new tech.

Wentzville, Mo.'s approach to AI
“We’re always looking for new tools to capitalize on that, and AI seems like one we could do pretty easily,” Roberson said.

The city is now using GenAI to automate certain pieces of communication, including drafts for automated responses, which she said will free up city staff for other work. Thus far, the city has offered in-person workshops and training, and it has distributed electronic trainings to help people implement the tools responsibly.

Although it is still early in the process and there is a lot that the city plans to explore, she said the impact has already been tangible: For the communications team, the city has seen a significant return on investment in terms of time saved rewriting content.

Roberson said that while implementing AI, it is important to maintain human oversight that ensures all content matches the city’s messaging and tone.

The city has now adopted a policy to govern AI use, but that policy was not yet in place when the pilot of the tools started. Roberson said city staff leaned on existing guidance, such as the ethics policy, during the initial experimentation period. Once the city could see if and how the tools fit into the workplace, it could create a policy around that usage. She said the city also looked to larger cities like Boston as a model.

She noted that some hesitation toward AI stems from concerns about job security, and she suggested looking at AI instead as a digital assistant that could save the city the time and money of hiring a new part-time position.

She acknowledged that the general public has some fear around rapid AI advances, but said starting small and demonstrating value can help people overcome it.

"I think first and foremost is to just not be afraid," Roberson said. "It's not a scary space."

Public entities at all levels across the country are now advancing AI. From Gilbert, Ariz., a city that just appointed Eugene Mejia to serve as its first chief AI strategy and transformation officer, to Southeastern Pennsylvania Transportation Authority CIO Emily Yates, who is now serving as a co-chair on the MetroLab Network’s GenAI for Local Governments Task Force, government leaders are eager to understand how AI can improve their work processes.

And as experts across different levels of government have emphasized, the best way to gain an understanding of the tech and how it will impact constituents is to start exploring it.

This story originally appeared in the March 2024 issue of Government Technology magazine.
Julia Edinger is a staff writer for Government Technology. She has a bachelor's degree in English from the University of Toledo and has since worked in publishing and media. She's currently located in Southern California.