How Should Government Guide the Use of Generative AI?

As governments grapple with how to roll out generative AI — or whether they even should — policies in Seattle, New Jersey and California aim to be broad, easy to understand and relevant in the face of change.

For some, the rapid rise of artificial intelligence is a cause of anxiety. Will AI perpetuate bias, surface inaccurate information or automate people out of their jobs? As New Jersey’s first chief AI strategist, Beth Simone Noveck sees this moment not as a cause for alarm, but rather as an opportunity.

“A lot of places are being silent, leaving public servants wondering what they should be doing,” she said. In New Jersey, “we have said: You should go out and use these things, and please ask about them. Let’s talk about them. Let’s have a conversation.”

With the first iteration of the state’s AI policy, released last November, Noveck is looking to get that conversation going. And she’s not alone. Nationwide, government IT leaders are formulating policies to encourage exploration of emerging capabilities, especially around generative AI, while still being mindful of potential risks.

Experts say there is an urgent need for government to establish guardrails, given the fast-growing availability of generative AI applications. “These tools are already publicly available and are being adopted or used privately,” said Mehtab Khan, fellow at the Harvard Berkman Klein Center. Given the rapid adoption, “you don’t have any control, besides just having internal policies and principles.”

Here, we’ll look at some of those emerging policies and explore what’s at stake: Why policies are needed now and how things are likely to evolve as generative AI continues to take the world by storm.

REGULATING CITY USES


As the first order of business in Seattle, Interim CTO Jim Loter has been setting up guidelines for staff to get ahead of any improper uses.

While the IT team had already been exploring AI for a couple years, “the explosive growth of generative AI … was really the catalyst for us to take the concerns that had been highlighted about artificial intelligence more seriously,” he said.

“We saw that with the growth of tools like ChatGPT, it was suddenly very, very likely that city employees were going to want to use this technology to conduct their day-to-day business,” he said. “We wanted to open up a conversation about that.”

To formulate policy, the city assembled a Generative AI Policy Advisory Team that included leaders from the University of Washington, the Allen Institute for AI, members of the city’s Community Technology Advisory Board (CTAB) and Seattle Information Technology employees.

The team followed the same model the city had used when crafting a data privacy policy a few years earlier, a strategy that called for a statement of general guidance as a starting point.

“We found that the principle-based approach was very effective, because it didn’t require us to anticipate every single potential use case that could emerge in this area,” Loter said. “It allowed us to communicate to city employees: Go ahead and use your discretion. When you are making a decision about whether to use or not use a particular system, keep these principles in mind.”

To that end, the AI policy articulates seven governing principles, touching on areas such as transparency and accountability; validity and reliability; bias and fairness; privacy; and explainability.

The policy also reiterates the longstanding rule that the IT department is the only body authorized to purchase information technology on behalf of the city. It’s all too easy to access AI tools online, Loter said, and the IT team wants to ensure city employees don’t bypass the usual controls around things like privacy and security.

This refers to “even something as basic as creating a free account on ChatGPT,” he said. “If you type your Seattle.gov address into that and then start typing in your prompts and using the results of that, you are creating unnecessary risks.”

Beth Simone Noveck was New Jersey’s chief innovation officer when the state released its AI policy in November 2023. In January 2024, she was named chief AI strategist, as Gov. Phil Murphy called for an “AI moonshot.”

EFFORTS UNDERWAY


In California, meanwhile, state CIO and Department of Technology Director Liana Bailey-Crimmins likewise describes an emerging AI policy that will at first deliver general guidance in the form of administrative policies.

“We recognize the new and unique nature of GenAI and are working closely with industry and academic experts to understand the risks and benefits of this technology and where additional policy may be needed,” she said in an email to Government Technology.

In these early days, those policy efforts are being shaped by an executive order from California Gov. Gavin Newsom. Introduced last September, it offers a framework for the state to understand both the risks and the benefits of generative AI, offering direction for state agencies as they formulate their approaches. The goal, according to the document, is to create “a safe, ethical and responsible innovation ecosystem inside state government.”

Those guidelines will eventually cover public-sector procurement, uses and required training for the application of generative AI. The emerging policy, Bailey-Crimmins said, will build on early guidance from several sources at the federal level, including NIST’s AI Risk Management Framework, the White House’s Blueprint for an AI Bill of Rights, and the White House’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence.

California will also enlist academic and industry experts as its policies evolve, to “ensure we are taking a balanced and insightful approach to considering current and future policy needs,” she said.

MAKING AI ACTIONABLE


In New Jersey, Gov. Phil Murphy has established an AI task force, which is often the first step policymakers take when confronting generative AI. Noveck co-chairs that task force, along with state CTO Chris Rein and Tim Sullivan, CEO of the New Jersey Economic Development Authority.

In November 2023, the task force issued its first policy announcement. The policy sets a goal for government employees, calling on them to use AI “to deliver enhanced services and products efficiently, safely and equitably to all residents.”

It touches on inclusion and respect (“AI should uplift communities”). It calls for transparency (“We must disclose that responsibly and share our workflow freely with other public servants and with the public”). And it encourages responsible experimentation (“We understand risks may not be fully apparent initially and commit to proactive risk assessment”).

The point of all this is to get the ball rolling. “These are powerful new technologies that can really help us do our job better,” Noveck said.

In exploring that potential, “you should be consistent with safeguards, such as: Don’t rely exclusively on information that they give you. Make sure to disclose when you use them. Don’t put personally identifiable information, yours or anyone else’s, into these tools,” she said.
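The policy stops at guidance, but the PII safeguard in particular lends itself to tooling. Below is a minimal, hypothetical Python sketch (not anything New Jersey prescribes) of scrubbing obvious PII patterns from a prompt before it leaves an agency machine; the patterns and placeholder labels are illustrative assumptions, and real redaction would need far broader coverage.

```python
import re

# Hypothetical illustration: crude regex-based redaction of common PII
# patterns before a prompt is sent to a generative AI tool. Real
# deployments would need far more robust detection (names, addresses,
# case numbers), but the principle is the same: scrub first, submit second.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]\d{4}\b"),
}

def scrub_pii(prompt: str) -> str:
    """Replace likely PII in a prompt with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Summarize this complaint from jane.doe@example.com, SSN 123-45-6789."
print(scrub_pii(raw))
# Summarize this complaint from [EMAIL REDACTED], SSN [SSN REDACTED].
```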

The policy gives specific examples of how AI tools might be used, along with guidance for doing so responsibly. “We’ve also accompanied that with a video primer,” Noveck said. “We actually show you: Here’s where you log on, here’s what you should do, here’s how you might summarize something or translate something, or simplify language.”

This all aims “to help people start to use these technologies to do things to benefit residents,” she said.

Even as Noveck focuses on the potential benefits of AI, she joins others in acknowledging the risks that government faces as it seeks to make best use of this powerful new capability.

WHAT'S AT STAKE


First and foremost, AI has shown a tremendous capacity for being convincingly and confidently wrong.

“These tools sometimes hallucinate,” Noveck said. “They very authoritatively spit out information which may be incorrect, and therefore you need to double check any text. Never just use what you get from a tool like ChatGPT or Bard without checking it first.”


Just as a government employee wouldn’t post an intern’s work for public consumption without first looking it over, she said, AI results should be vetted by responsible individuals before being used to make decisions or released into the wild.

“Don’t go publishing things, especially things that residents are going to rely on, if they haven’t been checked and proofed,” she said.
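That review step is procedural rather than technical, but a publishing workflow could enforce it. The sketch below is a hypothetical illustration, assuming a simple in-house draft object: AI-generated content is blocked until a named human reviewer signs off.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical human-in-the-loop publish gate: content flagged as
# AI-generated cannot go out without a named human reviewer.
@dataclass
class Draft:
    body: str
    ai_generated: bool
    reviewed_by: Optional[str] = None  # name of the person who vetted the text

def publish(draft: Draft) -> None:
    """Refuse to publish unvetted AI-generated content."""
    if draft.ai_generated and not draft.reviewed_by:
        raise ValueError("AI-generated content requires human review before publication.")
    print(f"Published: {draft.body[:40]}...")

publish(Draft("Notice: road closures this weekend ...", ai_generated=True, reviewed_by="J. Smith"))
publish(Draft("Unvetted AI text", ai_generated=True))  # raises ValueError
```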

California, meanwhile, is looking at a broad range of risks.

“We are focusing on key areas such as procurement guidelines, risk management and security to ensure that potential pitfalls such as privacy, misuse and biased outcomes are properly assessed, monitored and mitigated,” Bailey-Crimmins said. The goal? “Safe, secure and equitable outcomes.”

In Seattle, the risk evaluation “started with a very simple question: Is this really new? Is this just marketing hand-waving that’s making it seem new, or is this really something that presents novel risks?” Loter said.

Short answer: It’s new. While there’s always a chance of software performing incorrectly, “we couldn’t really think of another example of software out there that was literally producing and generating new content that could essentially just be copied and pasted, and that sounded like it was produced by a person,” he said. That generative risk is high on the list of concerns.

“If you use ChatGPT to publish something on a city website, and that something ends up being inaccurate — or in a worst-case scenario actually inflammatory or in some other way harmful to the city — it doesn’t matter that ChatGPT wrote that,” he said. “Whoever made the decision to copy and paste it and publish it on the website is ultimately responsible for that.”

The generative risk in turn raises an ancillary concern: Will AI introduce new liabilities?

Say, for instance, a city employee uses a GenAI tool to research a question. “Now you’ve brought it into the ecosystem, you’ve provided it with city information that now lives there. It could be requested as part of a public disclosure request — and we don’t know that it’s there,” he said.

In that case, “we run the risk of failing to be responsive to a public records request,” he said.
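Loter describes the problem rather than a fix, but one plausible mitigation (a hypothetical illustration, not a stated Seattle practice) is an agency-held log of every prompt and response, so records officers can actually search what employees have sent to these tools. A minimal Python sketch:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical sketch: append every generative AI interaction to an
# agency-held log so a records officer can search it when a public
# disclosure request arrives. LOG_PATH is an assumed retention location.
LOG_PATH = Path("genai_interactions.csv")

def log_interaction(user: str, tool: str, prompt: str, response: str) -> None:
    """Record one prompt/response pair with a UTC timestamp."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "user", "tool", "prompt", "response"])
        writer.writerow([
            datetime.now(timezone.utc).isoformat(), user, tool, prompt, response,
        ])

log_interaction("j.smith", "ChatGPT", "Summarize this ordinance ...", "The ordinance ...")
```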

WHERE WE'RE HEADED


All this is likely just the tip of the iceberg. Generative AI is very much a moving target, and government IT leaders expect the risks — and their policy responses — to continue evolving.

“AI technologies are growing and changing rapidly,” Bailey-Crimmins said. “We already see AI capabilities being added to many commonly used productivity and collaboration tools, so incorporating this technology into the state workplace is inevitable.”

That being the case, “we expect California’s AI- and GenAI-related standards, guidelines and policies to change as these technologies change,” she said.

In New Jersey, Noveck envisions more specific policies around internal use. “How are we in government using AI technology, and what are the rules of the road there?” she said. Emerging policy iterations will look toward “ensuring that the tools do what they say they’re going to do, that we have adequate transparency into how they work, that we have ways of testing them.”

In particular, forthcoming policy will emphasize the importance of human oversight. The policy team will also be looking at early use cases, to determine where guidelines are needed most. “The purpose of a policy is to give people guidance about what to do, and that comes from understanding what the questions are,” she said.

In Seattle, the city’s next step will be to establish a structure for driving practical application of the broad guiding principles.

“We are going to create what’s called a community of practice for city employees who have an interest in AI technologies,” to help them implement the principles, Loter said. They’ll exchange information and develop best practices. This in turn will help policymakers “better understand how people want to be or should be using this technology in real-life use cases.”

The IT team will also be evolving policy to guide its own buying process around AI tools. That will likely start with a clear definition: The next round of policy will spell out exactly what is or isn’t a GenAI tool, both to help IT in its assessments, and also to guide city employees as they seek to understand where and how the AI policy should be applied.

Overall, the policy will continue to define high-level rules of engagement, without getting too deep into the nuts and bolts. “As soon as you get down to the granular level and start to try to legislate particular products or versions of software, or even particular use cases, you start playing this giant whack-a-mole game with reality,” Loter said. “We didn’t want to get into that.”

The hallmark of good policy “is that it’s widely applicable, it’s easy to follow and it doesn’t have to be changed every time a new product gets released onto the market. That’s what we are shooting for,” he said.

This story originally appeared in the March 2024 issue of Government Technology magazine.
Adam Stone is a contributing writer for Government Technology magazine.