Map: How Are State and Local Governments Navigating AI Regulation?

President Joe Biden signed an executive order to regulate artificial intelligence, but how are state and local governments handling the technology? Many are exploring how AI can enhance services, while others are temporarily banning its use.

Dabble in generative AI now, or wait and see what happens when others do it first?

That’s the question many government leaders are tasked with answering, and so far there doesn’t seem to be a breakout tactic among states or local governments.

The topic has generated a lot of buzz among lawmakers. According to the National Conference of State Legislatures, at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills in 2023, and 15 states and Puerto Rico adopted resolutions or enacted legislation.

A Government Technology analysis of state and local government strategies toward AI revealed a few trends.

GOVERNORS ARE USING EXECUTIVE POWERS TO SET AI POLICY

Since August, governors in California, Virginia, Wisconsin, Oklahoma, Pennsylvania and New Jersey have announced executive orders centered on exploring AI.

Executive orders that bypass legislators are usually reserved for public health emergencies or disasters. In this case, however, most governors have used their powers to mandate that the state create a task force to harness AI technology and develop recommendations for its ethical use.

LOCAL GOVERNMENTS ARE MAKING THEIR OWN AI RULES

The governments of Seattle, New York City, San Jose, Calif., and Santa Cruz County have all issued independent policies or guidelines for how their employees should use AI on the job.

These frameworks center on responsible use of AI: avoiding the sharing of sensitive information and steering clear of risks that could jeopardize government operations or cause unintended negative consequences for constituents.

Most of the local governments that enacted their own policies are in states that had not yet created statewide mandates or guidelines at the time.

SOME AGENCIES ARE TAKING A CONSERVATIVE APPROACH TO AI

While many states have created task forces and research groups to study AI and expand its ethical use in government functions, at least one is taking a “wait and see” approach that restricts employees from experimenting with AI on the job.

In June, Maine Information Technology (MaineIT) directed all executive branch state agencies not to use generative AI on any device connected to the state’s network for at least six months. The ban exempts chatbot technology already approved for use by MaineIT, and instead targets ChatGPT and any other software that generates images, music, computer code, voice simulations and art.

According to the moratorium, “This will allow for a holistic risk assessment to be conducted, as well as the development of policies and responsible frameworks governing the potential use of this technology.”

North Dakota was one of the first states to pass legislation related to AI at the start of the year, but the law differs from what other states have experimented with since then. The state’s emergency measure stipulates that AI is not a person.

A handful of states have attempted to introduce new laws governing the use of AI by government agencies, but have yet to finalize those plans and put them into action. Several bills that would have created AI task forces or research groups didn’t make it much further than their initial introduction in the legislature.

Nikki Davidson is a data reporter for Government Technology. She’s covered government and technology news as a video, newspaper, magazine and digital journalist for media outlets across the country. She’s based in Monterey, Calif.