A Look at San Francisco’s AI Rules for City Workers

The guidelines direct city employees to fact check AI-generated content and disclose when they use the technology, which can write and summarize emails and other documents as well as generate images and videos.

The Golden Gate Bridge with San Francisco behind it.
(TNS) — San Francisco has released its first guidelines on how city employees should use artificial intelligence at work, more than a year after OpenAI first rocked the world with the ChatGPT chatbot that reinforced San Francisco's dominance in the burgeoning AI industry.

The guidelines direct city employees always to fact check AI-generated content and to disclose when they use the technology, which can be harnessed to write and summarize emails and other documents as well as to generate images and videos.

The guidelines also warn city employees away from entering sensitive information into public generative AI tools. That information can be seen by companies like OpenAI that make the technology — and potentially the public.

The San Francisco rules, released in December, come as New York state this week began encouraging employees to use AI tools, a step California Gov. Gavin Newsom took in September via executive order. Separately, Pennsylvania Gov. Josh Shapiro announced Tuesday that his administration had launched a first-of-its-kind pilot program for state government employees to use ChatGPT in their work.

The San Francisco guidelines, which the city called "preliminary," point out that generative AI programs like ChatGPT can be useful in drafting emails, adjusting written levels of formality, and automating repetitive tasks such as coding.

The AI plan, released by the City Administrator's Office in consultation with various city technology departments, encourages San Francisco city employees to experiment with the technology, but also warns that AI programs can reflect the biases inherent in their training data and should be used with caution.

The city said existing AI plans from San Jose, Boston, the state of California, the White House, and the United Kingdom informed its own guidelines.

Kevin Frazier, a law professor at St. Thomas University in Florida who studies AI law and regulation, said in an email that San Francisco's guidelines "tiptoe in the right direction" but ultimately fail to provide enough clarity to employees on when and how to use the technology.

Frazier compared the city's incremental approach to AI with its comparatively "bold" ban on government agencies using facial recognition technology. He said it wasn't clear why, in this case, the city was encouraging employees to experiment with generative AI tools. "Minimally, City employees should receive formal training on when and to what extent to use tools like ChatGPT on the job," Frazier said.

"The City deserves credit for recognizing a point that often goes undiscussed — everyone can, will, and should (in certain contexts) use Generative AI tools," Frazier told the Chronicle.

The city said next steps in developing the guidelines would include a comprehensive survey of the ways city departments are using and might use AI, the "creation of a user community," and further consulting with AI experts.

Instead of encouraging unguided experimentation, Frazier said, city officials would be better served by banning the use of generative AI tools that have yet to undergo scrutiny by third-party experts, an approach that "would send a signal that cities, counties, and states can use their substantial budgets and public influence to meaningfully shape the AI debate and push AI development towards the public interest."

Frazier noted that while the guidelines were only preliminary, "the risks posed by Generative AI tools being used in public functions warrants the City issuing a substantive set of rules (not merely guidelines) sooner than later."

The guidelines have "low odds" of providing city employees with sufficient input on how to use AI tools, he added, noting: "Unguided experimentation, ad hoc consultation with department IT teams, and encouragement to fact check AI content is certainly not sufficient guidance."

© 2024 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.