
New ChatGPT Gov Enables Use of Non-Publicly Available Data

ChatGPT Gov is the latest artificial intelligence tool from OpenAI, geared toward expanded use by government agencies and offering another way to access advanced machine learning models.

New digital tools powered by artificial intelligence (AI) are emerging, tempting government agencies to move toward a more open posture when it comes to AI.

The OpenAI product known as ChatGPT Gov, announced Tuesday, will begin to allow government agencies to feed “non-public, sensitive data” into OpenAI's models. ChatGPT Gov allows government organizations to “manage their own security, privacy, and compliance requirements” if they self-host it, according to an OpenAI blog post.

The new government version of ChatGPT works like an expanded version of ChatGPT Enterprise, with access to many of the latter’s functions including text interpretation, creating summaries of documents and sharing conversations within other government offices, according to OpenAI.

New tools like these deserve the same close level of scrutiny gov tech officials have given other AI tools, said Emily Royall, senior IT manager for San Antonio, Texas, and a board member of the GovAI Coalition.

“Like San Antonio, I think most state and local governments will continue to require the same transparency and security from OpenAI as they would any product that potentially exposes sensitive public data to private entities,” Royall said via email.

“Ensuring the public good means not only providing AI solutions for governments, but also upholding transparency and accountability,” she said. “This includes offering clear and comprehensive documentation on product performance and potential risks, strengthening trust within our communities.”

Given the vast amounts of sensitive data public-sector agencies manage and have access to, secure AI processing is essential, said Chester Leung, co-founder and head of platform architecture at Opaque.

“Just as government employees undergo rigorous background checks and need security clearances, AI systems used in the public sector must also be verifiably secure before handling private information,” Leung said, adding that Opaque enables governments to verify that AI models operate securely and within regulatory frameworks.

Others in the government IT community have also expressed a need for reasonable levels of caution around AI tools, while providing room to innovate.

“You have to get started,” Bianca Lochner, CIO for Scottsdale, Ariz., said in December during a panel at the GovAI Coalition Summit*. “Because you’re going to be left behind. And also, your constituents are expecting it.”

“Pause, and don’t say no, before you say yes,” she said.

Microsoft offers Microsoft 365 Copilot, which leverages OpenAI large language models within a closed Azure environment that doesn’t expose public data to OpenAI, Royall pointed out.

“Several agencies are actively piloting this technology, and they will want to understand the difference and risk tradeoffs between the two offerings,” she said. “As a member of the GovAI Coalition, I would encourage OpenAI to fill out the Coalition’s AI Factsheet, which provides a baseline for what information public agencies need to successfully evaluate AI tools and technologies for public agency use.”

*Note: The GovAI Coalition Summit was hosted by Government Technology in partnership with the GovAI Coalition and the city of San Jose.

Skip Descant writes about smart cities, the Internet of Things, transportation and other areas. He spent more than 12 years reporting for daily newspapers in Mississippi, Arkansas, Louisiana and California. He lives in downtown Yreka, Calif.