How Should Government Regulate AI? We Asked a Robot

Artificial intelligence's potential in the public sector has grown exponentially, as have questions around appropriate guardrails. We interviewed the ChatGPT chatbot from OpenAI to see what it had to say.

I started working for e.Republic (Government Technology’s parent company) as a writer almost 12 years ago. I was doing a lot of interviews for case study-style stories on technology deployments. I would later transcribe those interviews word for word to maximize my understanding and faithfully reproduce any direct quotes I ended up using. At the time, AI tools to automate the transcription process fell short — comically short, actually.

What a difference a dozen years makes.

Artificial intelligence, in the ensuing years, has advanced exponentially in its capabilities and ease of use. At GT, we report on its growing adoption by public-sector agencies to help identify patterns and automate routine processes, making it possible for staff to reserve more of their time for complex problems. And AI only continues to get smarter, recently demonstrating new capabilities in the creative arts, for example, producing poetry, music and photography that are beginning to rival what humans can do.

In many ways, the pandemic moved chatbots into the “must have” category for government — a force multiplier for agencies that saw dramatic increases in customer inquiries. But chatbots rely on good content. The nature of user questions can evolve as quickly as COVID-19 variants, requiring constant upkeep to remain effective. It’s no longer enough to have a chatbot. You have to devote time and resources to making sure it is indeed saving staff time and meeting constituent needs.

Enter the ChatGPT tool from OpenAI, an AI-driven chatbot prototype that uses sophisticated natural language processing to impressive effect. Released in late 2022, it’s proof that these tools are getting smarter, earning praise like this from Kevin Roose of The New York Times: “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public.”

Dustin Haisler is the chief innovation officer for e.Republic. He follows the evolution of AI and its impact on government, and he wanted to test ChatGPT’s capabilities by asking it how these technologies should be regulated, given their vast potential, both positive and negative.

“I thought, how better to look at regulating AI than by asking AI itself?” Haisler said.

What follows is Haisler’s revealing “interview” with ChatGPT.

Dustin Haisler: What are the biggest challenges state and local agencies are facing today?

ChatGPT: There are many challenges that state and local agencies are facing today, including budget constraints, a lack of resources and the need to adapt to changing circumstances. In addition, state and local agencies are often required to provide a wide range of services, which can be difficult to manage and coordinate effectively. Additionally, state and local agencies may face challenges related to public perception and trust, as well as the need to ensure that their services are delivered in a fair and equitable manner.

DH: What are some of the most important emerging technologies that government leaders should be paying attention to?

ChatGPT: There are many emerging technologies that government leaders should be paying attention to, including artificial intelligence (AI), blockchain, the Internet of Things (IoT) and virtual reality (VR). AI and machine learning technologies, for example, can be used to automate many tasks and processes, improving efficiency and accuracy. Blockchain technology has the potential to revolutionize the way that governments handle data and transactions, providing a secure and transparent way to manage information. The IoT can be used to connect devices and systems, allowing for real-time data collection and analysis. And VR technologies can be used to improve training and simulation, as well as to create immersive experiences for the public. All of these technologies have the potential to greatly impact the way that government operates, and it is important for leaders to be aware of their developments and potential applications.

DH: What are some of the most impactful AI use cases for state and local government?

ChatGPT: There are many potential use cases for AI in state and local government, including:
  1. Predictive analytics: AI can be used to analyze data and make predictions about future trends, allowing governments to better anticipate and plan for the needs of their constituents.
  2. Decision-making: AI can be used to support decision-making by providing insights and recommendations based on data analysis.
  3. Process automation: AI can be used to automate many of the tasks and processes involved in service delivery, improving efficiency and accuracy.
  4. Public safety: AI can be used to support law enforcement and emergency response efforts, including by analyzing data to identify patterns and trends, and by providing real-time situational awareness.
  5. Public engagement: AI can be used to improve communication and engagement with the public, for example by providing personalized information and services, or by enabling citizens to access information and services through virtual assistants.

Overall, the use of AI in state and local government has the potential to greatly improve the delivery of services, support decision-making, and enhance the public's trust and engagement with government.

DH: How should state and local leaders look at regulating AI systems like you?

ChatGPT: State and local leaders should approach the regulation of AI systems, such as myself, in a thoughtful and measured way. It is important to recognize the potential benefits that AI can bring in terms of improving efficiency, accuracy and decision-making, as well as the potential risks and challenges that it may pose. As such, leaders should consider implementing regulations that balance the need to promote the use of AI with the need to protect the public and ensure that AI systems are used responsibly. This could include measures such as establishing clear standards and guidelines for the use of AI, providing oversight and accountability, and ensuring that there are adequate safeguards in place to prevent misuse or abuse of AI systems. Additionally, leaders should consider engaging with experts and stakeholders to ensure that any regulations are well-informed and effective.

DH: How can state and local leaders mitigate some of the common AI concerns?

ChatGPT: State and local leaders can take a number of steps to mitigate some of the common concerns surrounding AI, including:
  1. Establishing clear standards and guidelines for the use of AI: Leaders can develop regulations and policies that outline how AI should be used, including what it can be used for, how it should be implemented, and what safeguards should be in place to prevent misuse or abuse.
  2. Providing oversight and accountability: Leaders can establish mechanisms to ensure that AI systems are used in a responsible and transparent manner. This could include creating independent committees or agencies to monitor the use of AI, or requiring regular reporting and audits to ensure compliance with standards and regulations.
  3. Engaging with experts and stakeholders: Leaders should seek input from experts and stakeholders, such as ethicists, researchers and members of the public, to ensure that any regulations and policies are well-informed and effective.
  4. Investing in research and development: State and local leaders can support efforts to advance the field of AI, including by investing in research and development and encouraging collaboration between academia, industry and government. This can help to ensure that AI is developed in a way that is safe, ethical and beneficial for society.
  5. Providing education and training: Leaders can support initiatives to educate the public about AI, including its potential benefits and risks, as well as how to use it responsibly. This can help to promote understanding and trust in AI, and ensure that individuals are prepared to work with and benefit from this technology.

Noelle Knell is the executive editor for e.Republic, responsible for setting the overall direction for e.Republic’s editorial platforms, including Government Technology, Governing, Industry Insider, Emergency Management and the Center for Digital Education. She has been with e.Republic since 2011 and has decades of writing, editing and leadership experience. A California native, Noelle has worked in both state and local government and is a graduate of the University of California, Davis, with majors in political science and American history.