Artificial intelligence, or AI, may no longer seem like the theme of a sci-fi movie where computers take over the world. Yet misconceptions linger about AI's capabilities and uses, and governments are uncertain about which AI technologies to implement and how. These factors slowed public sector adoption of AI until its potential value emerged more clearly during the COVID-19 pandemic.
The demands of pandemic response accelerated state and local government interest in data-driven decision-making and in early AI-related projects. Nearly all states responding to the Center for Digital Government's most recent Digital States Survey expect to make some use of AI by 2022, and many counties and cities plan to do so as well, according to the latest Digital Cities and Counties surveys.
To support broad AI adoption, government business and IT leaders need a shared and realistic understanding of potential uses for AI and the challenges these technologies present today. AI initiatives are also more likely to succeed when an organization applies a framework that addresses critical technology, management and leadership elements.
The 2020 CDG Digital States, Cities and Counties surveys found several examples of how state and local governments are using AI today.
Bellevue, Wash., delivers COVID-19 information via a multilingual chatbot, applies AI to video analytics to identify traffic signal changes that will reduce accidents, and automates routine tasks in reviewing digital development plans. Virginia Beach, Va., uses AI to improve waste pickup routes and inform citizens of their pickup times. Early in the pandemic, Arizona used AI-driven chatbots to handle a surge in unemployment insurance applications until newly hired employees could be trained to take those calls.
Improving cybersecurity programs is another top use for AI, planned by most respondents in the CDG surveys.
For many governments, five common challenges present barriers to AI implementation, especially on a broad scale.
Resources. Using AI at a level that makes a meaningful impact on government operations and services requires a significant and ongoing resource investment. Improving data quality and access, deploying an AI development platform and hiring data scientists who can apply it effectively are all potential challenges.
Cognitive limitations. Although AI models are increasingly sophisticated, they often lack the contextual understanding that humans bring. The knowledge of experienced employees is still required at the right points to improve data interpretation and process redesign.
Inadequate transparency. It can be difficult to identify the why and how behind the choices made by AI models and their machine learning algorithms. This lack of full transparency about AI data analysis and decision-making can lead to inappropriate profiling, an opening for potential misuse of processes or data, and other unintended consequences.
Trust issues. Mistrust about the decisions of an algorithm and fear that smart technologies will eliminate or change jobs mean employees and the public don’t always welcome the use of AI.
Uncertain value. The potential for achievable value in AI projects isn’t always clear. Justifying a new investment in AI becomes even harder if previous attempts to use this technology have failed.
Although these challenges can be complex, they are not insurmountable if AI is implemented with a focus on technology and expertise, management and leadership.
SAP research found that a government’s AI adoption can be more successful if it is done within a framework of three essential elements.
The first element is technology and technical expertise. AI isn’t off-the-shelf software that can simply be purchased and deployed. Instead, it is an evolving capability created from the combination of quality data, an AI platform and data science staff.
AI models need a representative data set, one that is fit for reuse, to learn how to make relevant and unbiased decisions. The AI models need to run on a computing and storage platform that enables fast and efficient processing for large data sets and supports distributed AI services. Data scientists offer the understanding of government data and AI algorithms that is necessary to build appropriate AI models for public service purposes.
The second element encompasses management strategies for redesigning work and guiding culture change. Employees may have substantial concerns that the use of AI will eliminate their jobs. Additionally, they may not trust the logic of AI algorithms, especially when those algorithms uncover previously unseen data patterns or trends.
Involving a diverse set of employees as subject matter experts in an AI project can help replace resistance with engagement. It is also important to emphasize that AI can handle many routine data analysis tasks, freeing employees to focus on case management, customer service and other sensitive work that requires human experience and judgment.
Another vital management responsibility is ensuring that AI and machine learning algorithms are traceable and explainable to stakeholders, and that they adhere to high standards of data privacy, security and fair use. This transparency is essential to building trust that the government is applying AI for the right purposes and using data appropriately for analyses and recommendations.
High citizen trust increases engagement and allows a government to apply AI for further innovation with additional services or new approaches to issues. Trust continues to build as government delivers these AI-driven innovations in an open and engaged manner.
Organizational leadership is the third element. Executive-level attention is needed for AI oversight, creating stakeholder value and proving benefit to citizens. For oversight, it’s important to understand that all uses of AI technology come with the risk of perpetuating errors or bias that exist in the input data. Leaders should be prepared to establish and reinforce an organizational culture that is compliant with current laws, regulatory requirements and accepted ethics.
The stakeholder value gained from AI can come in multiple forms. Internally, this value is often expressed in terms of financial and operational improvements. Externally, public value must be clear in terms of outcomes or service improvements that emerge from AI decision-making.
When implemented with care, AI can help governments address critical operational and services challenges, as well as produce tangible results for employees and constituents. As the post-pandemic future unfolds, AI will become an increasingly important technology to help governments better fulfill their mission.
This content is made possible by our sponsors; it is not written by and does not necessarily reflect the views of e.Republic’s editorial staff.