6 Strategies to Help Governments Start Off on the Right Foot with Artificial Intelligence

This story was originally published by Data-Smart City Solutions. It was excerpted from the paper "Artificial Intelligence for Citizen Services and Government," written by Harvard Ash Center Technology and Democracy Fellow Hila Mehr.

From online services like Netflix and Facebook, to chatbots on our phones and in our homes like Siri and Alexa, we are beginning to interact with artificial intelligence (AI) on a near-daily basis. AI is the programming or training of a computer to do tasks typically reserved for human intelligence, whether it is recommending which movie to watch next or answering technical questions. Soon, AI will permeate the ways we interact with our government, too. From small cities in the US to countries like Japan, government agencies are looking to AI to improve citizen services.

While the potential future use cases of AI in government remain bounded by government resources and by the limits of both human creativity and trust in government, the most obvious and immediately beneficial opportunities are those where AI can reduce administrative burdens, help resolve resource allocation problems, and take on highly complex tasks.

For many systemic reasons, government has much room for improvement when it comes to technological advancement, and AI will not solve those problems. In addition, there is hype around many modern tools, while most government offices are still trying to reach more basic modern operating standards. Nevertheless, there is benefit in preparing for the inevitable future and in making technology investments that keep pace with how citizens prefer to engage with service providers. Governments can start thinking about implementing AI by learning from previous government transformation efforts and from AI implementations in the private sector.

Six strategies can help governments start off on the right foot with AI:

1. Make AI a part of a goals-based, citizen-centric program. AI should not be implemented in government just because it is a new, exciting technology. Government officials should be equipped to solve problems impacting their work, and AI should be offered as one tool in a toolkit for solving a given problem. The question should not be "how will we use AI to solve a problem," but "what problem are we trying to solve, why, and how will we solve it?" If AI is the best means to achieve that goal, then it can be applied; otherwise it should not be forced. And even when AI is the right tool, it should not be the only touchpoint for citizens. McKinsey recommends agencies consider a citizen's end-to-end journey through a process. Its "Putting Citizens First" study reports that organizations that manage the entire customer journey from start to finish achieve higher levels of satisfaction and are more effective at delivery. Government offices can consider where and when AI can be a touchpoint, and what other technologies or human interactions might be required along the citizen's journey, as in the sketch below. In keeping with customer centricity, the technology also must be inclusive, with awareness of generational, educational, income, and language differences.
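
One way to make that end-to-end view concrete is to model a journey as an ordered set of touchpoints, only some of them AI-assisted. The sketch below is purely illustrative: the Touchpoint and CitizenJourney classes, the permit-renewal stages, and the fallback channels are assumptions for this example, not part of the McKinsey study or the original paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Touchpoint:
    """One step in a citizen's end-to-end journey through a service."""
    name: str
    channel: str                    # e.g. "chatbot", "web form", "in person"
    ai_assisted: bool               # True if AI handles this step
    fallback: Optional[str] = None  # human alternative, for inclusivity

@dataclass
class CitizenJourney:
    service: str
    touchpoints: list

    def coverage(self) -> str:
        ai = sum(tp.ai_assisted for tp in self.touchpoints)
        return f"{self.service}: {ai}/{len(self.touchpoints)} touchpoints AI-assisted"

# Hypothetical permit-renewal journey: AI is one touchpoint among several,
# and every AI step keeps a human fallback for citizens who need one.
journey = CitizenJourney("permit renewal", [
    Touchpoint("eligibility questions", "chatbot", True, fallback="phone line"),
    Touchpoint("application submission", "web form", False),
    Touchpoint("status updates", "chatbot", True, fallback="clerk email"),
    Touchpoint("final approval", "in person", False),
])
print(journey.coverage())
```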

2. Get citizen input. Citizen input and support for AI implementations will be essential. "Governments should enable a genuine participatory, grassroots approach to both demystify AI as well as offer sessions for citizens to create an agenda for AI while addressing any potential concerns," suggests Russon Gilman. Wallach agrees: "There needs to be a conversation in society about AI — to educate everyone from citizens to policymakers so that they truly understand how it works and its tradeoffs." With that level of education, citizens can then offer other ways to engage with AI, and even help co-create ethics and privacy rules for the use of their data. When it comes to building and deploying AI platforms, user feedback is essential from both citizens and government employees. Onda recommends designing systems "to provide the right level of insight, depending upon individual user preferences."

3. Build upon existing resources. Adding the benefits of AI to government systems should not require building those systems from scratch. Though much of the evolution in AI has come from early government research, governments can also take advantage of the advances businesses and developers are making in AI. IT analyst firm IDC predicts that by 2018, 75 percent of new business software will include AI features. Nonprofits and research institutions offer the public access to world-class research, and new releases of open-source machine intelligence programs allow users to inexpensively scale their use of AI. Nor do implementations have to begin with entirely new programs or datasets. One place to start would be integrating AI into existing platforms, like 311 and SeeClickFix, where there is existing data and engagement, as in the sketch below.
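
As a hedged illustration of what building on open-source tools and existing 311 data could look like, the sketch below trains a standard scikit-learn text classifier to route service requests to departments. The request texts and department labels are invented for the example; a real deployment would train on the platform's historical records and validate accuracy before use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up sample of 311 request descriptions and their departments.
requests = [
    "pothole on main street near the intersection",
    "streetlight out on 5th avenue",
    "missed trash pickup this week",
    "broken water main flooding the sidewalk",
]
departments = ["public_works", "public_works", "sanitation", "water"]

# TF-IDF features plus logistic regression: an inexpensive off-the-shelf
# baseline, not a bespoke system built from scratch.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(requests, departments)

print(model.predict(["deep pothole damaging cars on elm street"]))
```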

4. Be data-prepared, and tread carefully with privacy. Many agencies will not yet have the level of data management necessary for AI applications, and many may lack the significant volume of data needed to train and start using AI. But as government offices improve their data collection and management, best practices around the types of data collected and used will be critical for future work with AI. "Collecting and aggregating the right type of data is critical for success," says Onda. "Governments must think about the type of data they need, when the data expires (it has a shelf life), and how the data will be aggregated to provide context for a specific individual. Citizens must be able to trust the systems they are interacting with and know where their data is going." Governments should be very transparent about the data collected and give citizens the choice to opt in when personal data will be used; a minimal sketch of such checks follows below. There may be fewer privacy concerns if the only data being used has already been provided to the government by citizens (such as IRS data). The privacy concerns become relevant when citizens have not provided consent or when external datasets get mixed with government sources, explains Eaves. Data use also becomes concerning when the data is inaccurate, which can have a cascading effect as the data travels. "Transparency isn't enough if the data is already off," explains Russon Gilman, because "the algorithms and learning systems can be hidden, so the stakes are very high for democratic governance and ensuring equity in the public sector."
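
Onda's points about consent, provenance, and data shelf life could translate into simple gate checks before any record enters an AI pipeline. The sketch below is a minimal illustration; the CitizenRecord fields and the one-year shelf life are assumptions for the example, not prescriptions from the paper.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CitizenRecord:
    citizen_id: str
    collected_on: date
    opted_in: bool   # citizen's explicit consent for AI use of this data
    source: str      # provenance, e.g. citizen-provided vs. external dataset

def usable_for_ai(record: CitizenRecord, shelf_life_days: int = 365) -> bool:
    """Admit a record into an AI pipeline only if the citizen opted in
    and the data has not outlived its shelf life."""
    fresh = (date.today() - record.collected_on) <= timedelta(days=shelf_life_days)
    return record.opted_in and fresh

record = CitizenRecord("c-1042", date.today() - timedelta(days=30),
                       opted_in=True, source="311 request")
print(usable_for_ai(record))  # True: consented, and still within shelf life
```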

5. Mitigate ethical risks and avoid AI decision making. AI is susceptible to bias because of how it is programmed or trained, or because the data inputs are already corrupted. A best practice for lessening bias is to involve multidisciplinary and diverse teams, in addition to ethicists, in all AI efforts. Matt Chessen, an AI researcher with the US Department of State, has also recommended a new public policy profession specializing in machine learning and data science ethics. Governments can likewise leverage the work of groups of technologists who have come together to create common sets of ethics for AI, such as the Asilomar AI Principles and the Partnership on AI. Given the ethical issues surrounding AI and the continuing development of machine learning techniques, AI should not be tasked with making critical government decisions about citizens. For example, a risk-scoring system used in criminal sentencing, along with similar AI applications in the criminal justice system, was found to be biased, with drastic repercussions for the citizens sentenced. These types of use cases should be avoided. Companies like Google and Microsoft are actively trying to improve machine learning models to prevent or correct bias, and have internal ethics boards that review new algorithms; government offices should adopt a similar practice. Until machine learning techniques improve, AI should only be used for analysis and process improvement, not decision making, and human oversight should remain in place, for instance through routine audits like the sketch below.
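
One concrete form that oversight can take is a routine disparity audit of a model's outputs. The sketch below computes favorable-outcome rates per group so a human reviewer can spot gaps before results affect citizens; the group labels, data, and the idea of flagging any large gap are hypothetical simplifications, and a real audit would use established fairness metrics and tooling.

```python
from collections import defaultdict

def outcome_rates_by_group(records):
    """records: iterable of (group, favorable_outcome) pairs.
    Returns the favorable-outcome rate per group, so reviewers can spot
    disparities in a model's outputs before they reach production."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical audit of a risk model's "low risk" determinations:
rates = outcome_rates_by_group([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
print(rates)  # a large gap between groups flags the model for human review
```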

6. Augment employees, do not replace them. Estimates of the threat AI poses to jobs over the next two decades vary widely, from 9 to 47 percent, according to the 2016 White House report on automation and the economy. In some cases, AI may instead lead to new and increased employment directly and indirectly related to AI development and supervision. While job loss is a legitimate concern for civil servants and for blue- and white-collar workers alike as the technology evolves, early research has found that AI works best in collaboration with humans. Any effort to incorporate AI in government should be approached as a way to augment human work, not to cut headcount; one such human-in-the-loop pattern is sketched below. Governments should also update fair labor practices in preparation for potential changes in workplaces where AI systems are in place.
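
A common augmentation pattern is to let AI draft a recommendation while routing every case to a civil servant for the actual decision. The sketch below illustrates this; the Recommendation class, the route_case function, and the 0.8 confidence threshold are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Recommendation:
    case_id: str
    suggestion: str
    confidence: float

def route_case(rec: Recommendation,
               threshold: float = 0.8) -> Tuple[str, Optional[str]]:
    """Every case ends with a human: the AI only decides whether its
    draft is confident enough to be worth showing to the reviewer."""
    if rec.confidence >= threshold:
        return ("human_review", rec.suggestion)  # reviewer sees the AI draft
    return ("human_review", None)                # reviewer works unaided

print(route_case(Recommendation("case-7", "approve benefit renewal", 0.93)))
```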

With these strategies, governments can approach the use of AI in citizen services with a focus on building trust, learning from the past, and improving citizen engagement through citizen-centric goals and solutions.