Potholes, Rats and Criminals: How to Think about the Ethics of AI in Government

From public works to public health, the range of possible city applications for AI and complex algorithms is vast.

This guest post was written by Gretchen Greene of the AI and Governance Assembly, a collaborative initiative of the MIT Media Lab and Harvard Berkman Klein Center. This story was originally posted by Data-Smart City Solutions.


Local governments across the country are starting to look to tech, big data and artificial intelligence for new answers to old questions. How can cities be most efficient with their limited resources? How can they ensure fairness towards individuals and groups of citizens? How can they improve safety, affordability, economic opportunity, and quality of life?

Artificial intelligence, or AI, is taking on a critical role in answering each of these questions.

From public works to public health, the range of possible city applications for AI and complex algorithms is vast. But when we talk about using AI to fix potholes faster, get rats out of restaurants, and keep criminals off the street, we must recognize that even if the solutions to these problems are technically similar, the ethical risks vary widely. That difference in risk, taken as the basis for a taxonomy of AI applications, provides a useful framework for evaluating potential projects.

AI applications have different levels of risk, defined as the likelihood of causing serious harm through discrimination, inaccuracy, unfairness or lack of explanation.



[Figure: Example use categories and specific examples on the AI ethical risk continuum]
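To make the continuum concrete, here is a minimal sketch in Python. The risk levels and the placement of each example are illustrative assumptions for the sketch, not an official scoring of any real project.

```python
from dataclasses import dataclass

# Illustrative only: the risk levels and the placement of each example
# are assumptions for this sketch, not an official scoring of any real project.
RISK_LEVELS = ("low", "medium", "high")

@dataclass
class Application:
    name: str          # short label for the use case
    description: str   # what the algorithm would actually do
    risk_level: str    # one of RISK_LEVELS

    def __post_init__(self):
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level!r}")

# Three examples drawn from this article, placed along the continuum.
continuum = [
    Application("pothole_routing", "prioritize road repairs from citizen app reports", "low"),
    Application("rodent_inspections", "rank restaurants for health inspections", "medium"),
    Application("pretrial_risk_score", "inform bail and sentencing decisions", "high"),
]

for app in continuum:
    print(f"{app.risk_level:>6}  {app.name}: {app.description}")
```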
To determine the AI ethical risk of a project, we start by asking about the kind and seriousness of the possible harm. What would harm look like for a particular application? What citizen rights or interests could be involved? Who would be harmed? How serious is this harm compared to other applications we are considering? How likely is it? Is there a way to lessen the harm or its likelihood?



[Figure: Some of the factors used to determine AI ethical risk level]
Moving beyond the specifics of a use case, government officials should ask questions about underlying datasets, algorithm choice, code implementation, secondary uses, and evaluation. Each of these elements presents a source of risk that governments must carefully consider and mitigate.
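One way a project team might keep both the use-case questions above and these pipeline factors in view is a simple structured checklist. Here is a minimal sketch, assuming a plain Python dictionary is enough; the field names and question wordings are paraphrased for illustration rather than drawn from any standardized instrument.

```python
# A rough checklist sketch. The field names below are illustrative
# assumptions, not a standardized assessment instrument.
HARM_QUESTIONS = [
    "What would harm look like for this application?",
    "Which citizen rights or interests could be involved?",
    "Who would be harmed, and how serious is the harm?",
    "How likely is the harm, and can it be lessened?",
]

PIPELINE_FACTORS = [
    "underlying datasets",
    "algorithm choice",
    "code implementation",
    "secondary uses",
    "evaluation",
]

def blank_assessment() -> dict:
    """Return an empty assessment to be filled in (and revisited) for a project."""
    return {
        "harm_answers": {q: "" for q in HARM_QUESTIONS},
        "pipeline_notes": {f: "" for f in PIPELINE_FACTORS},
        "overall_risk": None,  # e.g. "low", "medium", or "high"
    }

# Example: filling in one answer for a hypothetical pothole-routing project.
assessment = blank_assessment()
assessment["harm_answers"]["Who would be harmed, and how serious is the harm?"] = (
    "Residents of neighborhoods with low app adoption; delayed road repairs."
)
```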

Decision makers may choose a different set of questions and risk factors or weight them differently than we did here, and they may come to different conclusions about an application’s AI ethical risk. These variations are predictable byproducts of political and cultural differences, but the most important thing is that cities start asking these questions and thinking carefully about the potential risks of AI. Moreover, they should ask them more than once, throughout the process of initial consideration, development (or procurement), deployment and evaluation.

If you're a city leader looking for a relatively safe place to start with AI, look to the green column on the far left of the ethical risk continuum. Take one example there: directing public works resources, like filling in potholes, to neighborhoods where citizens use a city app probably skews toward a certain demographic, but bumpier roads in some neighborhoods are a lesser potential harm than many other unwanted outcomes from AI. Moreover, there are ways to fix that problem: the selection bias could be reduced by maintaining a second method for directing crews, such as using 311 calls or putting the app on city buses. As I heard one mayoral chief of staff say at a recent Project on Municipal Innovation meeting held by the Harvard Kennedy School Ash Center and Living Cities, if you can't fix the potholes and keep the snow plowed, you can't get reelected. So maybe that AI pothole app is a pretty good idea.
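To sketch what that kind of mitigation might look like in practice, here is a small, hypothetical Python example. The neighborhood names, report counts, and the 50/50 weighting are all invented for illustration; it simply blends app reports with 311 calls so that neighborhoods with low app adoption still surface in the repair queue.

```python
from collections import Counter

# Hypothetical report counts per neighborhood from two channels.
# All numbers and neighborhood names are invented for illustration.
app_reports = Counter({"downtown": 120, "riverside": 15, "eastside": 5})
calls_311   = Counter({"downtown": 40, "riverside": 35, "eastside": 30})

def blended_priority(app: Counter, calls: Counter, app_weight: float = 0.5) -> dict:
    """Blend two reporting channels so neighborhoods with low app adoption
    are not starved of repair crews. Each channel is normalized to its own
    total before weighting, which is one (of many) ways to offset the
    selection bias of app-only reporting."""
    total_app = sum(app.values()) or 1
    total_calls = sum(calls.values()) or 1
    neighborhoods = set(app) | set(calls)
    return {
        n: app_weight * app[n] / total_app
           + (1 - app_weight) * calls[n] / total_calls
        for n in neighborhoods
    }

for neighborhood, score in sorted(blended_priority(app_reports, calls_311).items(),
                                  key=lambda kv: kv[1], reverse=True):
    print(f"{neighborhood}: {score:.2f}")
```

On these made-up numbers, riverside and eastside end up with noticeably higher priority than app reports alone would give them, which is the point of keeping a second channel in the loop.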

