How Should Local Governments Approach AI and Algorithms?

The Pittsburgh Task Force on Public Algorithms has released recommendations for county and municipal governments that are interested in using automated systems for better decision-making.

How can government agencies avoid causing more harm than good when they use artificial intelligence and machine learning? A new report attempts to answer this question with a framework and best practices to follow for agencies pursuing algorithm-based tools.

The report comes from the Pittsburgh Task Force on Public Algorithms. The task force studied municipal and county governments’ use of AI, machine learning and other algorithm-based systems that make or assist with decisions impacting residents’ “opportunities, access, liberties, rights and/or safety.”

Local governments have adopted automated systems to support everything from traffic signal changes to child abuse and neglect investigations. Government use of such tools is likely to grow as the technologies mature and agencies become more familiar with them, predicts the task force.

The problem is that some algorithms can replicate and exacerbate biases. Further, government agencies in the Pittsburgh region studied by the task force currently have few obligations “to share information about algorithmic systems or to submit those systems to outside and public scrutiny,” the report says.

This status quo leaves little room for public or third-party oversight, and residents often have little information on these tools, who designed them or whom to contact with complaints.

The goal isn’t to quash tech adoption but to make it responsible, said David Hickton, a task force member and founding director of the University of Pittsburgh Institute for Cyber Law, Policy and Security.

“We shouldn’t have the … Hobson’s choice between good technology that can protect us and — when it doesn’t work — operates to deny people their fundamental civil and constitutional rights,” Hickton told Government Technology. “We can fix this stuff, if we have high standards.”

The task force included members of academia, community organizations and civil rights groups, and received advice from local officials.

“We hope that these recommendations, if implemented, will offer transparency into government algorithmic systems, facilitate public participation in the development of such systems, empower outside scrutiny of agency systems, and create an environment where appropriate systems can responsibly flourish,” the report states.

RESIDENTS WEIGH THE RISKS


Automated systems can make processes more efficient and draw insights from vast amounts of data. But they may also fail to grasp the context of a situation, and if not properly designed and managed, they risk replicating and exacerbating existing biases and discriminatory practices.

While automated systems are often intended to reduce human error and bias, algorithms make mistakes, too. After all, an algorithm reflects human judgments. Developers choose which factors the algorithm will assess and how heavily each is weighted, as well as what data the tool will use to make decisions.
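
As a purely hypothetical illustration (the factor names, weights and threshold below are invented, not drawn from any system the report discusses), a simple risk-scoring tool might look like the sketch below. Every one of those choices is a human judgment baked into the code.

```python
# Hypothetical sketch of a simple risk-scoring algorithm.
# The factors, their weights, and the decision threshold are all
# chosen by developers -- each choice embeds a human judgment.

# Illustrative weights (invented for this example)
WEIGHTS = {
    "prior_incidents": 2.0,
    "missed_appointments": 1.5,
    "years_since_last_incident": -0.5,
}
THRESHOLD = 4.0  # cases scoring above this are flagged for review

def risk_score(case: dict) -> float:
    """Weighted sum of the factors the developers decided to include."""
    return sum(WEIGHTS[factor] * case.get(factor, 0) for factor in WEIGHTS)

def flag_for_review(case: dict) -> bool:
    return risk_score(case) > THRESHOLD

example = {"prior_incidents": 2, "missed_appointments": 1, "years_since_last_incident": 3}
print(risk_score(example), flag_for_review(example))  # 4.0, False
```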

Governments should therefore avoid adopting automated decision-making systems until they have consulted the residents who would be most impacted, through multiple channels rather than public comment sessions alone.

Residents must understand the tools and how they will be used, believe the proposed approach tackles the issue at hand in a productive way, and agree that the potential benefits of an algorithmic system outweigh the risk of errors, the task force said.

“Sufficient transparency allows the public to ensure that a system is making trade-offs consistent with public policy,” the report states. “A common trade-off is balancing the risk of false positives and false negatives. A programmer may choose to weigh those in a manner different than policymakers or the public might prefer.”

Constituents and officials must decide how to balance the risks of an automated system's mistakes. For instance, Philadelphia probation officials have used an algorithm to predict how likely people released on probation are to reoffend, and have required individuals to receive more or less supervision based on those findings. In this case, accepting more false positives increases the chance that people will be inaccurately flagged as higher risk and subjected to unnecessarily intensive supervision, while accepting more false negatives may lead to less oversight of individuals who are likely to reoffend.
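
To make that trade-off concrete, the sketch below uses invented scores and outcomes (not data from Philadelphia or any real system) to show how a single threshold choice shifts errors between false positives and false negatives.

```python
# Hypothetical illustration of the false-positive / false-negative trade-off.
# Scores and labels are invented; the point is that the threshold choice
# decides which kind of error the system makes more often.

# (risk_score, actually_reoffended) pairs -- fabricated for illustration
cases = [(0.2, False), (0.4, False), (0.5, True), (0.6, False),
         (0.7, True), (0.8, False), (0.9, True)]

def confusion_counts(threshold: float):
    fp = sum(1 for score, reoffended in cases if score >= threshold and not reoffended)
    fn = sum(1 for score, reoffended in cases if score < threshold and reoffended)
    return fp, fn

for threshold in (0.3, 0.6, 0.85):
    fp, fn = confusion_counts(threshold)
    print(f"threshold={threshold}: {fp} false positives, {fn} false negatives")

# A low threshold flags more people unnecessarily (more false positives);
# a high threshold misses more people who do reoffend (more false negatives).
```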

THE RIGHT RESPONSE TO RESULTS


Residents should also be consulted about how officials react to algorithms’ findings.

For example, an individual may be flagged by a pretrial risk assessment algorithm as unlikely to make their court date. But there’s a big difference between officials jailing the person before the court date and officials following up with texted court date reminders and transportation assistance.

Community members told the task force that the safest use of algorithms may be “to identify root problems (especially in marginalized communities) and allocate services, training and resources to strengthen community support systems.”

Residents also emphasized that issues can be complex and often require decision-makers to consider individual circumstances, even if also using algorithms for help.

CONTINUAL EVALUATION


The report also reminds governments to make sure employees know the automated tools' uses and limitations, have enough insight into and control over any vendor-provided algorithms, and vet and review the tools on a regular basis.

Systems should be vetted before adoption and reviewed regularly — such as monthly — to see if they’re performing well or need updates. Ideally, independent specialists could evaluate sensitive tools and employees’ training on them, and in-house staff would examine the workings of vendor-provided algorithms.
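
As one hedged illustration of what such a recurring review might look like in practice (the field names and tolerances here are assumptions, not anything prescribed by the report), an agency could run a script like this over a recent batch of decisions and route any findings to human reviewers:

```python
# Hypothetical sketch of a recurring (e.g., monthly) review of an automated
# system's decisions. Field names and thresholds are assumptions for
# illustration, not requirements from the task force report.
from collections import defaultdict

def review(decisions, max_error_rate=0.15, max_group_gap=0.05):
    """decisions: list of dicts with 'group', 'flagged' and 'actual_outcome' keys."""
    errors_by_group = defaultdict(lambda: [0, 0])  # group -> [errors, total]
    for d in decisions:
        wrong = d["flagged"] != d["actual_outcome"]
        errors_by_group[d["group"]][0] += int(wrong)
        errors_by_group[d["group"]][1] += 1

    rates = {g: errs / total for g, (errs, total) in errors_by_group.items()}
    overall = (sum(e for e, _ in errors_by_group.values())
               / sum(t for _, t in errors_by_group.values()))

    findings = []
    if overall > max_error_rate:
        findings.append(f"overall error rate {overall:.0%} exceeds {max_error_rate:.0%}")
    if rates and max(rates.values()) - min(rates.values()) > max_group_gap:
        findings.append(f"error rates differ across groups: {rates}")
    return findings  # non-empty findings would trigger human review or retraining

sample = [
    {"group": "A", "flagged": True, "actual_outcome": False},
    {"group": "A", "flagged": False, "actual_outcome": False},
    {"group": "B", "flagged": True, "actual_outcome": True},
    {"group": "B", "flagged": False, "actual_outcome": True},
]
print(review(sample))
```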

Contract terms should require vendors to provide details that help agencies evaluate their algorithms' fairness and effectiveness. This step could prevent companies from hiding behind "claims of trade secrecy."

Local governments face few official limits on how they can use automated decision-making systems, Hickton said, but residents could put pressure on elected officials to make changes. Governments could, in theory, appoint officials or boards to oversee and review algorithms to improve accountability.

“I can't predict where this will all go, but I'm hopeful that what we've done is put a spotlight on a problem and that we are giving the public greater access and equity in the discussion and the solutions,” he said.

Jule Pattison-Gordon is a senior staff writer for Government Technology. She previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.