In 2020, a Reckoning for Law Enforcement and Tech Ethics

From worldwide protests to policy moves from technology giants like IBM and Amazon, the past year saw police use of tools like facial recognition and body cams come under scrutiny like never before.

Law enforcement technology in 2020 saw some innovation, acquisitions and announcements, but more than anything it drew public scrutiny. As footage of a Minneapolis police officer killing George Floyd, an unarmed Black man, prompted global protests and news coverage in May, communities began calling on everyone involved with police work, including tech companies, to reflect on their responsibilities. Ethical questions that some civil liberties groups had been asking for years hit the public consciousness with new urgency. For the tech industry, those questions centered on algorithms biased by historical data and on the potential for abuse of tools such as facial recognition and artificial intelligence. Public attention forced a reckoning with the present and future implications of police tech, and both private and public organizations signaled an interest in making changes.

After years of growing concern among watchdog groups and industry insiders, facial recognition hit a wall in 2020 as one major company after another came out against selling the technology to law enforcement. Many followed the lead of Axon, the nation’s largest supplier of police body cameras, which announced in 2019 that it would keep facial recognition out of its body cams and halt development of face-matching software altogether, citing ethical concerns about surveillance, accuracy and bias. In January 2020, CEO Sundar Pichai of Google’s parent company, Alphabet, endorsed the European Union’s proposed temporary ban on facial recognition. The George Floyd video broke in May, and more chips fell in June: IBM sent a letter to Congress vowing not to sell facial recognition or analysis software; Amazon halted police use of its facial recognition technology for a year pending federal legislation; and Microsoft vowed not to sell facial recognition to law enforcement until a federal law grounded in human rights is on the books.

Reactions from smaller companies ran the gamut: Some shrugged, others issued statements at least genuflecting in the direction of social justice and police accountability, and one data company in Chicago swore off doing work for law enforcement altogether. Some companies started pitching their products as ways to promote fairness or accountability, such as Mark43 and its records management system to track officer behavior, LEFTA Systems and its data analytics tools for flagging potential “problem officers,” or Axon’s virtual reality police training modules for de-escalation and peer intervention in the field.

In the public sector, state and local agencies started talking to constituents and civil liberties groups, in some cases forming advisory boards or consulting those they already had. New York City hired its first chief algorithm officer on the recommendation of a 2019 report from its advisory group, the Automated Decision Systems (ADS) Task Force. Philadelphia started using research and guidelines from a multi-agency group called GovLabPHL, formed in 2017, to inform decisions about smart city technology and AI tools.

Artificial intelligence has been a point of concern in several industries, but particularly in law enforcement because, whether it’s a facial recognition algorithm or a predictive policing program, AI is only as good as the sum of its inputs. To make sure those inputs are sound and responsible, advisory groups like Axon’s ethics board and the Verizon First Responder Advisory Council have recommended that cities and companies talk to representatives of the communities where AI tools will be deployed.
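To see why inputs matter so much, consider a minimal, purely hypothetical sketch in Python. It is not any vendor’s actual algorithm; the districts, arrest counts and patrol rule are all invented for illustration. It shows how a naive “send patrols where past arrests were recorded” rule can amplify an early skew in the data even when the true offense rate is identical everywhere:

    # Hypothetical historical arrest counts per district. District "A"
    # starts higher partly because it was patrolled more in the past.
    arrests = {"A": 60, "B": 20, "C": 20}
    TRUE_RATE = 0.3  # assume the same underlying offense rate everywhere

    for year in range(1, 6):
        # Naive rule: send all 100 patrols to the district the data ranks highest.
        hotspot = max(arrests, key=arrests.get)
        arrests[hotspot] += 100 * TRUE_RATE  # patrols there record new arrests
        total = sum(arrests.values())
        shares = {d: f"{100 * n / total:.0f}%" for d, n in arrests.items()}
        print(f"year {year}: {shares}")

In this toy model, district A’s share of recorded arrests climbs from 60 percent to 84 percent in five years, not because behavior there differs, but because the data keeps sending patrols back. That self-confirming feedback loop is precisely the kind of problem community input and third-party audits are meant to catch.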

It’s too soon to tell whether all this scrutiny will hold, but for now, private, public and nonprofit entities are talking to each other. In the near future, the best-case scenario for police tech appears to involve more communication with stakeholders, third-party audits of new technologies, a federal law governing facial recognition and 5G network speeds.

This story is part of our 2020 Year in Review series.

Andrew Westrope is managing editor of the Center for Digital Education. Before that, he was a staff writer for Government Technology, and previously was a reporter and editor at community newspapers. He has a bachelor’s degree in physiology from Michigan State University and lives in Northern California.