Do Public Safety Benefits Outweigh Risks of Police AI?

(TNS) — The expected integration of artificial intelligence into police work has rekindled a debate about the need to balance the possible public safety benefits of emerging technologies with ensuring the tools aren't used to violate people's rights.

The discussion has also sparked accusations of biases by — and about — cops, and mirrors recent controversies about law enforcement's use of technologies including facial recognition software, the ShotSpotter gunshot-detecting system in Detroit and license plate readers.

Although AI-assisted policing is in its infancy, concern about its future use and possible impact on minorities prompted the Michigan Civil Rights Commission to join demands by national lawmakers and the American Civil Liberties Union for legislation that regulates how police may use the technology. Artificial intelligence is defined in the U.S. Code as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments."

The Michigan Civil Rights Commission passed an April 29 resolution calling for "a socially responsible, bias-free, research-based approach in the use of Artificial Intelligence in policing."

The resolution urged Gov. Gretchen Whitmer to convene a "Taskforce on Artificial Intelligence" to study AI and its potential law enforcement uses, and asked the governor to help state lawmakers pass legislation "that regulates the use, acquisition and implementation of Artificial Intelligence in policing so as to eliminate the perpetuation of bias and discrimination."

"Against the backdrop of our country’s documented bias in law enforcement, to ignore the historic prejudice in policing practices that have resulted in inherently biased data, statistics and actionable information is tantamount to an intentional decision to perpetuate racial, ethnic, and national origin discrimination," the resolution said.

Michigan Civil Rights Commission Chairperson Gloria Lara wrote a May 3 letter to Robert Stevenson, director of the Michigan Association of Chiefs of Police, informing him that the resolution had been passed.

"The long history of racial violence and bias in law enforcement, and unfortunate instances that persists to the present required the MCRC to address the potentially dangerous outcome of the unregulated implementation of Artificial Intelligence software and technology in policing communities of color," Lara wrote.

Stevenson said Lara's letter and her organization's resolution are "extremely offensive" and "show a bias against police, when they're supposed to be a group that's against stereotyping.

"I can't think of anything more ridiculous than to say police are intentionally trying to make certain racial groups look bad," Stevenson said. "The data we collect about offenders is based on victim reporting (via the annual U.S. Bureau of Justice Statistics' National Crime Victimization Survey).

"Police arrest people who commit crimes, and (U.S. Department of Justice) statistics that show that African Americans, who are about 13% of the population, are committing an inordinate number of violent crimes," Stevenson said. "When a certain ethnic group is committing an inordinate number of crimes, that's absolutely a problem. But it's not a law enforcement problem; it's a socioeconomic problem."

According to the Department of Justice's "Arrests by Offense, Age and Race" report, in 2020, the most recent year with available statistics, African Americans were arrested for 167,030 violent crimes, which was 36% of the 461,540 overall violent crime arrests that year.

The Michigan Civil Rights Commission and Whitmer's office did not respond to requests for comment.

Nathan Freed Wessler, deputy director of the American Civil Liberties Union's Speech, Privacy, and Technology Project, said there should be tighter regulations on law enforcement's use of AI, which he said is likely to hurt minorities because the data put into the systems "reflects over-policing in those communities."

"I think there are a lot of ways this technology can get police departments in trouble and go wrong in ways that hurt people," Wessler said. "We've seen wrongful arrests with other technologies such as facial recognition that have disproportionately impacted people of color, and I think these same kinds of dangers are present with AI."

The Michigan Civil Rights Commission's resolution referenced a Jan. 24 letter from seven Democratic U.S. senators, urging U.S. Attorney General Merrick Garland to "halt all Department of Justice grants for predictive policing systems until the DOJ can ensure that grant recipients will not use such systems in ways that have a discriminatory impact."

The letter added: "Predictive policing systems rely on historical data distorted by falsified crime reports and disproportionate arrests of people of color. As a result, they are prone to over-predicting crime rates in Black and Latino neighborhoods while under-predicting crime in White neighborhoods. The continued use of such systems creates a dangerous feedback loop: biased predictions are used to justify disproportionate stops and arrests in minority neighborhoods, which further biases statistics on where crimes are happening."

The senators' letter followed an executive order by President Joe Biden in October setting guidelines for the use of AI by federal agencies that include monitoring for potential biases. According to the White House, the order "establishes new standards for AI safety and security, protects Americans’ privacy (and) advances equity and civil rights."

Detroit resident LaTrece Cash said she's hesitant about the increased prevalence of AI in all areas of life, and has particular concerns about police using the technology.

"AI is scary because it can do a lot of things that humans can't do — in a way, it's like trying to play God," Cash said. "The people controlling AI are putting information into the system that predicts people's behaviors and mimics their actions. If you mix that with policing, it's pretty scary to think about. It'll probably be helpful for the police in a lot of ways, but at what cost?"

Stevenson, a former Livonia police chief, agreed there's potential for misuse of AI technology, but he added: "That applies to anything. Tylenol is good if you take two, but if you take 30, it could destroy your liver. As with any new technology, we always have to look at how it's used and guard against abuses."

Detroit Police Commissioner Ricardo Moore, a former Detroit police lieutenant, said AI has advantages for police work if handled properly.

"Artificial intelligence will help police departments ... with responding to crime and police officer safety," Moore said. "On the flip side, if a department's culture is one of secrecy and stonewalling, it will make police oversight just that much more difficult."

Potential uses envisioned

Police departments in Michigan aren't using AI "to any great extent right now, although we're seeing departments starting to use it in different ways, mostly to crunch data," Stevenson said.

Last year, the Ann Arbor Police Department and the Washtenaw County Sheriff's Office began using Truleo, AI-driven software that analyzes body camera footage to monitor officers' demeanor during interactions with citizens and uses of force.

The Congressional Research Service said in a December report: "In the law enforcement realm, researchers note that while the use of AI is not yet widespread, existing tools may be enhanced with AI to expand law enforcement capabilities and increase their efficiency."

The report cited examples of how police could integrate artificial intelligence into existing systems.

"Automated license plate readers can be leveraged ... (for) the issuance of red-light violation tickets ... security cameras outfitted with certain AI-embedded hardware can be used for real-time facial recognition of potential suspects, (and) facial recognition technology and text analysis tools can be enhanced with AI to scan online advertisements to help identify potential crimes such as human trafficking," the Congressional Research Service report said.

"A number of concerns have been raised about law enforcement use of AI, including whether its use perpetuates biases," the report said. "One criticism is that the data on which the software are trained contain bias, thus training bias into the AI systems. Another concern is whether reliance on AI technology may lead police to ignore contradictory evidence.

"Policymakers may consider increased oversight over police use of AI systems to help evaluate and alleviate some of the shortcomings," the report concluded.

Whitmer in November signed into law a package of election-related bills that included regulations governing the use of AI in political campaigns, but no state legislation has been introduced seeking oversight of police use of the technology.

"A big issue when talking about AI is, it encompasses a wide scope of technologies," the ACLU's Wessler said. "Sometimes companies will use an Excel spreadsheet and try to sell it to police departments as AI, although there's not really any secret sauce; they're just using the language of the day. And then, some systems do use complex machine algorithms that perform tasks that humans can’t do by themselves."

The most recent edition of the Michigan Association of Chiefs of Police online magazine, Michigan Police Chiefs, features an article titled "Policing in the Age of Artificial Intelligence" that discusses possible law enforcement applications. Potential AI uses for cops include crunching data to predict crime trends, combining AI with facial recognition technology to locate missing persons and identify suspects, and using it for virtual training, according to the article.

"AI has the potential to reduce human bias in decision-making processes, such as bail determination decisions," the article said. "By using algorithms, decisions can be based on data, rather than human judgment, theoretically resulting in fairer outcomes. However, this approach is not without its own challenges, as algorithms can inherit bias from the data they are trained on."

The article also discussed moral issues, posing questions: "What are the boundaries of surveillance? How do we ensure the ethical use of AI during investigations? What safeguards are in place to prevent misuse?"

A footnote at the end of the article discloses that the story had been written by the ChatGPT artificial intelligence software in less than a second in response to the query: "Write me a 1,200-word article on the use of AI in police work."

Artificial intelligence is being used across Michigan for non-police security applications. In the wake of the Nov. 30, 2021, Oxford High School massacre, the Oakland County school joined other Michigan schools in adopting ZeroEyes, software that uses artificial intelligence to detect the presence of guns. Eastern Michigan University also adopted the technology last year.

The Michigan Capitol also started using ZeroEyes in November, marking the first time the technology had been used in a state Capitol.

Problems exacerbated?

Facial recognition technology has been criticized because it can produce false hits on people with darker skin, and the ACLU's Wessler said the problem is likely to be exacerbated if AI is integrated into police departments' existing facial recognition systems.

Wessler pointed to Detroiters who were misidentified by facial recognition processes, including Robert Williams, who in 2018 was mistakenly targeted as the man who'd stolen five watches at a Detroit Shinola store; and Michael Oliver, who in 2019 had charges of stealing a cellphone dropped after his attorney proved the software had identified the wrong man.

Williams and Oliver have ongoing lawsuits against the city, as does Porcha Woodruff, whose suit last year claimed she was arrested in front of her two children while eight months pregnant based on "an unreliable facial recognition hit."

"We've seen at least three cases in Detroit where facial recognition technology has misidentified people of color," Wessler said. "We have this same danger with AI, because these systems are heavily influenced by the data they're trained on."

He pointed out that courts in Ohio, Arizona, Alaska, Kentucky and New Jersey are using AI to help set bond for defendants.

"Those systems have come under tremendous criticism because they produce inequities," Wessler said. "You get over-enforcement and arrests in communities of color, because of algorithms that treat people of color more suspiciously. Then, AI is setting higher bonds for people of color.

"More than 20 cities across the country have banned facial recognition software because they recognize this inherent bias in the software," Wessler said. "We need to be similarly careful when we hand over these AI tools to the police.

"Let's slow way down, take a look at the technology, figure out which technology may be a good idea; which may be a good idea in principle but not in practice; which are ill-conceived from the start; and those which aren’t worth spending taxpayer money on when they may not help police and may hurt people," Wessler said.

Earlier this month, Microsoft banned U.S. police departments from using AI for facial recognition on its cloud computing service. Amazon also bars police from using its facial recognition technology.

U.S. Rep. Warren Davidson, R-Ohio, has pushed House Bill 4639, which would "limit the authority of law enforcement agencies and intelligence agencies to access certain customer and subscriber records or illegitimately obtained information." The GOP-controlled House approved the legislation in a 219-199 vote in April; the bill is awaiting action in the Democratic-led Senate.

"There seems to be a push to stop police from trying to catch criminals," Stevenson said.

"This bill would limit police access to license plate reader data in private databases that the general public has access to," he added. "So the public could have full access to this information, but not the police. That defies common sense in my world."

Cash, who lives on Detroit's west side, said she fears AI will take over the thinking — and feeling — process.

"The one thing AI can't do is have human empathy, be compassionate or understand all aspects of a situation," Cash said. "Sometimes you'll get a series of facts that make a situation look like the truth, but in the end, it may not be true. Or it may be a situation where making an arrest might not be the best thing to do right then, even though on paper, it may say otherwise.

"How does AI differentiate in those situations or have compassion for someone, despite what the computer might say?" she said. "We always need the police to keep that human element and be careful that AI doesn't take everything over."

© 2024 The Detroit News. Distributed by Tribune Content Agency, LLC.