ACLU Explores Potential Rights Violations from Algorithms, AI

Dubbed the Automated Injustice Project, the group is raising questions about whether safeguards are in place when state officials are relying on AI to make crucial decisions in areas such as health care and justice.

(TNS) — Algorithms and artificial intelligence are embedded in daily life, from social media platforms deciding what posts to show us to the type of preventative care we may receive based on our health histories.

But what happens when crucial government functions are turned over to algorithms without oversight?

“Algorithms absolutely need to be subject to public scrutiny,” said Dillon Reisman, an attorney at the New Jersey chapter of the American Civil Liberties Union, which is launching an investigation into how New Jersey state government uses automated decision-making tools.

Dubbed the Automated Injustice Project, the group is raising questions about whether safeguards are in place when state officials rely on AI to make crucial decisions, from whether a suspect should be jailed before trial to what kinds of benefits Medicaid recipients and domestic violence victims should receive.

In an interview, the ACLU attorney said artificial intelligence and algorithms can be powerful tools to make government run more efficiently but, ultimately, they are designed by humans and vulnerable to the same biases and errors that plague institutions.

“We need people to recognize that these systems are here in New Jersey and even if they aren’t impacted by them today, they will be eventually at some point in their lives,” Reisman said.

“Many forms of algorithms and artificial intelligence have been shown to perpetuate and worsen racial inequity, deprive people of the ability to contest unfair outcomes, and fundamentally change how people interact with the government,” the ACLU wrote in a blog post announcing the project Tuesday.

“Even if algorithms are just a new version of old injustice, they give us new opportunity to fight against that injustice,” Reisman said.

There’s the use of facial recognition software by police, which the Attorney General’s Office recently reined in over concerns about its accuracy and whether it violates civilians’ privacy.

In one recent case, a New Jersey man spent 10 days in jail in a mistaken identity case that involved facial recognition software.

Then there’s the Public Safety Assessment, the tool New Jersey courts have used since statewide bail reform took effect in 2017. While the change led to an overall reduction in jail populations without an increase in crime, critics point out that New Jersey’s jail and prison inmates remain disproportionately Black.
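
Tools like the PSA are, at bottom, point-based scoring systems: a short list of facts about a defendant’s record is mapped to risk scales that judges consult. The sketch below illustrates the general shape of such a system; the factor names loosely mirror the PSA’s publicly documented inputs, but the weights, the score bands, and the `DefendantRecord` and `failure_to_appear_score` names are invented for illustration and do not reflect the actual instrument.

```python
# Illustrative sketch of a point-based pretrial risk score.
# Factors loosely mirror the PSA's published inputs; the weights
# and score bands here are invented and NOT the real instrument.

from dataclasses import dataclass

@dataclass
class DefendantRecord:
    age_at_arrest: int
    pending_charge: bool           # charged while another case was pending
    prior_felony_conviction: bool
    prior_failures_to_appear: int  # within the past two years

def failure_to_appear_score(r: DefendantRecord) -> int:
    """Map record facts to a 1-6 risk band (hypothetical weights)."""
    points = 0
    if r.pending_charge:
        points += 1
    if r.prior_felony_conviction:
        points += 1
    points += min(r.prior_failures_to_appear, 2) * 2
    if r.age_at_arrest < 23:
        points += 1
    # Collapse raw points into a 1-6 scale, as score-band tools do.
    return min(1 + points, 6)

print(failure_to_appear_score(
    DefendantRecord(age_at_arrest=21, pending_charge=True,
                    prior_felony_conviction=False,
                    prior_failures_to_appear=1)))  # prints 5
```

For the real tools, the equivalent of these weights and score bands is often precisely what agencies and vendors have declined to make public, which is the opacity the ACLU’s project is targeting.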

A look under the hood of these systems could either reveal fundamental problems or allay concerns about bias, but government agencies and private entities that design AI and algorithmic systems have resisted efforts to make the underlying code public.

“It’s very hard to ask the right questions when the state is not proactively transparent about the systems they use,” Reisman said.

“Even under the best of circumstances, the government is effectively a black box when it comes to these systems. They should be shared publicly and proactively so you shouldn’t even have to file (a records request).”

In the coming weeks, the group is taking a close look at these criminal justice algorithms, as well as some surprising places where their use could cause harm.

They include the state’s Medicaid budgeting system, which determines the type of health care Medicaid recipients get.

An even more obscure — but crucial — example is the Family Violence Option Risk Assessment, a 118-question form used to gauge the extent to which a domestic violence victim was abused in the past and could face violence in the future. That tool, Reisman said, determines a person’s access to special public benefits meant to help domestic violence survivors escape their current situation and recover.

Such tools are well-intentioned, Reisman said, but without transparency, there’s no way to know from the outside if they’re working.

“We don’t know if these systems treat people fairly and we don’t know if they work how they’re supposed to work,” he said.

© 2022 Advance Local Media LLC. Distributed by Tribune Content Agency, LLC.