
Old AI Battles May Offer Cautions About New Technology

Former ACLU of Idaho Legal Director Richard Eppink said at a U.S. Senate hearing that a lack of public transparency and other factors led to damaging effects when the state tried to use algorithms to determine Medicaid funding.

Richard Eppink, former ACLU of Idaho legal director, discusses algorithmic risks during a May 2023 Senate hearing. (Screenshot)
As government leaders at all levels wrestle with the promises and risks of artificial intelligence, one advocate told a recent hearing that past legal battles may offer valuable cautionary lessons.

State and federal officials alike are looking to understand where AI can offer efficiency, as well as how best to oversee related technologies to prevent risks such as discriminatory impacts and damage from inaccurate results. While newer, advanced forms of AI like GPT-4 have been making headlines and prompting some technology experts to sound alarms, automated decision-making systems and their risks aren’t all new. There are lessons to be learned from how older, simpler algorithmic systems have gone wrong, too.

Former ACLU of Idaho Legal Director Richard Eppink has spent years in legal battles over Idaho’s use of algorithms to allocate Medicaid benefits, and he testified about those experiences during a U.S. Senate hearing Tuesday. His experience highlights the kinds of vetting and transparency safeguards that could help curb harms from both advanced AI and the more basic formulas Idaho had used.

In 2012, Eppink began representing Idaho residents with intellectual and developmental disabilities whose state-administered health-care reimbursement allocations were slashed based on the recommendations of decision-making algorithms.

Courts would later side with the ACLU: A federal court issued an injunction in 2014, restoring funds, per the Spokesman-Review. In 2016, a court ruled in the ACLU’s favor and required the Idaho Department of Health and Welfare to change its processes, per North Idaho paper the Coeur d’Alene/Post Falls Press.

But it was a challenging process to get there, and the matter isn’t settled: “The case continues today as we contend against a proposed new system that repeats some of the same problems the old system had,” Eppink said in written testimony.

Each year, the state calculates a maximum budget for each eligible resident’s health care, based on its assessment of their needs. In 2011, many recipients found their allocated budgets significantly lower than the prior year’s, in some cases by 30 percent or more.

It was a struggle just to uncover that algorithms were behind the new funding decisions, let alone to discover how they worked, Eppink said. State attorneys had argued that their method for deriving individual budget allocations from resident assessments was a “trade secret,” and his clients had to sue to see the data the algorithms were built on.

“It turned out to be just a handful of formulas coded into a basic Microsoft Excel spreadsheet,” Eppink said to U.S. senators. “As rudimentary as it was, it still took us many months, three experts, and over $40,000 to reverse engineer the system, catalog its flaws and assess the harm that its results could wreak upon our clients.”

That experience points to one lesson for AI governance: Organizations should be required to disclose when and how they’re using automated decision-making systems, and how those systems work. Otherwise, it’s extremely difficult for the people subjected to the tools to dispute the results.

Eppink learned that the state used two systems to assess residents. The systems’ results would then be plugged into those Excel-based formulas to calculate funding amounts. The state developed the formulas and one of the assessment systems in-house, while procuring the other. Several concerning findings emerged.

“Out of the data the state compiled to compute the statistical formulas at the heart of its system, more than two-thirds of the records were either plainly erroneous, mismatched with the agency’s systems or contained incomplete or unbelievable information,” Eppink said in written testimony. The data also underrepresented a major Idaho region, skewing results.

Plus, the systems seemed not to have been thoroughly tested.

“We found that it was built out of corrupt data, relied on inputs that the state never validated and produced results that even those who created it could not explain,” Eppink told senators.

This, in part, points to a need for agencies to establish regulations and standards for any potential use of AI systems in their programs, including standards for auditing such systems both before and after implementation.

Idaho’s second assessment instrument was proprietary, and assessors received a booklet that, per Eppink, advised them “in completing the tool” and detailed each person’s individual scores. The company behind the system sought to prevent Medicaid recipients from seeing that booklet, however. This meant recipients lacked the information they needed to contest its calculations and point out errors. Similar issues have come up again in a new and ongoing case.

“My clients have a right to the same information the government does to evaluate these systems and to challenge their results,” Eppink said. “Private contractors’ proprietary interests can never be allowed to trump individuals’ due process and equal protection rights.”

In his view, many of the issues at play boil down to automated systems being kept opaque, which makes them difficult to contest, let alone understand, and to their being designed, implemented and assessed without input from the people subjected to them. Those affected have the most at stake and are likely to have the most insight into the systems’ impacts.

“The experts on these systems are the people that the systems make decisions about,” Eppink said. “I’ve worked on this litigation in Idaho for 12 years now, almost. And I have worked with agency officials. I’ve communicated with federal overseers. I've communicated with the courts. But it is, time and time again, my clients who have been able to spot the most important systemic problems with these systems.”
Jule Pattison-Gordon is a senior staff writer for Government Technology. She previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.