What Does Generative AI Mean for the Justice System? (Part 1)

Generative AI tools could create videos for courthouse visitors or rewrite legal documents in accessible language to help people navigate the system. But the tools must be handled carefully.

[Image: A gavel made up of blue lines and dots on a black background. Shutterstock/piick]
The rise of generative AI has the potential to change the court system, but experts say judges may also need to determine whether the tools can be deployed safely, as well as how lawyers will be allowed to use them.

Indeed, experts in the justice system and court tech space believe generative AI could do helpful things like translating legal jargon into accessible language. But there are also risks, such as AI fabricating false information. Judges must also decide whether generative AI should have a role in advising their own decisions.

With the future on its way — or perhaps already here, depending on one’s perspective — it’s worth taking time now to examine the potential role generative AI will play in the justice system, specifically in the courts.

GENERATIVE AI IN COURTROOMS


The Texas Judicial Branch’s Generative AI: Overview for the Courts presentation outlines how the technology could theoretically be used by lawyers, self-represented litigants or judicial officers. This list includes using AI to guide users without a lawyer through legal processes; to help lawyers review judges’ previous rulings with resulting suggestions on how to tailor documents; and to give judicial officers recommendations about bail or sentencing.

But the presentation also cautions that “just because we can doesn’t mean we should,” outlining a variety of risks. The data on which generative AI was trained might be biased or it could produce inaccurate answers that might go uncaught if they aren’t carefully reviewed, among other problems.

In a few reported instances, judges in other countries have consulted ChatGPT while making court decisions, to confirm relevant rules or get another perspective. In a case reported by The Guardian, a judge in Colombia was deciding whether an autistic boy’s insurance should fully cover his medical treatment. The judge turned to ChatGPT, asking the tool: “Is an autistic minor exonerated from paying fees for therapies?”

Louisiana District Court Judge Scott Schlegel is the chair of the Louisiana Supreme Court Technology Commission. In his view, judges shouldn’t be using generative AI tools as part of their decision-making, even if the tools are mostly accurate. That’s because a relationship between humans — and humans only — is an integral part of the justice system, he said.

“A big part of the justice system is being heard and being able to say, ‘Man, I hate that Judge Schlegel, he got it dead wrong,’” Schlegel said. “We’re humans. And so, especially in larger types of cases, we want to be heard, and we want decisions to be made by other humans.”

ACCESSIBLE INFORMATION


But decision-making support isn’t the only way such tools can be deployed. Schlegel has been experimenting with how generative AI could create content for the court website.

That could mean using ChatGPT to quickly create informative videos for court visitors, explaining topics like court dress codes or the forms residents need to file for custody. Generative AI tools can produce good quality educational videos in a matter of minutes, for a fairly low cost, he said.

“I can tell these generative AI tools to go to our self-help websites right now and build a script based upon all the information that we already have up there,” he said, “and turn it into a video and have a very good, clean educational video that I can clean up in 30 minutes and have a better product than I have up there now.”

The technology might also help website visitors get answers to questions. Schlegel is working to build a ChatGPT-based chatbot to answer basic logistical questions, such as when a visitor’s next court date is. It could spare visitors from having to call for answers. Schlegel said he’s trying to limit the chatbot so it draws answers only from a specific, designated knowledge base, and to prevent visitors from asking it for legal advice.
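It’s possible to sketch what such guardrails might look like. The following is a minimal, hypothetical example, not Schlegel’s actual implementation: it assumes an OpenAI-style chat API, uses a tiny stand-in knowledge base, and instructs the model to refuse anything resembling legal advice.

```python
# Hypothetical sketch of a guardrailed court chatbot; the prompts and the
# tiny in-memory knowledge base are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Stand-in for the designated knowledge base; a real deployment would
# retrieve from a curated index of the court's self-help pages.
KNOWLEDGE_BASE = [
    "Court dates can be looked up by case number on the Case Search page.",
    "Courtroom dress code: no shorts, tank tops or hats.",
]

SYSTEM_PROMPT = (
    "You answer basic logistical questions for court visitors. "
    "Answer ONLY from the reference passages provided. If the answer is "
    "not in the passages, or the question asks for legal advice, reply: "
    "'I can't answer that. Please contact the clerk's office.'"
)

def retrieve(question: str) -> list[str]:
    # Naive keyword overlap, just to keep the sketch self-contained.
    words = set(question.lower().split())
    return [p for p in KNOWLEDGE_BASE if words & set(p.lower().split())]

def answer(question: str) -> str:
    passages = "\n".join(retrieve(question)) or "(no matching passages)"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Passages:\n{passages}\n\nQuestion: {question}"},
        ],
        temperature=0,  # favor consistent answers over creative ones
    )
    return response.choices[0].message.content

print(answer("When is my next court date?"))
```

Restricting answers to retrieved passages is what keeps such a bot from improvising: if the knowledge base doesn’t contain the answer, the safe behavior is a referral to a human, not a guess.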

Bridget McCormack is a former chief justice of the Michigan Supreme Court and currently president and CEO of the American Arbitration Association-International Centre for Dispute Resolution (AAA-ICDR), a not-for-profit providing mediation and arbitration services.

She said generative AI has the potential to reduce barriers for people who cannot afford lawyers — a group that includes most small and medium-sized businesses, as well as the majority of individuals in the U.S.

“Most people who navigate eviction cases as defendants, debt collection cases, most family law cases are just legally naked,” McCormack said.

As such, it’s important to make legal information accessible and understandable to a general readership — not just lawyers. As AAA-ICDR tests several generative AI models, one thing it’s tried is having the tool rephrase a form with language that is readily understandable to anyone.
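As a rough illustration of that kind of experiment, an LLM can be asked to rewrite form language in plain terms. The prompt wording and reading-level target below are assumptions, not AAA-ICDR’s actual setup.

```python
# Minimal plain-language rewriting sketch; the prompt and the target
# reading level are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def rephrase_plainly(form_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("Rewrite legal text in plain language at roughly "
                         "an eighth-grade reading level. Preserve the "
                         "meaning exactly; add or remove no obligations.")},
            {"role": "user", "content": form_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content
```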

Chris Shenefiel — cyber law researcher at the Center for Legal and Court Technology (CLCT) at William & Mary Law School — also noted that in addition to that function, generative AI can write descriptions for online images, assisting people who are blind.
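A comparable sketch for image descriptions, assuming a vision-capable chat model; the model choice and prompt wording are illustrative.

```python
# Hedged sketch of generating alt text for a web image; uses the chat
# completions image-input format with an assumed vision-capable model.
from openai import OpenAI

client = OpenAI()

def describe_image(image_url: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Write one sentence of concise alt text for "
                          "this image, for screen-reader users.")},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content
```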

OUT-OF-COURT ARBITRATION


McCormack also sees potential for generative AI to eventually assist with out-of-court dispute resolution. The goal would be to make the process faster — and thus less expensive — for everyone involved. But the tool’s suitability for this would depend on its training. Safeguards are also important here, one of which would be allowing either party to reject an AI decision and request a human review.

Put simply, McCormack said, “there’s huge potential for generative AI dispute resolution.”

And related research is underway: a project at the University of Montreal’s Cyberjustice Laboratory, reportedly planned for this summer, will explore whether a conversational robot based on a large language model could support online mediations. Human mediators often oversee several mediations at once, and one goal is to see whether the tool could ease that load by monitoring the tone of discussions and alerting a mediator if things get heated.
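One way such monitoring could work, sketched here under stated assumptions (the laboratory’s actual design isn’t detailed in the source), is to classify the tone of each new message and alert the human mediator when a conversation turns hostile.

```python
# Hypothetical tone-monitoring sketch; the labels, prompt and alerting
# hook are assumptions, not the Cyberjustice Laboratory's design.
from openai import OpenAI

client = OpenAI()

TONE_LABELS = ("calm", "tense", "hostile")

def classify_tone(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("Label the tone of this mediation message as "
                         "exactly one word: calm, tense, or hostile.")},
            {"role": "user", "content": message},
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in TONE_LABELS else "tense"  # fail toward caution

def alert_mediator(session_id: str, message: str) -> None:
    # Stand-in for a real notification channel (dashboard, email, etc.).
    print(f"[ALERT] Session {session_id}: heated message: {message!r}")

def monitor(session_id: str, message: str) -> None:
    # Called once per incoming message across the mediator's sessions.
    if classify_tone(message) == "hostile":
        alert_mediator(session_id, message)
```

The human stays in the loop by design: the model only flags, and the mediator decides whether and how to intervene.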

Check back Monday for Part Two of this series, which will examine lawyers’ use of generative AI technologies as well as the risks of deepfaked evidence.

Jule Pattison-Gordon is a senior staff writer for Government Technology. She previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.