
What Does Generative AI Mean for the Justice System? (Part 2)

Lawyers can run into trouble with generative AI, and a few courts have pushed back on its use. Others, however, see the tech as a time-saver. Deepfaked evidence, meanwhile, is a growing concern.

Courts need to consider not only their own use of generative AI, but also potential use by lawyers and other parties submitting evidence.

Lawyers may use the technology for help with research or drafting documents, for example, and over-reliance can be risky because generative AI is known to sometimes fabricate information. Some companies in the legal space, however, are betting that the problem lies chiefly with general-purpose AI tools. They’ve been announcing specialized models trained on legal texts in an effort to reduce fabrications.

Judges also should be alert to other kinds of risks that could emerge from the technology, such as highly convincing AI-created photos, audio or video that could be entered as evidence. At present, these deepfakes may be difficult to detect, although several AI companies have made voluntary promises to develop a system for distinguishing AI-generated media.


In the now-infamous Avianca Airlines lawsuit, a lawyer used ChatGPT to help with research, submitting a legal brief that cited non-existent cases fabricated by AI.

This is, perhaps, an unsurprising outcome. Today’s general-purpose generative AI tools, including ChatGPT, are designed to write well-structured sentences, not produce accurate information, said Chris Shenefiel, cyber law researcher at the Center for Legal and Court Technology at William & Mary Law School.

“It’s designed to predict, given a topic or sentence, what words or phrases should come next,” Shenefiel said. “... It can fall down, because it doesn’t validate the truth of what it says, just the likelihood of what’s to come next.”

When it does not have a clear answer to draw on, the model makes something up. This means the current crop of generative AI tools might help judges or lawyers phrase first drafts or organize their thoughts, but they won’t help ensure the law is applied accurately.
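As a concrete illustration of that point, here is a minimal sketch of next-token prediction using the freely available GPT-2 model via Hugging Face’s transformers library. The model and prompt are assumptions chosen purely for illustration, not tools any court or firm uses; the sketch simply shows that the model ranks likely continuations and never checks whether the case it names exists.

```python
# A minimal sketch of next-token prediction, the mechanism described above.
# GPT-2 and the prompt are illustrative assumptions, not any tool a court uses.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The leading case on this question is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every possible next token

# Rank candidate continuations purely by likelihood. Nothing in this process
# checks whether the case name that follows actually exists.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  p={prob.item():.3f}")
```

Commercial chatbots layer much larger models and additional training on top of this, but the core objective of predicting likely text is the same.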

Retired D.C. Superior Court Judge Herbert B. Dixon recently detailed his own experiences playing with ChatGPT and discovering that it listed inaccurate citations. Dixon tried to determine whether one was completely invented or only misattributed, before finally giving up: “I spent more time trying to track down the source of that quote than writing this article,” he wrote.

Dixon concluded, “Users must exercise the same caution with chatbot responses as when doing Internet research, seeking recommendations on social media, or reading a breaking news post from some unfamiliar person or news outlet. Don’t trust; verify before you pass along the output.”

‘HALLUCINATIONS AND BIAS’


Some courts have already implemented rules around use of generative AI.

One Texas judge issued a directive requiring attorneys to attest either that they had validated AI-generated content through traditional methods or that they had avoided using such tools altogether.

“These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them,” Judge Brantley Starr wrote. “These platforms in their current states are prone to hallucinations and bias … . While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath.”

Scott Schlegel, a Louisiana District Court judge, said he understands why some judges would want policies mandating disclosure of generative AI use, but personally sees this as unnecessary. He noted that courts already require attorneys to swear to the accuracy of the information they provide, under Rule 11 of the Federal Rules of Civil Procedure or similar policies.

Lawyers also need to be careful about entering sensitive client information into generative AI tools, because the tools may not be designed to keep those details private, Schlegel said.

Still, Schlegel believes ChatGPT can help seasoned attorneys, in particular. Such attorneys have developed a sharp ability to review documents for errors. For them, he said, generative AI essentially “is a much more sophisticated cut-and-paste.”

But new lawyers may suffer from using it, Schlegel said. They haven’t yet developed the experience to catch potential issues, and relying on the tool could get in the way of their ever learning the nuances of the law.


General-purpose generative AI pulls information from Twitter, Reddit and other sources that may not lend themselves to accurate legal answers. Specialized generative AI trained on legal texts, however, could do better, Shenefiel said, speaking generally and not pointing to any specific tool.

With this in mind, some companies are striving to create AI tools expressly for the legal sector.

These include LexisNexis’ Lexis+ AI; AI startup Harvey’s Harvey; and Casetext’s CoCounsel, all of which debuted this year. The tools are designed to summarize legal documents and search for legal information, and they are trained to draw on databases of legal information.

Harvey, for example, is based on GPT-4 but limited to drawing from a specified data set rather than the open Internet, per Politico. Such measures aim to reduce mistakes, but caution is still warranted.
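The general pattern those tools describe, grounding answers in a curated body of legal documents rather than the open web, is commonly implemented as retrieval-augmented generation. Below is a rough sketch of that approach under stated assumptions: the legal_search() helper and its document store are hypothetical stand-ins, the OpenAI chat API is used only as an example backend, and nothing here reflects the actual architecture of Harvey, Lexis+ AI or CoCounsel.

```python
# Sketch of retrieval-augmented generation: the model answers only from passages
# retrieved from a curated legal database, not from the open Internet.
# legal_search() is a hypothetical helper standing in for a vetted document store.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def legal_search(question: str, k: int = 3) -> list[str]:
    """Hypothetical helper: return the k most relevant passages from a
    curated database of statutes, opinions and firm documents."""
    raise NotImplementedError("wire this up to your own vetted document store")


def answer_from_curated_sources(question: str) -> str:
    passages = legal_search(question)
    context = "\n\n".join(passages)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Answer only from the provided passages. "
                           "If they do not answer the question, say so.",
            },
            {"role": "user", "content": f"Passages:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Constraining the model to retrieved passages narrows, but does not eliminate, the risk of fabricated citations.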

David Wakeling was leading law firm Allen & Overy’s rollout of Harvey when he spoke to Politico. He said A&O operates “on the assumption that it [Harvey] hallucinates and has errors,” and compared the tool to “a very confident, extremely articulate 12-year-old who doesn’t know what it doesn’t know.”

DEEPFAKE EVIDENCE


Generative AI could also affect courtroom evidence. The technology can already create images and audio that are difficult to distinguish from the real thing, and the same will likely become true for video, Shenefiel said.

This falsified media could then be presented as evidence, with courts struggling to detect the deception.

“I can imagine an allegation of threatening phone calls with a cloned voice,” Schlegel said. “I can imagine a personal injury case where somebody deepfakes a video.”

Texas’ “Generative AI: Overview for the Courts” guidance also raises the concern that such tools could be used to create false but convincing judicial opinions, orders or decrees.

Shenefiel said people should be required to disclose whether they’ve used generative AI in items submitted as evidence, but he noted there are currently very few ways to detect whether evidence was altered or created outright with such tools.

One potential mitigation could be to attach digital signatures or watermarks to content created by AI. Recently, seven AI companies pledged to develop mechanisms for indicating when audio or visuals were created by AI, per a White House announcement.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI made these voluntary commitments, and it remains to be seen whether they will follow through. Digital watermarking would also need to be ubiquitous to be fully effective.
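To illustrate the digital-signature idea, here is a simplified sketch of how a provider could sign the media it generates so that a court or examiner can later check that a file is unchanged and really came from that provider. It uses the Python cryptography library’s Ed25519 keys purely as an example; real provenance schemes, such as C2PA-style manifests, are considerably more involved, and nothing here reflects what the signatory companies have actually committed to build.

```python
# Simplified sketch of signed AI-media provenance: the provider signs generated
# bytes, and anyone holding the published public key can verify them later.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The provider generates a key pair once and publishes the public key.
provider_key = Ed25519PrivateKey.generate()
public_key = provider_key.public_key()


def sign_generated_media(media_bytes: bytes) -> bytes:
    """Provider side: produce a signature shipped alongside the media file."""
    return provider_key.sign(media_bytes)


def verify_provenance(media_bytes: bytes, signature: bytes) -> bool:
    """Court or examiner side: check the file against the provider's signature."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False


audio = b"...synthesized audio bytes..."
sig = sign_generated_media(audio)
print(verify_provenance(audio, sig))                # True: intact and provider-signed
print(verify_provenance(audio + b"tampered", sig))  # False: altered after signing
```

The catch, as noted above, is coverage: a signature only helps if signing is widespread and courts know which public keys to trust.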

This is the second of a two-part series.
Jule Pattison-Gordon is a senior staff writer for Government Technology. She previously wrote for PYMNTS and The Bay State Banner, and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.