
Connecticut Supreme Court Asked to Dismiss Case Due to AI


(TNS) — Questions about fake legal citations created by artificial intelligence and overlooked due to lawyers' lax proofreading are before the Connecticut Supreme Court.

A central case involves a landlord's attempt to evict a Middletown tenant who objected to a rent hike and was backed by the town's Fair Rent Commission. Lawyers for the Brooklyn, N.Y.-based landlord submitted a brief to a lower court containing "hallucinatory" citations created by generative AI, according to a brief to the Supreme Court that included work by students with the Yale Law School-based Jerome N. Frank Legal Services Organization.

For example, a quoted phrase in a citation by the plaintiff's lawyers, Wallingford-based GLG Law LLC, does not appear in the cited case "nor has any other court ever written this phrase," the brief says.

The promise and pitfalls of AI have been much-discussed in legal circles across the nation.

In December, the American Bar Association's Task Force on Law and Artificial Intelligence released a report that includes guidance and recommendations meant to "uphold our profession's core values of competence, integrity, and public trust," the organization's immediate past president, William R. Bay, wrote in the introduction.

"The future of our profession will be shaped by how we meet this moment," Bay wrote. "Together, we will work to promote a future where AI serves both our clients and the public good."

The use of falsely generated citations, according to the brief by the Yale-based legal services organization, "is dangerous as it suggests nonexistent precedent for the plaintiff’s arguments."

"Such citations — in these cases and beyond — are unfair to opposing parties," the brief says. "Falsely generated case citations may be difficult to identify and detect, particularly when they are rampant throughout a 60-page brief. They especially harm self-represented or disadvantaged parties, who may not have the time and resources to detect such inaccuracies and may assume that their opposing counsel’s brief accurately represents case law and is written in good faith."

"The plaintiff’s conduct ought to result in the dismissal of its appeal and sanctions to deter such conduct in the future," the brief says.

In a memo to the Supreme Court, attorneys for GLG Law acknowledged errors in their brief due to the use of generative AI and their failure to properly proof citations.

AI was used to help organize, format and review the brief, the GLG attorneys wrote.

"Unfortunately," they wrote, "Counsel did not notice that AI had intuitively made changes to the brief prior to filing."

The law firm has taken steps to ensure such mistakes are not repeated, including a review process involving more than one attorney, the lawyers wrote.

"Counsel takes this situation very seriously and deeply regrets that these errors occurred and for any and for any inconvenience to the Court and all Counsel," the GLG lawyers wrote.

The Middletown landlord-tenant dispute is not the only AI case in Connecticut. The attorney for a plaintiff in a Greenwich-based breach of contract lawsuit contends the defendant's lawyer used artificial intelligence in court filings that were rife with bogus and "hallucinated" case law.

At its Feb. 9 meeting, the Rules Committee of the Connecticut Superior Court considered requiring attorneys who use AI programs in their legal research to certify that they have independently verified the accuracy of citations, according to meeting minutes. No action was taken.

In a notice to lawyers and litigants, the U.S. District Court in Connecticut warned against using AI without verifying accuracy, "like any other shoddy research method from other sources or tools."

The federal court, the notice says, "has a no-tolerance policy for any briefing (AI-assisted or not) that hallucinates legal propositions or otherwise severely misstates the law."

© 2026 Journal Inquirer, Manchester, Conn. Distributed by Tribune Content Agency, LLC.