One prosecutor has been relieved of all case-related duties, and the office risks being sanctioned by a state appeals court, the documents show.
The errors in Nevada County mark a sharp escalation from just a few months ago, when District Attorney Jesse Wilson told The Bee that a prosecutor in his office had submitted a single brief that contained AI-generated errors, which he said were quickly fixed.
“The office acknowledges that it was not fully prepared for the emerging risks presented by generative AI — including the prevalence of its use, the extent to which such tools may affect the accuracy of legal work product, and the difficulty of detecting deceptively plausible fabrications without careful scrutiny,” Wilson wrote in a brief submitted to the California 3rd District Court of Appeal.
Nevada County’s admission comes as AI programs are under close watch around the globe for introducing incorrect legal references in cases, yet also increasingly offered as tools to help lawyers and judges research and write their briefs, motions and opinions.
In March, the Superior Court of Los Angeles County said it was participating in a pilot program that used AI to help judges craft decisions in some cases.
The technology is sweeping almost every profession while also entering people’s personal lives, sometimes with serious consequences. The errant or confabulated information that AI tools can generate is often referred to as “hallucinations.” An error in a legal brief could lead to an innocent person’s incarceration. In health care, errors in AI analysis of medical necessity have led to denial of care. And more than one lawsuit has accused AI chatbots of encouraging people to kill themselves.
Last year, a federal court judge in Sacramento fined a public defender $1,500 after he submitted a brief that referred to nonexistent case law and then insisted it was nothing more than a minor error. Over the past three years, more than 800 U.S. legal cases involved false citations linked to artificial intelligence, according to an online tracker by French researcher Damien Charlotin.
“It’s always been the responsibility of the lawyer who is submitting a document to the court to verify that the authority that they are presenting is real,” said Mary-Beth Moylan, who teaches professional responsibility at the University of the Pacific McGeorge School of Law. “We have an ethical duty of candor to the court.”
Law schools are scrambling to develop course material to help students manage the lures and pitfalls of using AI, but the course material is so new that young lawyers who graduated more than 18 months ago may never have discussed the ethics or the practicalities of using it, she said.
“You can’t do this,” she tells students, sharing stories of legal work containing faulty AI-generated material that has not been fact-checked. “You are going to be in trouble. You’re going to be sanctioned, you’re going to lose your job.”
Abnormalities in legal filings
Questions began to emerge about the Nevada County cases last summer, after a judge noticed abnormalities in references to legal precedent in a brief filed by a prosecutor. Soon after, another prosecutor in the office filed a brief that the public defender’s office and a civil rights law firm they were working with quickly noticed contained similar faulty references.
The Nevada County public defender’s office and the nonprofit Civil Rights Corps asked the state appeals court to investigate the matter, and to consider sanctioning Wilson’s office for submitting the false citations. The appeals court denied the request, and the public defender’s client, Kyle Kjoller, was ultimately convicted on several felony firearms charges.
But in January, the state Supreme Court ordered the 3rd District Court of Appeal to take another look at the case, and to demand that the Nevada County district attorney’s office explain, on the record, why it shouldn’t be sanctioned.
The response filed by the DA’s office takes a fairly contrite tone, admitting that briefs filed in four different cases contained inaccurate AI-generated information that was not properly checked or researched by the attorneys who submitted them.
It also admits that “minor or non-substantive citation issues were occasionally identified” in an audit that the office conducted of briefs filed over the past 18 months, but said that those errors did not appear to have been caused by AI.
The briefs that contained erroneous legal citations included the response to a writ of habeas corpus regarding bail in Kjoller’s case, and a response to a motion to suppress evidence in the case of Kalen Turner, who was accused of five felony and two misdemeanor drug counts, Assistant District Attorney Lydia Stuart wrote in a declaration submitted with the brief. In addition, motions in which the prosecution opposed mental health diversions for two defendants contained similar errors, Stuart wrote.
“During an approximate four-week period in August and September 2025, two deputy district attorneys submitted filings containing significant citation errors to include improper legal citations, citations to cases that did not exist, and attributions to cases that did not support the proposition for which they were cited,” Stuart said in her declaration, which was also signed by Wilson.
In Kjoller’s case, “the people acknowledge the pleading contains ‘all the markings’ of errors created by generative artificial intelligence (AI) tools,” Wilson wrote.
Since discovering the problems, he said, his office has made clear that attorneys must check all references in research that has been assisted by artificial intelligence. In addition, the office has since begun using an AI program provided by a verified legal research service, which he believes will lessen the possibility of errors.
The office has held trainings on the use of AI in legal cases and has removed one of its prosecutors from all casework pending an investigation.
“The people recognize that the risks associated with AI are particularly serious in criminal matters, where the rights of a criminal defendant are directly implicated and prosecutors bear heightened responsibilities as ministers of justice,” he wrote.
Should the DA be sanctioned?
But the Washington, D.C.-based Civil Rights Corps, which signed on to represent Kjoller in a bail appeal and has taken over the case from the public defender’s office, filed its own brief late Monday arguing that the office should still be investigated and possibly sanctioned.
The organization accused Wilson and Stuart of not being forthcoming about the AI errors in the cases. It also cited an email Stuart wrote to public defender Thomas Angell saying filing a frivolous appeal could open him up to sanctions, a statement that he took as a threat, according to prior court filings.
“For months, the office appears to have misled courts, counsel and the public,” attorneys Peter Santina and Carson White wrote in their brief.
An investigation is needed, they wrote, to determine what really happened and to consider sanctions against the attorneys involved. The court still does not know, they said, what AI programs, if any, were really used, or when supervisors knew of the errors or any misconduct.
Increasingly, judges are opting to sanction lawyers who submit briefs tainted by AI errors, Moylan said, sometimes fining those who refuse to admit wrongdoing or referring them to their state’s bar association for disciplinary actions.
“Some courts are going to allow and expect people to use it, but then you have to use it ethically and responsibly,” she said, which means thoroughly verifying any case law or language the tools suggest.
Firms like Westlaw and LexisNexis that provide legal research tools for lawyers have debuted their own AI products, which they say are safer than using models made for use outside the legal field.
But such tools can also be perilous, Moylan said. They may pull their research from a better set of data, but they can still generate inaccurate information, and they may also fail to recognize when judges are emphasizing or downplaying parts of their rulings.
In California, the home page of the state Supreme Court highlights a warning:
“Using AI for your court case?” it says in large type, with a link to a series of tips for ethical and responsible AI use. “Read this first.”
© 2026 The Sacramento Bee. Distributed by Tribune Content Agency, LLC.