ChatGPT has helped criminals generate fake client notes to prop up bogus companies that raked in millions in Medicaid reimbursements for nonexistent services. State leaders are betting on machine learning to parse thousands of claims from providers in hopes of identifying ones that veer from procedures.
The situation — using AI to detect AI — reflects the very modern challenges facing Minnesota officials as they rush to stop a snowballing fraud scandal. They’re using an array of tools to prevent the schemes that trained a national spotlight on the state and helped derail Gov. Tim Walz’s bid for a third term.
Prosecutors have estimated that the total amount of fraud could be more than $9 billion, across 14 high-risk Medicaid programs over seven years, though Walz has called that number speculation. Fifteen people have been charged with defrauding housing and autism programs so far, and more charges could come as state and federal investigations into social services continue.
As the crisis unfolds, some experts are commending Minnesota’s embrace of AI to spot its more sinister applications.
“There’s kind of the old adage, just fight fire with fire,” said Jordan Burris, the head of public sector at Socure, which provides AI-powered fraud prevention software to companies and government agencies.
But others cautioned that AI-driven algorithms can incorrectly flag standard claims as suspect, dinging providers who don’t deserve it.
“They have really impressive results,” said Mona Birjandi, an economist and data analytics director at the New York law firm Outten & Golden. “But they’re not without unintended consequences.”
When two Philadelphia men traveled to Minneapolis to exploit the Midwestern state’s generous selection of social services, they turned to artificial intelligence to get the job done.
Anthony Jefferson and Lester Brown used ChatGPT to generate fake emails and notes discussing clients purportedly enrolled in their Housing Stabilization Services company, court records show. The fabricated documents helped the men steal some $3.5 million from the program for assistance they claimed to have provided to 230 Medicaid beneficiaries.
The so-called “fraud tourists” pleaded guilty to wire fraud in February, in what the Department of Justice says is the first Minnesota case to involve the use of AI to further a fraud scheme.
It’s possible more are on the way.
Drew Evans, superintendent of the Bureau of Criminal Apprehension, said at a Feb. 26 news conference that his agency has noticed an uptick in people wielding artificial intelligence to pull off financial crimes, including using AI-generated voices to impersonate others to steal money.
Prosecutors who handle Medicaid fraud in Minnesota have also recently come across progress notes submitted by mental health providers that appear to be AI-generated, according to a spokesperson for the state Attorney General’s Office.
Burris of Socure said AI has accelerated attacks by bad actors, who no longer need to be highly trained in data science. Almost anyone can harness technologies capable of producing official-looking emails, or rapidly collecting reams of personal information off the Internet for use in applications for government benefits.
As for outmaneuvering modern fraudsters?
“With the AI scams that evolve today,” Burris said, “the only way to get ahead of it is to use AI at scale to combat it.”
Part of a sweeping anti-fraud package that Walz presented Feb. 26 would pump more resources toward using machine learning to identify suspicious billing earlier. If successful, that proposal would build on the Department of Human Services’ work with Optum, the UnitedHealth Group subsidiary the state tapped to conduct an AI-driven review of claims.
Jon Eichten, the deputy commissioner at Minnesota IT Services, said Optum used a “collection of analytics” to parse provider claims to find the ones that veered from policies. That includes everything from providers saying they met with dozens of clients a day to repeatedly billing for the same hours.
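The state has not published how Optum’s analytics work, but the two red flags Eichten describes lend themselves to simple plausibility rules. As a rough, hypothetical sketch (the claim fields and thresholds here are invented for illustration, not drawn from the state’s system), such screening might look like this:

```python
from collections import defaultdict

def flag_suspicious_claims(claims, max_daily_hours=24):
    """Return provider IDs whose claims break basic plausibility rules:
    billing more hours than exist in a day, or billing the same
    time slot twice. A screening pass like this only flags claims
    for human review; it does not establish fraud."""
    daily_hours = defaultdict(float)  # (provider, date) -> total hours billed
    seen_slots = set()                # (provider, date, start time) already billed
    flagged = set()
    for c in claims:
        day = (c["provider"], c["date"])
        daily_hours[day] += c["hours"]
        if daily_hours[day] > max_daily_hours:
            flagged.add(c["provider"])  # implausible total for one day
        slot = (c["provider"], c["date"], c["start"])
        if slot in seen_slots:
            flagged.add(c["provider"])  # same hours billed twice
        seen_slots.add(slot)
    return flagged
```

Real systems layer many such rules with statistical models, but the principle is the same: surface claims that veer from policy for investigators to examine.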
Optum found widespread billing irregularities in an autism intervention program that’s among the 14 Medicaid-funded services officials have called fraud-prone. But Eichten noted that flagged claims aren’t necessarily fraudulent.
AI is a useful screening tool, he said, but it’s up to the social services agency to dig deeper into those initial results to identify wrongdoing. (A spokesperson for the state Department of Human Services said investigators don’t use AI in their post-payment review process.)
And there are potential pitfalls with using algorithms. Patients have accused insurance giant UnitedHealthcare of using a faulty AI program to deny coverage for post-acute care for Medicare patients. The insurer has called the allegations “unfounded.”
Birjandi, the economist, said improperly trained AI-driven fraud detection algorithms run the risk of mistakenly singling out lawful providers, who ought to have a clear process for appealing any initial determinations made by algorithms. Eichten said the state has worked with Optum to continuously improve analytics to prevent the sort of broad-brush flagging Birjandi described.
“We want analytics that send us down the right path toward the investigation, things that are actually representative of fraud, waste or abuse,” he said.
Eichten said fighting fraud requires a cautious, intentional embrace of some of the same tools bad actors harness, turned to good use. Officials, he added, can’t afford to reject AI tools that aren’t perfect.
“If we do that, we are handing the hackers and the fraudsters the significant competitive advantage.”
©2026 The Minnesota Star Tribune, Distributed by Tribune Content Agency, LLC