House Bill 455 from Rep. Kim Banta, R-Fort Mitchell, would keep AI from making big decisions in a therapy setting. The bill bans large language models like ChatGPT, Gemini and Claude from making “independent therapeutic decisions” or generating “therapeutic recommendations or treatment plans” without review from a licensed therapist, and it bars therapists from handing substantive duties over to AI.
“This bill simply says that you cannot use AI to perform direct therapy,” Banta said in a House floor speech.
Therapists can use AI as a tool, but they can’t delegate their duties of making therapeutic decisions or directly interacting with clients to AI, Banta said. The bill also requires therapists to tell patients how they use AI in their practice and to get the patients’ consent.
The bill passed the House 88-7. The only no votes came from members of the informal Liberty wing, a libertarian-leaning group of Republicans often opposed to regulation. Rep. Steven Doan, R-Erlanger, was one of them.
“We just saw that as increased regulations on AI. At the end of the day, these things are programmed by people. People are going to have a say over what they can and can’t do anyway. It’s not to say I think everybody should go and get AI therapy, it’s more that the marketplace should be open, and we should be receptive to change and innovation,” Doan told the Herald-Leader in an interview.
Rep. Lisa Willner, D-Louisville, a therapist by trade, successfully added an amendment to the bill on the House floor.
“Preserving the relationship between the therapist and the client — human-to-human, how critically important that is for the therapeutic process. We know how important these guardrails are. We’ve all seen and read horror stories where therapy bots have led clients to some very dark and sometimes deadly places,” Willner said.
Willner was referencing several stories of large language models encouraging people’s delusions and otherwise harming their mental health. The models have been implicated in a handful of suicide cases.
The bill changed somewhat from its introduced version through Willner’s floor amendment, as well as a committee substitute that exempted self-help materials and educational resources.
Companies like OpenAI have tried to address those concerns by tweaking their models, working to make them less encouraging of delusions and to direct more users toward local mental health resources.
© 2026 Lexington Herald-Leader. Distributed by Tribune Content Agency, LLC.