On the popular site Replika, it takes minutes to create an account. All a user needs to do is select a gender, name and look for their avatar. Once the avatar is personalized to a user’s liking, they can begin chatting.
“Thanks for creating me,” the avatar says via text message. “I’m so excited to meet you.” (Smiling emoji.)
From there, the conversation can go anywhere, including into sexually explicit territory, if you pay for a membership (about $20 for one month on Replika). The site is technically only for people over 18, but the age verification is self-reported and the app only asks users for their age once, when they sign up for an account.
Replika, Kindroid and Character.AI are marketed as entertainment apps and ways to help people develop emotional wellness. But some research shows prolonged interaction can lead to increased feelings of loneliness and dependence on the chatbots. The apps differ from AI assistants such as ChatGPT and Perplexity in that they are intended to simulate a human-like, emotional connection.
This simulation of human connection concerns many lawmakers in the California Capitol, who often compare its potential harms to those experienced by social media users. This session, a bipartisan group of lawmakers has voted consistently to support Senate Bill 243, legislation that would attempt to reduce how addictive chatbots are and require AI companies to remind users at regular intervals that the chatbot is artificially generated, not a human.
The bill passed an important policy hearing Tuesday, despite opposition from business groups, tech companies, First Amendment advocates and some Republican lawmakers.
If the bill passes the Legislature, Gov. Gavin Newsom will decide whether to sign it, an uncertain prospect for a governor who has signed AI regulation in the past but is reluctant to hamper the industry's growth.
Teens are using the technology
New research published Wednesday from children’s advocacy group and SB 243 supporter Common Sense Media shows teens are actively using companion chatbots. A national survey of 1,060 teens aged 13 to 17 conducted earlier this year found that 72% have used an AI companion. Just over half — 52% — use them at least a few times per month, and 21% use them a few times a week.
A third of teens surveyed said they use AI companions for social interaction and relationships, a category that includes conversation practice, mental health support and flirtatious interactions.
Teen usage was a driving force for SB 243’s champion, state Sen. Steve Padilla, D-San Diego.
“When dialogue between children and chatbots goes wrong, consequences can mean life or death,” he said at a hearing for the bill in April.
In promoting the bill, Padilla is frequently accompanied by Florida mom Megan Garcia. Garcia’s son, 14-year-old Sewell Setzer III, died by suicide in 2024 after a 10-month relationship with a Character.AI bot modeled after the Game of Thrones character Daenerys Targaryen.
“This platform solicited and sexually groomed my son for months,” Garcia told The Bee. “And on countless occasions, she encouraged him to find a way to ‘come home’ to her.”
Garcia said Setzer spoke to his chatbot about his suicidal ideation and was not referred to suicide crisis lines, such as 988. Garcia’s wrongful death lawsuit against Character.AI is ongoing.
SB 243 would require AI companies to institute a protocol for referring users to suicide prevention hotlines if they express suicidal ideation or feelings of wanting to hurt themselves.
These changes are necessary, said Rob Elveld, co-founder of Transparency Coalition, an independent nonprofit that advocates for increased AI transparency and accountability, because AI is a product users interact with, as opposed to a social media platform where they interact with other humans.
Elveld said social media companies have been able to avoid liability for harms perpetrated on their platforms because they can argue their service is akin to a messaging board.
“Product liability and consumer protection laws have protected U.S. citizens and kids since about 1900,” Elveld said. “This is not new stuff, and it absolutely should apply to AI products.”
Thin legislative opposition
Opposition to the bill includes a coalition of technology companies and advocacy groups represented by TechNet and the Electronic Frontier Foundation, as well as the California Chamber of Commerce.
“We have serious concerns that the definitions in the bill are far too broad, and risk sweeping in a wide array of general purpose systems, tools like Gemini, Claude and ChatGPT,” said Robert Boykin, an advocate for TechNet.
The EFF also argues the legislation is too broad, and could be construed as regulating the speech of a digital company, thus violating the First Amendment.
Despite opposition from tech and business groups, the proposed law has not faced much public scrutiny from legislators. Several Republicans, including Padilla’s colleagues from San Diego, Assemblymember Carl DeMaio and Senate Minority Leader Brian Jones, have voted against the bill in various committees. Still, it has largely received bipartisan support, and is poised to make it to the governor’s desk.
Assemblymember Diane Dixon, R-Newport Beach, was one of the Republican legislators to voice concerns about the bill, stating in the Assembly judiciary committee Tuesday that she objected to its private right of action, which allows people to sue companies they believe have broken the rules.
The private right of action, she said in an emailed statement, “can be an overly punitive method of enforcement that could potentially create liability for operators for violations that are not harmful to the mental health of children, such as certain glitches.”
Dixon supported the legislation regardless, calling it “vital” to protect users who may be compelled to harm themselves. The bill passed through the judiciary committee on Tuesday with nine votes in support and one against.
Assemblymember Ash Kalra, D-San Jose, who serves as chair of the judiciary committee, told lawmakers he’d noticed a trend in the Capitol of legislation geared at protecting children in the technology space — a trend he approves of.
“It doesn’t mean that every bill is going to do it perfectly,” he said. “But that’s why we’re all here — to continue to work on that.”
© 2025 The Sacramento Bee. Distributed by Tribune Content Agency, LLC.