The proposed measure, formally unveiled Friday during a virtual press call by Common Sense Media, with public backing from OpenAI, would impose requirements on AI products used by minors covering age assurance, data protections, parental controls and independent safety audits. It has not yet been introduced as a bill in the state Legislature.
Speakers from Common Sense Media, a nonprofit that rates media and technology for parents and educators, and the company OpenAI described the measure as a response to both growing parental concern and lessons learned from the earlier legislative effort, Assembly Bill 1064, which left Gov. Gavin Newsom’s desk in October without a signature.
“We truly believe in and support the best interests of kids and families,” Jim Steyer, founder and CEO of Common Sense Media, said during the call. “And we need to put these critical protections, these seat belts in place right now. Our kids deserve nothing less.”
WHAT WAS AB 1064?
Last year, Common Sense Media sponsored AB 1064, a youth AI protection bill written by California Assemblymember Rebecca Bauer-Kahan, D-Orinda. The bill cleared the state Legislature last fall, but Newsom declined to sign it.
According to his veto letter, Newsom’s primary concern was that AB 1064 would result in a de facto ban on chatbot use by minors, rather than allowing young people to “learn how to safely interact with AI systems.”
“The types of interactions that this bill seeks to address are abhorrent, and I am fully committed to finding the right approach to protect children from these harms in a manner that does not effectively ban the use of technology altogether,” Newsom wrote.
On Friday’s call, Common Sense Media leaders characterized the new proposal as a recalibrated approach to AB 1064 — one that retains strong safety guardrails while shifting away from feature-based restrictions that could limit youth access to AI entirely.
“This new measure really has the same intent as our original measure,” Robbie Torney, Common Sense Media’s senior director of AI programs, said. “It articulates a really comprehensive set of safety standards that altogether we believe accomplish the same goal.”
NEW PROPOSAL’S REQUIREMENTS
Bruce Reed, Common Sense Media’s head of AI, outlined a set of requirements that would collectively impose new standards on AI systems used by minors, including tools increasingly marketed to schools as tutors, writing assistants, study aids and classroom supports.
At the center of the proposal is age assurance. Reed said AI companies would be required to determine whether a user is under 18, and to apply child protections when age cannot be confirmed with certainty.
For districts, that standard could shape procurement decisions and acceptable-use policies, particularly for platforms used both in and outside the classroom.
The proposal would also prohibit child-targeted advertising and restrict the sale of minors’ data without parental consent, extending those protections to all users under 18, Reed said. The California Consumer Privacy Act’s comparable restrictions currently apply only to consumers under 16. Such a provision could have direct implications for ed-tech vendors whose models rely heavily on personalization and engagement analytics, particularly for tools used by middle and high school students.
Beyond privacy, Reed emphasized a set of safety requirements tied to student well-being. The proposal would require safeguards to prevent AI systems from generating or promoting content related to self-harm, eating disorders, violence and sexually explicit acts.
“It also prevents manipulating kids by creating emotional dependence, simulating romantic relationships, or making child users think that they’re talking to a human,” Reed said, noting that companion chatbots are generally built around sustained interaction, personalization and tone.
The ballot proposal, Reed said, would also require AI companies to provide parents with “powerful, easy-to-use parental controls,” including the ability to monitor and limit AI use and to receive alerts when systems detect signs of self-harm. Reed also highlighted controls that let parents set time limits and disable memory features: “Turning off memory makes every chatbot exchange a fresh start,” he said. “The risks of dependency and manipulation increase over time.”
Moreover, oversight would extend beyond product design: if enacted, the law would mandate annual risk assessments and independent, third-party audits of child safety risks, with audit results reported to the California attorney general.
According to Reed, testing chatbots for safety must be consistent and ongoing because AI systems and products are constantly changing. For ed-tech vendors, that may introduce recurring compliance obligations tied specifically to child safety, while districts could increasingly expect evidence of audits or risk assessments as part of vendor vetting.
Chris Lehane, chief global affairs officer at OpenAI, framed the company’s support as both an alignment of values and a signal about the direction of AI governance.
“AI knows a lot, parents know best,” Lehane said, describing what he called the organizing principle behind OpenAI’s involvement. “Our aspiration is that this will not just be in California. This can be a model for other states ... potentially even at the federal level.”
THE ROAD AHEAD
Speakers on the call said they are pursuing a dual-track strategy: pushing for legislative action in Sacramento while preserving the option of a ballot initiative, if needed.
The proposal outlines a regulatory framework that could shape how districts evaluate AI tools, how vendors design youth-facing products, and how student safety and AI readiness are balanced in classrooms statewide.
“It is not a political partisan issue,” Steyer said. “All parents out there, all voters out there, pretty much everybody knows we need really serious protections for kids and teens and families as this goes forward.”