As Regulation Ban Looms, California Issues Frontier AI Study

The California Report on Frontier AI Policy lays out regulatory principles prioritizing transparency and risk mitigation. It arrives as federal lawmakers consider a 10-year moratorium on state artificial intelligence regulation.

The group of academic experts tasked with studying safe and ethical policy for “frontier” artificial intelligence models has released its final report nearly a year after being convened by California Gov. Gavin Newsom.

For industry, The California Report on Frontier AI Policy offers a glimpse at where state leaders and lawmakers could soon be putting more regulatory energy, assuming they won’t be hobbled by the federal government.

The release of the report comes at a time when federal lawmakers are considering the passage of President Donald Trump’s spending package — dubbed the “big, beautiful bill” by the president and its backers — which includes a moratorium on state-level AI regulations.

“Well-crafted policies can simultaneously fulfill this obligation to consumers, allow states to carefully tailor policies to the specific needs of their constituents, and maintain critical pathways for federal action that provide a comparable degree of protection to consumers,” the authors of California’s report wrote. “In pursuing this balance between innovation and safety, California has a unique opportunity to productively shape the AI policy conversation and provide a blueprint for well-balanced policies beyond its borders.”

The report outlined key principles that could be applied to the regulatory environment, including the need to strike a balance between risk and reward; the need for evidence-based policymaking and frameworks that are both comprehensive and flexible; the need for increased transparency and whistleblower protections; the creation of post-deployment impact reporting channels; and the establishment of thresholds for policy interventions.

Researchers also defined the various risks posed by AI technology, including “malicious risks,” or those posed by misuse by bad actors, such as fraud, nonconsensual pornographic imagery and cyberattacks; “malfunction risks,” or those posed by the unintended consequences of otherwise legitimate use cases; and “systemic risks,” or those posed by widespread deployments, such as labor market disruptions, privacy risks, copyright infringement and the like.

“Without proper safeguards, however, powerful AI could induce severe and, in some cases, potentially irreversible harms. Experts disagree on the probability of these risks,” the report reads. “Nonetheless, California will need to develop governance approaches that acknowledge the importance of early design choices to respond to evolving technological challenges.”

California has been at the forefront of AI regulation and implementation in the last few years. In September 2023, the governor signed Executive Order N-12-23, directing agencies to explore the risks and benefits of generative AI. The state was also the first to launch a series of use case pilots in May 2024 across several departments. In late April 2025, the governor announced the expansion of those projects.

This story first appeared in Industry Insider — California, part of e.Republic, Government Technology's parent company.
Eyragon Eidam is the managing editor for Industry Insider — California. He previously served as the daily news editor for Government Technology. He lives in Sacramento, Calif.