
California Bill Proposes Strict Safety Checks for AI Companies

The legislation would require mandatory AI safety testing before a model is trained or released to market, and would mandate that an internal fail-safe be built into all AI systems to trigger an immediate shutdown if issues are detected.

California companies may soon be forced to prove their AI tools behave safely before those tools hit the market, if a recent proposal gains traction.

Senate Bill 1047, proposed by Sen. Scott Wiener, would mandate that companies developing AI models conduct thorough testing to identify and mitigate potentially "unsafe" behaviors before the technology reaches the public.

The legislation would create a “Frontier Model Division” within the California Department of Technology to oversee developers of non-derivative covered models, requiring them to make specified safety determinations before beginning to train an AI model. That includes implementing a fail-safe that can immediately shut down an AI tool if safety issues are uncovered, keeping it offline until it passes further review.

In the event of safety test failures, the company would be obligated to undergo an annual certification of compliance with the Frontier Model Division. Each certification would require the signature of the company’s chief technology officer or a more senior corporate officer.

According to the draft text, the legislation would also authorize the division to assess fees when these specifications aren’t met. The money collected would be deposited into the Frontier Model Division Programs Fund, which the bill would create.

Currently, there are hundreds of active AI-related bills across more than 40 states. Most aim to create AI task forces, either to study potential AI use cases or to review the AI models agencies already have in place.

Alabama recently issued an executive order that paved the way for an advisory group made up of state representatives, legislators and academics to examine whether the AI models being used in state agencies “pose a risk.”

Roughly 1,000 miles away, a Connecticut AI bill took effect at the beginning of February, mandating that the Connecticut Department of Administrative Services inventory all AI-enabled systems in use by state agencies. The department must also perform ongoing assessments of those systems.

Maryland took a slightly different approach, centering a portion of its AI legislation on supporting small and medium-sized manufacturing businesses. The bill created an Industry 4.0 Technology Grant Program, which provides resources and education pertaining to AI implementation. The program took effect in October of last year.