Proposed SANDBOX Act May Remove AI Oversight for Developers

U.S. Sen. Ted Cruz has released a legislative framework that would let AI developers waive some regulations in an effort to advance new technologies, but experts warn there are privacy and security risks.

Proposed federal legislation known as the SANDBOX Act, introduced on Wednesday, would grant AI developers regulatory lenience to launch new technologies — but some experts argue that the bill poses risks to consumers’ privacy.

Governments are increasingly exploring the sandbox model to allow for AI exploration in a secure environment, from Massachusetts to Delaware and beyond. In Utah, regulatory mitigation agreements with businesses allow for temporary relaxation of laws to develop new technologies, although data sharing, safety and compliance measures are in place.

The SANDBOX Act proposed this week by Sen. Ted Cruz — short for the Strengthening Artificial Intelligence Normalization and Diffusion by Oversight and eXperimentation Act — aims to do this at the federal level, establishing an AI regulatory sandbox program through the U.S. Office of Science and Technology Policy (OSTP).

Under the bill, AI developers and deployers could apply to have specific regulations modified or waived, allowing them to bring new AI technologies to market more quickly. In effect, the bill would make select companies eligible for two years of regulatory exemptions. OSTP would work across federal agencies to evaluate such requests, and the U.S. Congress would receive regular reports on how often rules were modified or waived to inform policymaking. The legislation aims to help position the U.S. as a leader in AI, which is a federal priority.

“[The SANDBOX Act] embraces our nation’s entrepreneurial spirit and gives AI developers the room to create while still mitigating any health or consumer risks,” Cruz said in a statement.

Stakeholders in responsible AI advancement, however, have raised concerns about the proposed legislation.

Public Citizen, a nonprofit consumer rights advocacy group, said that it “puts public safety on the chopping block in favor of corporate immunity.” The group released a statement from its accountability advocate J.B. Branch about the bill.

“Public safety should never be made optional, but that’s exactly what the SANDBOX Act does,” Branch said. “It guts basic consumer protections, lets companies skirt accountability, and treats Americans as test subjects.”

While proponents of regulatory amendments argue that AI companies are being restricted by these rules, Branch said that this is “simply not true,” citing company value assessments.

The CEO of the Alliance for Secure AI, Brendan Steinhauser, argued in a statement that Big Tech companies have repeatedly failed to make safety and harm prevention top priorities.

“The SANDBOX Act removes much-needed oversight as Big Tech refuses to remain transparent with the public about the risks of advanced AI,” he said, questioning who will be allowed to enter this sandbox environment and why.

Other groups, like the Information Technology Industry Council and the Abundance Institute, support this legislation.

The bill arrives amid deep division over the future of AI regulation — and over who holds the authority to implement safeguards.

There is bipartisan agreement among the public that both states and the federal government should be able to regulate AI. But the federal government has attempted to limit states’ regulatory authority — first through a proposed moratorium in a recent budget bill, which Congress ultimately rejected, and more recently through the AI Action Plan, which could threaten states’ access to federal funding over their regulatory policies.

There is also bipartisan agreement on enacting some basic AI regulatory protections, such as a ban on lethal autonomous weapons and requiring AI programs to pass a government test before use.

“No federal legislation establishing broad regulatory authorities for the development or use of AI or prohibitions on AI has been enacted,” according to a June Congressional Research Service report.