The proposed regulatory ban has faced considerable pushback, with experts arguing that its language prevents states from enforcing laws they have passed and undermines their efforts to protect and serve residents.
The One Big Beautiful Bill Act, which proposes the AI regulation moratorium, cleared the House Thursday with a vote of 215 to 214. The language in the legislation dictates that no state or local government “may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decisions” for the next decade.
“This would apply to laws in red and blue states that are already on the books or could be passed,” Travis Hall, the Center for Democracy and Technology’s director for state engagement, said in a written statement to Government Technology, arguing that the moratorium would “tie the hands” of state-level officials seeking to enforce laws focused on AI tools.
Brad Carson, president of Americans for Responsible Innovation, a nonprofit advocating for tech policy in the public interest, said in a statement that preventing state lawmakers from enacting artificial intelligence safeguards puts Americans at risk from the technology’s potential harms, which could range from bias to misinformation to data security issues.
“The arguments in favor of this provision only work if you believe that the federal government will soon pass broad guardrails to protect the public, or that AI will be a completely benign technology developed by companies that need no regulatory constraints,” Carson said, citing the federal government’s past lack of action.
The bill is expected to face challenges in the Senate due to the Byrd Rule. Adopted in 1985 to preserve the integrity of the budget reconciliation process, the rule prohibits the inclusion of “extraneous matter” in reconciliation bills, meaning provisions that do not change federal outlays or revenues, or the way they are collected. National Conference of State Legislatures Chief Executive Officer Tim Storey addressed the issue in a letter.
“A provision broadly pre-empting state AI laws would certainly violate the Byrd Rule, as its principal purpose is to limit state legislative authority rather than to achieve substantive budgetary outcomes,” Storey said.
James P. Steyer, CEO and founder of the nonprofit Common Sense Media, which reviews and rates media for families, urged senators on both sides “to do the right thing” in a statement.
“Let's be clear, tying the hands of state Attorneys General and legislators on AI laws is a policy measure, not a budget measure,” he said. “It has no place in the budget reconciliation bill.”
Experts have speculated on what AI policy would look like under this administration, with many expecting some level of federal deregulation but also that state and local governments would continue implementing their own protections.
STATE OFFICIALS RESPOND
Elsewhere in government, 40 state attorneys general signed a letter to congressional leaders this month opposing the amendment.
“The impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI,” the letter said, arguing such a rule, compounded by congressional inaction, would be “irresponsible.”
The letter cited specific AI laws that, if states cannot enforce them, could expose residents to harm. These include laws protecting against AI-generated explicit material, deepfakes designed to mislead voters and consumers, and algorithmic rent-setting systems.
Other areas of emerging technology, like social media, have developed and scaled rapidly in a way that has been largely unregulated, making it a challenge for policymakers to retroactively enact safeguards.
Utah Office of AI Policy Director Zach Boyd said he believes there is “a lot of regret about the state’s response — and lots of governments’ responses — to past emerging technologies like social media,” in that social media evolved with little governance guiding its advances. Although this evolution was positive in some ways, he noted that at the scale of modern social media, “Now, it becomes really hard to regulate.”
During a U.S. House Energy and Commerce Subcommittee on Commerce, Manufacturing, and Trade hearing Wednesday, witnesses raised concerns about the proposal. Rep. Lori Trahan said that it is big tech CEOs, not U.S. families, who will benefit from the moratorium.
A statement from the Council of State Governments said this amendment would “undermine state sovereignty.”
Officials in some states, however, including Colorado Gov. Jared Polis, have been somewhat supportive of the 10-year moratorium. His state has a generative AI policy on the books that follows federal standards.
Colorado CIO David Edinger wrote in a statement to Government Technology that the state created its GenAI Policy to provide guardrails, like conducting risk assessments, in alignment with standards set by the National Institute of Standards and Technology.
“We believe Colorado’s strategic approach to generative artificial intelligence (GenAI) of governance, innovation and education is still necessary to guide state agencies in the safe and secure usage of GenAI to advance government,” Edinger said.
Some members of Congress who support the proposal have raised concerns about the patchwork of AI legislation across states, but experts have said vendors are already familiar with navigating varied state legislative landscapes, and having state-level policies in place can actually create clarity.