AI was a top legislative priority for states in 2025. In December, President Donald Trump signed an executive order (EO) seeking to limit states’ ability to enact AI-related policies. The response to the federal order has largely split along partisan lines, although there is bipartisan support for letting states regulate AI. Some experts say the order itself is illegal. It will affect states in different ways, as state governments have taken varied approaches to AI regulation.
The legislative framework called for in the December EO is intended as a blueprint for Congress to turn into legislation and pass.
It focuses on several key areas: protecting children and empowering parents; safeguarding and strengthening U.S. communities; respecting intellectual property rights and supporting creators; preventing censorship and protecting free speech; enabling innovation and ensuring U.S. AI dominance; educating workers and developing an AI-ready workforce; and establishing a federal policy framework to pre-empt state AI laws deemed “cumbersome.”
Specifically, the framework calls for pre-emption of any state laws that regulate AI development. It would keep states from penalizing AI developers for a third party’s unlawful conduct carried out using their models. And states could not “unduly burden” U.S. residents’ use of AI for activity that would be lawful without AI involvement.
Several areas outlined in the framework, however, would not be pre-empted by a national standard. States would retain authority over zoning laws that determine where AI infrastructure can be located; officials have underlined the importance of site selection for AI data centers. States would remain free to govern their own use of AI, whether through procurement or the services they provide. And they would maintain traditional police powers to enforce laws against AI developers and users, including to protect children, prevent fraud and protect consumers.
Notably, the framework does not mention environmental protections, and it addresses algorithmic bias only to bar AI providers from any action “to ban, compel or alter content based on partisan or ideological agendas.”
EARLY INDUSTRY REACTIONS
Early industry reactions to this framework were mixed as of Friday morning. Proponents lauded it as an enabler of innovation, while opponents argued it does not effectively hold AI developers accountable.
Several Republican members of the U.S. House of Representatives, including House Majority Leader Steve Scalise, released a joint statement about the framework: “House Republicans look forward to working across the aisle to enact a national framework that unleashes the full potential of AI, cements the U.S. as the global leader, and provides important protections for American families.”
On the other side of the political aisle, Democratic Rep. Josh Gottheimer released a statement dubbing the framework a “half-measure” that fails to address issues like AI company accountability.
The order lacks any mention of general privacy protections or of AI’s use of personal data, Electronic Privacy Information Center Executive Director and President Alan Butler said in a statement. Until and unless Congress passes legislation, he emphasized, this framework does not change existing federal or state laws regulating AI.
Robert Weissman, co-president of the nonprofit Public Citizen, criticized the framework in a statement, arguing that it attempts to limit AI safeguards significantly: “This is a national framework to protect Big Tech at the expense of everyday Americans.”
Daniel Castro, director of the Center for Data Innovation, credited the framework in a statement for its focus on enabling broad AI deployment, calling it “an agenda worthy of bipartisan support.”
The framework’s future will likely depend on Congress, which has already rejected attempts to place a moratorium on state AI regulation through both the budget bill and the National Defense Authorization Act.