WHAT THE FRAMEWORK DOES
The framework organizes its recommendations around six broad objectives: protecting children online, safeguarding communities from AI-enabled harms, respecting intellectual property rights, preventing AI-driven censorship, enabling innovation and developing an AI-ready workforce. It explicitly opposes creating any new federal regulatory body for AI, favoring instead a “light-touch” approach that relies on existing agencies and industry-led standards.
The framework’s most consequential provision, however, is its call for federal pre-emption of state AI laws. The administration argues that “a patchwork of conflicting state laws would undermine American innovation.” It calls on Congress to establish a national standard that displaces state legislation that targets AI development, restricts AI activity that would be legal if conducted without AI, or penalizes AI developers for third-party misuse of their systems.
Critically, the framework carves out several areas where state authority is preserved: traditional police powers to enforce general laws against fraud and consumer harm, zoning authority over data center placement, and — most relevant to government technology managers — laws governing a state or locality’s own procurement and use of AI, including in law enforcement and public education.
WHAT’S MISSING
For a document billed as “comprehensive,” the framework leaves significant terrain unaddressed.
Data privacy beyond children. The framework’s privacy provisions are narrowly focused on minors, largely calling on Congress to affirm that existing children’s privacy laws (which currently protect only those under 13) apply to AI systems. There is no call for comprehensive adult data privacy protections, and no federal analog to the California Consumer Privacy Act or the EU’s GDPR. This is a significant gap, given that AI systems routinely process sensitive personal data belonging to residents who interact with government services.
Transparency and explainability requirements. For public agencies deploying AI in consequential decisions — parole recommendations, tax assessments, permit approvals — the question of whether an affected resident can understand or contest an AI-driven outcome is fundamental to due process. The framework is silent on explainability mandates or disclosure requirements for government AI use.
Enforcement mechanisms. The framework is a non-binding set of legislative recommendations. It directs no agency to take a specific action, sets no compliance deadlines and establishes no penalties. As legal analysts have observed, it is “not a governance framework in any technical or operational sense.” What teeth exist come not from the framework itself, but from the December 2025 executive order that preceded it — specifically the Department of Justice (DOJ)’s AI Litigation Task Force, established to challenge state laws in federal court.
Workforce displacement. While the framework encourages Congress to expand AI training programs, it explicitly favors “non-regulatory methods” and calls for studying job trends rather than addressing them. For local officials managing communities facing real displacement risks, this is aspirational language with no policy anchor.
WHAT THIS MEANS FOR STATE AND LOCAL LEGISLATION
The chilling effect is already real. Even before any legislation passes, the administration’s posture — threatening DOJ litigation, conditioning federal funding on compliance and establishing a task force specifically to challenge state laws — is shaping legislative calendars. Jurisdictions weighing new AI transparency bills or algorithmic accountability ordinances must now factor in the legal and financial risk of being targeted.
The carve-outs offer genuine breathing room. State and local governments retain meaningful authority over how they themselves use AI. Requirements governing government procurement, public school AI tools, law enforcement use of facial recognition and benefits determination systems appear insulated from pre-emption. Local officials should accelerate AI governance policies in these domains now, before the legislative landscape shifts further.
The patchwork isn’t going away soon. Congressional passage of a comprehensive federal AI law faces substantial political headwinds. Earlier attempts to include AI pre-emption in the National Defense Authorization Act failed, and the framework’s own legislative path is uncertain, with Democratic opposition and intra-Republican tension over states’ rights complicating the math. Meanwhile, compliance regimes in California, Colorado and New York remain in full force. Any jurisdiction with AI vendors operating across multiple states will continue navigating a multilayered legal environment.
Procurement is the most immediate lever. Since state and local AI procurement rules are explicitly preserved, governments might want to treat their vendor contracts as the primary governance instrument. Requiring vendors to document model behavior, provide audit trails and meet explainability standards in government contracts is legally defensible under the framework and operationally critical regardless of what Congress does.
THE BOTTOM LINE
The new AI legislative framework signals a clear federal direction: less regulation, more innovation and federal supremacy over AI development rules. But it leaves a wide governance gap — no enforceable bias standards, no adult data privacy protections, no transparency mandates — that state and local governments will need to fill within the space they still control. The window to act in that space may be narrowing. The time to build AI governance infrastructure is now, not after Congress resolves a debate that could take years.
Alan R. Shark, a senior fellow at the Center for Digital Government, is an associate professor at the Schar School of Policy and Government at George Mason University, where he also serves as a faculty member in the Center for Human AI Innovation in Society. He is also a senior fellow and former executive director of the Public Technology Institute, a fellow of the National Academy of Public Administration, and founder and co-chair of its Standing Panel on Technology Leadership. He is the host of the podcast series Sharkbytes.net. The Center for Digital Government and Government Technology are both divisions of e.Republic.