States have been steadily advancing AI governance and adoption, which can help clarify the rules of the road for government service providers in the absence of comprehensive federal legislation on the technology. Despite bipartisan pushback, the Trump administration has repeatedly targeted states' authority to enact regulatory protections: through a now-stricken decadelong regulatory moratorium in a budget bill, through an AI action plan that threatens federal funding for states whose AI laws are deemed "restrictive," and through calls on lawmakers to act to prevent "overregulation."
Now, a draft executive order aims to take further action to eliminate state laws on AI; it would establish an AI Litigation Task Force intended to challenge state-level AI laws, as reported by POLITICO Pro.
The current state-level AI legislative landscape is the topic of the latest report from the Council of State Governments (CSG). It indicates that AI has been a top legislative priority for states in 2025; states and U.S. territories have proposed 252 AI-related measures this year alone. Only 12 states have not yet enacted AI legislation, per the report.
The focus of state-level legislation varies.
One legislative example highlighted in the report is Texas Senate Bill 1964, which created a regulatory structure for transparent government AI use. Another example is Louisiana House Resolution 320, which aims to bolster AI professional development in state education agencies and institutions.
Privacy and security are key focus areas for AI-related policy, including the protection of personal information, data privacy and cybersecurity, and the regulation of disinformation and deepfakes.
The 2024 Utah Artificial Intelligence Policy Act marks the first AI-centered consumer protections, per the report, and the Texas Responsible Artificial Intelligence Governance Act requires explicit consent for the commercial use of biometric data. On the disinformation front, California Assembly Bill 502 prohibits malicious AI-generated media in election communications, and North Dakota House Bill 1167 requires a disclaimer for political media that impersonates someone using AI.
The public is still skeptical of AI, and the report cites findings that only 32 percent of Americans trust it. Task forces can help, as can transparency requirements and legislative protections for whistleblowers, as in Michigan House Bill 4668. The use of AI tools to set prices is impacting public trust, too; legislation like North Carolina House Bill 970 can address these concerns by preventing algorithmic price hikes — in this case, related to rent.
Whether in spite of or because of the policy and governance in place, states are advancing AI use. It is being used by law enforcement, in public education and to make government administration more efficient, the report details. State-level protections exist in these areas, too.
Public-private partnerships are also an accelerator for state government AI implementation, the report reveals, highlighting Amazon’s investment in AI infrastructure in Pennsylvania, New York’s Empire AI initiative, and California’s partnership with NVIDIA to support AI workforce training. Experts argue that a varied state AI legislative landscape does not hinder these partnerships, but may actually enable them.
The report assesses states on seven measures of the technology infrastructure and human talent supporting AI growth, through a benchmarking tool CSG has dubbed "State AI Competitiveness Indicators."
AI competitiveness, per this tool, is based on the introduction of AI legislation to support development, the number of Forbes Top 50 AI firms in the state, venture capital investment in startups, the number of data centers, annual net electricity generation, cloud infrastructure capability, and the percentage of jobs requiring AI skills. Notably, per these indicators, a higher volume of AI-related legislative measures is seen as contributing to, rather than detracting from, a state's AI competitiveness.
The report offers several key takeaways for state legislators: transparency and accountability are AI governance "cornerstones"; human oversight should remain a central part of AI use; consumer and citizen protections are fundamental; workforce impacts demand attention; environmental sustainability cannot be ignored; and, finally, states are leading as federal regulation remains limited.
The report recommends that state lawmakers and constituents decide the path for AI and other emerging technologies within their own states, as AI initiatives that succeed in one state may not succeed in another.
“States have not had the luxury of waiting for federal action on AI policy,” the National Association of State Chief Information Officers said in a statement in May, underlining that they have created their own AI standards to meet unique state needs and that preventing them from enforcing these standards would undermine service delivery and data protection efforts.
Editor’s note: This story has been updated with a graph showing the 10 states where the most AI legislation has been introduced in 2025.