The document, “Making AI Work for the Public: An ALT Perspective,” was developed by New America’s RethinkAI coalition, which works with cities to build AI pilots and transforms findings into guidance and policy recommendations.
Various frameworks, created by the federal government and others, support responsible AI advancement, focusing on areas including human-centered AI, civil rights, risk management and security.
The report from RethinkAI includes a new governance approach: the ALT framework. The model focuses on three needs: adapt, listen and trust.
The “adapt” component highlights the need for government to plan for AI demand before any new product launch. Part of this means passing policies that enable agile AI implementation; the other part is keeping humans involved in decision-making and work that requires high-level judgment. AI agents can, however, be made responsible for some routine tasks.
The “listen” component calls on governments to ensure that AI use cases are meeting the actual needs of the community, and that AI tools take those needs into account as well. For example, combining structured inputs like 311 reports with unstructured inputs like meeting transcripts can help AI tools provide more in-depth insights. AI models can also be trained for “context engineering” to better understand problems, rather than just the prompts used, and potentially find solutions. A public AI sandbox can let staff and members of the public work together to translate documents like budgets into plain language.
The “trust” component encourages governments to go beyond offering transparency to enable public accountability. For example, community-controlled data and the use of resident sentiment to analyze impact can both help improve trust when using AI. Governments should measure not only the efficiency of AI-related initiatives but also their trustworthiness, and goals and timelines should be made public.
The report’s recommendations to align actions with this framework aim to inform governments, philanthropic entities, universities and community organizations.
For example, if a city deploys an AI-powered 311 chatbot that helps residents request pickup service for bulk items but does not change staffing or processes, the result may be longer delays for those services. As the report details: “Efficiency gains on paper turn into operational strain in practices.” By preparing more proactively for the increased demand driven by AI tools, per the ALT framework, a government could ensure staffing and funding are adjusted to meet an uptick in requests.
The report was informed by two efforts launched in the summer of 2023: an ongoing trends survey and a series of pilots with civic-sector partners in Boston; New York City; and San Jose, Calif. The goals were to evaluate the current landscape of AI implementation and propose a results-based framework. The report’s analysis of current policies and practices in U.S. local governments aims to inform how AI can meet democratic needs, rather than reshaping democracy to support AI needs.
“We are at an inflection point, and we need to rethink the role of civic technology as institutions change to meet the advent of AI and a new federal landscape,” the report said, underlining that “public institutions are under attack.”
Among other key findings, the report’s authors write that AI is a unique technology in that CIOs and IT leaders are driving its adoption and related rulemaking — as opposed to other IT advancements, which have been driven by officials who may sit outside a particular IT organization.
Adoption varies between the city and state levels, and policy focuses differ, spanning education, safety, elections and other areas. Despite that variety, legislatures are moving forward to govern AI: more than 1,600 AI bills have been proposed or passed by states since 2019, and about 45 percent of them were introduced in the first half of 2025.
In this ecosystem, governments are responsible for creating the conditions that enable AI implementations — with accountability measures in place — to meet residents’ needs and deliver value to the public. Philanthropic organizations can help fund new ideas, with a capacity to take risks that those in the public sector may not have. Universities can support AI’s advancement with research, training, evaluation and data capacity-building. Community and nonprofit organizations can mobilize residents and hold governments accountable. Lastly, coalitions can play a convening role.
With this ecosystem in place, the report said, “AI can move beyond efficiency and build a model that residents truly need: one that fosters adaptation, listening, and trust.”