On the Edge: A New AI Is at the Intersection of Privacy, Data Use

A new type of artificial intelligence is helping city governments spot problems like potholes faster and with more accuracy than ever before, but government must maintain traditional privacy standards.

Big Brother eye (Valery Brozhinsky/Shutterstock)
There’s a new set of eyes in American cities. Mounted on garbage trucks and other public works vehicles, these eyes are scanning block after block. They don’t take breaks and don’t go to sleep. They don’t even blink, not in the traditional sense. They just watch.

And they watch not for people, but for problems. Potholes. Graffiti. Illegal dumping. Overgrown lots. They watch for the kinds of issues that make a city feel neglected rather than lovely and well-maintained.

These eyes are actually cameras paired with new AI-powered detection systems that process and anonymize data instantly. And increasingly, local governments are relying on these cameras to find civic eyesores they need to fix, all before their residents report them.

What’s changing is not just visibility, but also how these cities are starting to act on what they see, and at the center of the shift is a combination of technologies that, until recently, operated separately: edge AI and agentic AI.

Now, as they come together, cities in California and Massachusetts are working to unlock their potential, all while protecting their residents’ privacy.

WHAT IS EDGE AI?


Historically, cities have relied on residents to spot and report problems in their communities.

For example, in San José, Calif., Public Information Manager Chelsea Palacio described a traditional system where problems such as potholes, debris, or blocked bike lanes are mostly reported by humans. As such, Palacio said the city relies heavily on residents reporting problems through its San José 311 program, as well as on its own staff seeing trouble spots in the field.

That model, while functional, leaves gaps — particularly in areas where issues tend to get ignored. AI-powered detection systems have the potential to close those gaps, while speeding up spotting and repair processes.

San José’s roadway safety pilot, for example, now uses cameras mounted on city vehicles to detect trouble earlier, sending summaries of what they find to the cloud. Because only relevant detection events are transmitted, this approach greatly reduces bandwidth, computing and storage costs compared to traditional street surveillance. It also limits the need to store large amounts of raw video footage, Palacio said.

And it wouldn’t be possible without the relatively new technology called edge AI. Unlike traditional systems that rely on sending large amounts of video to centralized servers, edge AI processes data directly on the device where it is collected. In practical terms, that often means a small computer mounted inside a city vehicle.

This process is highly structured. A camera captures footage as the vehicle moves through the city, and the onboard system analyzes it in real time. As a public vehicle travels through San José, for example, the AI model scans the footage for problems the city wants to find, including potholes, illegal dumping and graffiti on street signs, Palacio said.

Analysis of the footage happens in milliseconds, with the system flagging things that need to be fixed.
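In rough outline, the edge AI pattern the article describes can be sketched in a few lines of code. This is a hypothetical illustration, not any city's actual system: `detect_issues` is a stub standing in for a real onboard vision model, and the detection names and threshold are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    issue_type: str                # e.g. "pothole", "graffiti", "illegal_dumping"
    location: tuple                # GPS coordinates captured with the frame
    confidence: float

def detect_issues(frame):
    """Stand-in for the onboard AI model scoring one video frame.

    A real system would run a neural network here; this stub just
    returns canned results attached to the mock frame."""
    return frame.get("mock_detections", [])

def process_frame(frame, threshold=0.8):
    """The core edge-AI idea: analyze locally, transmit only events.

    Raw footage never leaves the device; only detections above the
    confidence threshold are queued for upload."""
    return [d for d in detect_issues(frame) if d.confidence >= threshold]

# One simulated frame from a vehicle-mounted camera.
frame = {"mock_detections": [
    Detection("pothole", (37.33, -121.89), 0.95),
    Detection("shadow", (37.33, -121.89), 0.40),  # low confidence: dropped
]}
events = process_frame(frame)
```

The point of the sketch is the filter on the last line: everything below the threshold is discarded on the device itself, which is why these systems transmit so little data.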

And if edge AI is about detection, a related concept — agentic AI — is about what happens next. While it is not yet common in most government settings, agentic AI refers to systems that can take action based on what they detect, moving beyond analysis and into actual workflows.

Many cities are starting to use agentic AI, even if they don’t always use the term. In San José, once an issue is detected, the system automatically flags it and sends a notification to staff. Staff then review the information and create a service request.

That progression — from detection to action — is where edge AI and agentic AI begin to intersect. Edge AI systems identify the issue, while downstream agentic AI processes determine what happens after, whether that’s generating a work order, prioritizing a response or reallocating resources. Basically, edge AI spots a pothole, and agentic AI assigns someone to go fill it.
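The downstream "agentic" step described above can also be sketched in code. Again, this is a hypothetical illustration: the `SEVERITY` table, `make_service_request` and the priority values are invented for the example, not drawn from any city's real workflow.

```python
# Illustrative severity ranking: lower number = more urgent.
SEVERITY = {"illegal_dumping": 1, "pothole": 2, "graffiti": 3}

def make_service_request(event):
    """Draft a work order from a detection event.

    Note the status: the request starts as pending review, because
    a human still confirms it — AI detects, but humans decide."""
    return {
        "issue": event["issue_type"],
        "location": event["location"],
        "priority": SEVERITY.get(event["issue_type"], 3),
        "status": "pending_review",
    }

def triage(events):
    """Order draft requests so crews see the most urgent issues first."""
    return sorted((make_service_request(e) for e in events),
                  key=lambda r: r["priority"])

queue = triage([
    {"issue_type": "graffiti", "location": "5th & Main"},
    {"issue_type": "illegal_dumping", "location": "Oak St lot"},
])
```

In this sketch, the agentic part is simply that detections flow into drafted, prioritized work orders without a person typing them up; the human role shifts to approving or rejecting what is already in the queue.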

Stockton, Calif., offers an example of this in action. In that city, the influx of data forced a rethink of how to respond to it. As a code and housing enforcement official within the Stockton Police Department, Almarosa Vargas had a front-row seat to how the city tackled enforcement issues.

She said Stockton had multiple related staffing vacancies at one point, which created a gap they couldn’t fill by hiring alone. When the city began to pilot an AI-powered detection platform, the results were immediate.

In a five-day test run, the system found more than 4,000 potential violations.

“That was a light bulb moment,” Vargas said.

Within the first month, the number grew to 29,000. The spike wasn’t due to a sudden decline in neighborhood conditions. Instead, it was a result of having a new, clearer picture of what was always there.

“It was a paradigm shift,” Vargas said.

Vargas and her colleagues in Stockton are now looking at the true scale of code compliance, prioritizing issues by severity and developing structured responses, including a new city program that focuses on education rather than enforcement.

For staff on the ground, this shift has changed the nature of their work as well. Vargas said police officers are moving away from being “ticket writers” who respond to angry phone calls and are instead becoming “community improvement partners.”

The AI handles the initial detection of problems, which allows staff to focus on verifying information and deciding how best to respond. That distinction is central to how cities are framing these tools.

“This is a crucial point: AI detects, but humans decide,” Vargas said.

And so, every issue flagged by AI enters a review queue where a trained officer confirms whether it is a valid violation. Even something as straightforward as a parked car on a lawn requires human verification before any action is taken.

WHAT ABOUT PRIVACY?


Even with those benefits, the expansion of AI-powered systems has created some privacy concerns. While edge AI reduces the amount of data stored and transmitted, it does not eliminate the need for oversight or policy frameworks.

As such, edge AI systems deployed in cities are building in privacy protections that kick in before any data leaves the device. In Stockton, for example, Vargas said the platform blurs sensitive data such as faces and license plates. By the time any data or footage reaches a human, it’s been scrubbed of personally identifiable information. Essentially, the system filters all that it sees, and it only retains what’s relevant, like the location of a pothole, not footage of a person standing near it.
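The redaction step described above can be illustrated with a toy example. This is a minimal sketch under stated assumptions: a real system uses a face and license-plate detector plus proper image blurring, while this stand-in takes pre-found regions and works on a tiny grayscale grid.

```python
def blur_region(image, region):
    """Replace a rectangular region with its average value (a crude blur).

    image is a list of rows of grayscale values; region is
    (top, left, bottom, right) with exclusive bottom/right bounds."""
    top, left, bottom, right = region
    pixels = [image[r][c] for r in range(top, bottom)
                          for c in range(left, right)]
    avg = sum(pixels) // len(pixels)
    for r in range(top, bottom):
        for c in range(left, right):
            image[r][c] = avg
    return image

def redact(image, regions):
    """Scrub every sensitive region before anything leaves the device."""
    for region in regions:
        blur_region(image, region)
    return image

# A 4x4 grayscale "frame"; pretend the top-left 2x2 block holds a face.
frame = [[10, 200, 3, 3],
         [90, 100, 3, 3],
         [ 3,   3, 3, 3],
         [ 3,   3, 3, 3]]
redact(frame, [(0, 0, 2, 2)])  # the block becomes uniform: detail is gone

# What actually leaves the device is event metadata, not the frame.
event = {"issue_type": "pothole", "location": (37.95, -121.29)}
```

The key property is ordering: redaction happens on the device, so by the time anything is transmitted or shown to a human, the identifying detail no longer exists anywhere.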

In Boston, Chief Information Officer Santiago Garces described a comprehensive review process for his city’s use of the technology.

“The city reviews all technologies for alignment with Massachusetts state law, Boston's local ordinances, and our own security and privacy policies,” Garces said. “Boston has a Surveillance Technology policy that governs how the city plans, procures and evaluates the use of technologies that have an impact on privacy.”

From a governance standpoint, edge processing does change the risk profile — but not entirely. Garces said that privacy preservation techniques, like blurring, filtering and hashing, are important for reducing the potential impact of these technologies. At the same time, he cautioned that it is important to screen how the tools are applied, as well as to verify claims from vendors and partners.

Garces also pointed to a broader shift in how cities manage data, noting that Boston has explored approaches that give it deeper control over data acquisition.

“It ensures we balance achieving project goals, protecting constituent privacy, and deriving additional value from collected data more cost-effectively,” he said.

As an example, Garces highlighted internal innovation efforts such as the city’s Curb Lab, which Boston has used to develop community engagement processes and technologies, open data standards and policy planning grounded in responsibly collected data.

To guide how these technologies are used, formal safeguards are in place, Garces said, along with an annual compliance and reporting process reinforced by internal oversight.

Garces said the work "requires a continuous improvement mindset, dialogue with our departments and constituents, and an acknowledgment that technology, community needs and perspectives might change.”

This all raises a question for cities as technology evolves: If edge-based anonymization means a system never really holds its data, does it let governments bypass some of the stricter privacy rules attached to traditional surveillance systems?

The answer, in practice, is no. While edge AI can reduce the amount of sensitive data collected, it does not remove the need for policy, oversight or public accountability. Garces said that Boston’s approach is to stay aware of the associated risks while using factors like reliability, bias and potential for harm to guide decisions.

He also said cities must be transparent about why they are using this technology with their constituents, and they must test their use cases in a way that backs that up. Essentially, technical safeguards may reduce risk, but they don’t replace the need to win public trust.

Against that backdrop, this is all a major transformation that is unfolding relatively quietly. There aren’t really major announcements when a garbage truck becomes a mobile data platform or when a camera begins analyzing road conditions in real time. The change is simply added to existing operations, layered into the routines of city work.

But the implications are far-reaching. Cities can now see more than ever, act faster and allocate resources with greater precision — all without significantly expanding their workforce. At the same time, local government surveillance is entering new territory, where the boundaries between assistance and automation are still being defined, and officials and the public are both keeping an eye on privacy.
Ashley Silver is a staff writer for Government Technology. She holds an undergraduate degree in journalism from the University of Montevallo and a graduate degree in public relations from Kent State University. Silver is also a published author with a wide range of experience in editing, communications and public relations.