Chamberlain is the COO and assistant secretary for security and operations for Massachusetts, and during the National Association of State Chief Information Officers (NASCIO) 2025 Midyear Conference in Philadelphia on Tuesday, he offered his view of the growing fight against generative AI fraud in the public sector.
Chamberlain said his state sees AI the way it saw life sciences about a generation ago: a way to boost the economy and prestige of Massachusetts through governmental guidance and encouragement. Thanks to state action a few decades ago, an effort that included incentives, Massachusetts now boasts a “Life Sciences Corridor,” he told attendees.
More recently, the state’s governor established an AI task force as part of a push to create an Applied AI Hub in Massachusetts.
Ideally, that effort would create jobs for the students who attend the many colleges and universities in the state and provide revenue and work in the “economically depressed” western part of Massachusetts.
“We want to retain that talent,” he said of technologically minded students.
Indeed, the state’s AI push involved partnerships with universities and colleges, through which some students were “embedded with agencies as a talent pipeline,” Chamberlain said.
Some of those students overcame skepticism among state highway officials by using AI to suggest improvements to road work. They did such a memorable job, he said, that those highway officials all but insisted on hiring at least two of the students after their cohort’s term ended.
But with such growing promise come creeping security threats. Not only do cyber criminals keep pushing the edge of sophistication, but they are learning how to use the latest tech in their scams. That has led the state to reorganize parts of its cybersecurity operation so that it can respond more quickly to breaches, he said.
But more threats are certain, and they could include newer forms of unemployment or benefits fraud, especially if certain predictions come true and the economy slides into recession in the near future.
One figure stood out during Chamberlain’s presentation: By 2026, cyber criminals, including those using AI, can be expected to “exploit vulnerabilities” within 60 minutes, down from about one week now.
Exposure flows from such problems as applications “that are old enough to vote”; weak, disparate systems of identification verification; and easy access offered by Internet of Things devices. He used the example of an aquarium thermometer connected to the Internet but poorly defended.
Of those threats in this growing era of public-sector AI, he said: “If you are a cybersecurity expert and that doesn’t keep you up at night, you should probably do something else.”
He said even relatively simple defenses such as multifactor authentication (MFA) can help those charged with protecting state systems.
“MFA can weed out the unsophisticated attackers,” he said. “It makes them move on.”