Generative AI Use Proceeding Apace, Florida CISO Says

State Chief Information Security Officer Jeremy Rodgers talked about the Sunshine State’s approach to artificial intelligence at a recent cybersecurity conference. A centralized legislative framework around AI does not yet exist, he said.

Florida’s state Chief Information Security Officer Jeremy Rodgers shared several takeaways on how the state is working with AI during a panel discussion at the Sunshine Cyber Conference, hosted by Cyber Florida in Tampa.

The panel consisted of industry, government and academic experts, including Rodgers; Jeff Crume, a distinguished engineer at IBM; and Sagar Samtani, director of the Data Science and Artificial Intelligence Lab at Indiana University’s Kelley School of Business.

Participants shared their perspectives on the implications of AI and machine learning in cybersecurity, including best practices, vulnerabilities, challenges and opportunities.

To put the state’s use of AI in context, Rodgers kicked off the discussion by describing how Florida’s overall IT organization works.

“Florida Digital Service has statewide IT, but we’ve got multiple statewide agencies that all have their own services,” Rodgers said. “Right now, where we stand, there’s not a centralized legislative framework around AI; there was some language in session this year, but for us, statute gets defined by legislators, the governor signs and it becomes law, and then we execute on that in the state government.”

However, Rodgers said, “From our shop, a lot of our folks are using generative AI to help with policy issues, to do analysis and bounce around ideas.”

AI’s use across the state varies depending on the needs of each agency and local government. Here’s what panelists had to say on how recent advancements in AI could play a role in cybersecurity and cyber operations moving forward:

  • “AI in cybersecurity has been going on for the last 25 years,” Samtani said. “Right now, we’re at the height of limits, the height of inflated expectations and so on. We’re gonna go down into the trough of disillusionment fairly soon when we realize that a lot of large language models and generative AI techniques aren’t necessarily going to be able to do all of the magical things that the Wall Street Journal says that it can do and instead ... we’re going to start to identify particular areas of AI, like predictive analytics, descriptive analytics, graph analytics and so on that are going to be really useful for particular use cases in cybersecurity.”
  • “If you think about cybersecurity, we’re really concerned with outlier events,” Crume said. “We’re concerned with, why is that guy downloading 10,000 files a day when his peers are downloading 100? I need to investigate that. So, those are the kinds of things machine learning is really good at finding, needles in haystacks.” (A simple sketch of that kind of outlier detection follows this list.)
  • Regarding AI, Crume added, “We’re going to be able to use it to do predictive analysis. We could also use it for other things like threat hunting, where I want to generate a hypothesis and then figure out if that hypothesis is true.” 
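
To illustrate the “needles in haystacks” point Crume raised, here is a minimal, hypothetical sketch of peer-based outlier detection on daily file-download counts, using synthetic data and scikit-learn’s IsolationForest. The data, parameter choices and thresholds are illustrative assumptions, not a description of any tool the panelists or the state of Florida actually use.

```python
# A minimal sketch of peer-based outlier detection on file-download counts.
# Synthetic data and illustrative parameters; not any specific tool discussed by the panel.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated daily download counts: most users pull roughly 100 files a day ...
normal_users = rng.poisson(lam=100, size=(500, 1))
# ... while one account pulls 10,000, the kind of outlier Crume described.
suspect = np.array([[10_000]])

counts = np.vstack([normal_users, suspect])

# Fit an isolation forest; "contamination" is the assumed share of anomalous accounts.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(counts)  # -1 = flagged as an outlier, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"Accounts flagged for review: {flagged.tolist()}")
```

In practice, an analyst would feed in real per-user activity features rather than a single simulated count, but the idea is the same: the model learns what “normal” peer behavior looks like and surfaces the handful of accounts worth a human investigation.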

More information about the conference can be found online.

This story first appeared in Industry Insider — Florida, part of e.Republic, Government Technology's parent company.
Katya Diaz is a staff writer for Government Technology. She has a bachelor’s degree in journalism and a master’s degree in global strategic communications from Florida International University.