AI Portends New Cybersecurity Risks, Opportunities for Higher Ed

While new artificial intelligence technologies could be used for nefarious purposes such as creating more convincing phishing attacks, experts say the technology might also automate and strengthen IT security protocols.

Shutterstock
With the rise of artificial intelligence tools changing how work takes place in education and business, IT security experts warn that the emerging technology could present additional cybersecurity risks in the years to come — as well as new means of protecting networks more efficiently.

According to Amanda Stent, director of the Davis Institute for Artificial Intelligence at Colby College in Maine, the growing popularity of generative AI tools, particularly in the education and business sectors, may mean more headaches for IT personnel already battling an onslaught of ransomware and phishing attacks that have been on the rise since COVID-19. She said the growing use of publicly available generative AI tools like ChatGPT could have major data privacy implications, and noted that users of these programs should avoid putting sensitive or personally identifiable information into prompts.

“It's one thing to be chatting with the generative AI and asking it for business ideas or [to] help you write an email. It's a completely other thing when you upload a client list or your pitch deck for the next six months, or your company financials. All of those things may lead to data leakage,” she said. “Employees in higher education, like employees in any business, need to understand what kinds of data is appropriate to put into external vendor systems, including these [GenAI] models, and what kinds of data is not appropriate. And in higher education, we are subject to additional regulation, including FERPA.”
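
For illustration, here is a minimal sketch of the kind of guardrail Stent describes: scrubbing obvious personally identifiable information from a prompt before it reaches an external vendor's model. The regex patterns and the redact_prompt helper are assumptions made for this example, not part of any particular product or campus policy.

```python
import re

# Illustrative patterns only: they catch the most obvious PII (emails,
# U.S.-style phone numbers, SSNs) and are not a complete safeguard.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace common PII patterns with placeholder tags before the text
    is sent to any external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Draft an email to jane.doe@example.edu (555-867-5309) about her aid award."
    print(redact_prompt(prompt))
    # -> Draft an email to [EMAIL REDACTED] ([PHONE REDACTED]) about her aid award.
```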

Stent said these data privacy concerns are exacerbated by the fact that GenAI tools are still prone to making major errors, which can also lead to data leaks.

“There [have been] multiple jailbreaks of generative AI models,” she said. “Some prompts have caused [GenAI] models to break, to fail or leak personally identifiable information in different ways. One example of such a randomly generated prompt was just entering the word ‘poem’ many times. … Suddenly, you're getting people's names, addresses, email, telephone numbers and [other] private information.”

V.S. Subrahmanian, a computer science professor at Northwestern University, said AI technology will likely be used by cyber criminals to create more convincing email phishing attacks, with fewer of the typical telltale signs of phishing. He said he also expects AI to generate phishing messages that combine text, images, video and audio, as well as fake social media accounts with fewer red flags.

Furthermore, he said, AI could be used to develop phishing attacks via email and other platforms that are microtargeted for specific users, making them even more effective. He added that some of these probing efforts could utilize deepfake technology to make content even more convincing.

“We expect to see adversaries use AI to craft sophisticated phishing messages that don't have the same kinds of grammatical and spelling errors,” Subrahmanian said. “It's not just going to be coming at us through email, but also every other type of digital content that we consume, whether it's on a website, on our social media feeds or coming through text messages … all of these are vectors where adversaries can put together highly engaging posts that we would be inclined to click on.”

Rhonda Chicone, a professor of computer science and cybersecurity at Purdue University Global, said that in addition to concerns about data privacy and phishing attacks, GenAI could also be used by cyber criminals to create new types of malware and ransomware attacks, which have been a major concern for IT teams at schools and universities in recent years.

She said organizations and workplaces adopting AI should provide cybersecurity training that keeps pace with AI advancements to improve cyber hygiene, especially at schools and universities that have seen an increase in phishing and ransomware attacks.

“With anything else technical, there's good and there's bad. I think we'll see a lot of [processes and security measures] that are currently being done in cybersecurity [become] more automated and faster and more accurate,” she said. “But then on the flip side, I think we're going to see bad actors using AI as much as they can, so it’s kind of a double-edged sword.”

Subrahmanian said AI tools will be used to bolster IT security. For instance, he said, Northwestern’s Security and AI Lab (NSAIL) has been developing techniques for using deepfake AI technology — known for its potentially nefarious and deceptive uses — more responsibly. NSAIL researchers also have built on existing AI techniques to generate fake documents and databases to combat intellectual property theft and data breaches.

“In my lab, we're doing a lot of work on using AI techniques to deter intellectual property theft,” Subrahmanian said, adding that he expects to see a “cat-and-mouse game” between IT professionals using AI to secure networks and adversaries using AI for nefarious purposes moving forward.
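
Subrahmanian did not detail NSAIL's methods, so the sketch below is only a generic illustration of the decoy-data idea: seeding a fake "client list" with a unique canary token, so that any copy that later surfaces outside the network both signals a breach and identifies which decoy leaked. The file format, field names and helper function are assumptions for this example.

```python
import csv
import secrets
import uuid

# Plausible-looking but entirely fabricated values for the decoy rows.
FIRST = ["Avery", "Jordan", "Morgan", "Riley", "Casey", "Quinn"]
LAST = ["Nguyen", "Okafor", "Silva", "Kowalski", "Haddad", "Brennan"]

def decoy_rows(n: int, canary: str) -> list[dict]:
    """Build n fake client records, each carrying the canary token."""
    rows = []
    for _ in range(n):
        rows.append({
            "name": f"{secrets.choice(FIRST)} {secrets.choice(LAST)}",
            "account_id": f"AC-{secrets.randbelow(10**6):06d}",
            "note": f"ref:{canary}",  # unique marker embedded in every row
        })
    return rows

if __name__ == "__main__":
    canary = uuid.uuid4().hex  # one token per decoy file, logged for later matching
    with open("decoy_clients.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "account_id", "note"])
        writer.writeheader()
        writer.writerows(decoy_rows(25, canary))
    print("decoy written; watch for canary token", canary)
```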

Stent agreed that in addition to new risks, generative AI may bolster and automate IT security processes.

"We think of it as generating pictures or text or music, but AI can also be used to monitor computer systems, analyze time-series data, look at logs, and identify flaws and vulnerabilities in existing infrastructure,” she said.
Brandon Paykamian is a staff writer for Government Technology. He has a bachelor's degree in journalism from East Tennessee State University and years of experience as a multimedia reporter, mainly focusing on public education and higher ed.