Opinion: Can AI and Cybersecurity Coexist?

As with any powerful new technology, the potential for artificial intelligence to analyze large volumes of data and automate processes comes with a risk that it will be used for nefarious purposes.

With the allure and hype surrounding artificial intelligence in the corporate, government and education sectors, excitement about its future uses keeps growing. AI tools can make daily work faster and more efficient. But while these tools provide great advantages in many scenarios, they can also be abused or put to nefarious use. Of particular concern is the role of AI in cybersecurity.

Certainly 2023 could be described as the year AI became mainstream. In October 2022, best-selling author Bernard Marr wrote about the coming democratization of AI: “AI will only achieve its full potential if it’s available to everyone and every company and organization is able to benefit. Thankfully in 2023, this will be easier than ever. An ever-growing number of apps put AI functionality at the fingers of anyone, regardless of their level of technical skill.” In the business world, companies quickly introduced products and tools with AI components. According to a 2023 IBM survey, “75 percent of CEOs believe that competitive advantage will depend on who has the most advanced generative AI. However, executives are also weighing potential risks or barriers of the technology such as bias, ethics and security. More than half of CEOs surveyed are concerned about data security and 48 percent worry about bias or data accuracy.”


The Internet of Things (IoT) will continue to be a large driver of sophisticated AI tools and processes. This ecosystem includes smart devices, IoT applications and some form of graphical user interface to manage those devices. IoT devices are found in appliances, sensors and communications equipment, and they can be programmed to transmit substantial amounts of data.

According to FinancesOnline, “The number of connected IoT devices in 2020 is estimated to be 8.74 billion. The figures are expected to increase by about 200 percent in 2030 and have an estimated value of more than $1 trillion.” As the number of devices keeps growing, so does the need to collect, store and analyze data. As the IT publication InfoWorld pointed out in 2022, “With AI, IoT networks and devices can learn from past decisions, predict future activity, and continuously improve performance and decision-making capabilities. AI allows the devices to ‘think for themselves,’ interpreting data and making real-time decisions without the delays and congestion that occur from data transfers.” With the proliferation of IoT, AI will continue to grow as it manages an ever-increasing population of devices.


Smart homes, cities and connected cars will put new pressures on corporate and government institutions to ensure data safety. In December 2022, the market research company Insider Intelligence forecast 4.3 billion IoT mobile connections worldwide and more than 64 billion IoT devices installed by 2026. With this phenomenal growth of IoT comes a natural demand for robust, autonomous cybersecurity tools, because IoT devices routinely operate without human intervention. As Microsoft’s website points out, “there is real risk in what are really network-connected, general-purpose computers that can be hijacked by attackers, resulting in problems beyond IoT security. Even the most mundane device can become dangerous when compromised over the Internet — from spying with video baby monitors to interrupted services on life-saving health care equipment. Once attackers have control, they can steal data, disrupt delivery of services, or commit any other cyber crime they’d do with a computer.” So therein lies the challenge of managing an ever-increasing number of devices while keeping them fully functional, safe and secure.


There are specific advantages to AI’s ability to quickly react to cyber threats. There are a variety of AI solutions which utilize machine learning algorithms to monitor, detect and appropriately respond to cyber threats. AI can analyze large volumes of data much quicker than humans can. It can be a useful tool for detecting new cyber threats, analyzing web traffic and predicting what devices might be prone to data attacks, while at the same time making appropriate pre-emptive adjustments. AI can also recommend cybersecurity protection strategies.
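As a concrete, deliberately simplified illustration of this kind of automated monitoring, the sketch below flags unusual spikes in network traffic using a basic statistical threshold. This is a toy example, not a production security tool: the feature (requests per minute), the sample data and the `flag_anomalies` function are all hypothetical, and real AI-based systems use far richer models than a standard-deviation rule.

```python
import statistics

def flag_anomalies(request_counts, threshold=2.5):
    """Return indexes of samples more than `threshold` standard
    deviations away from the mean of the series."""
    mean = statistics.mean(request_counts)
    stdev = statistics.pstdev(request_counts)
    if stdev == 0:
        return []
    return [i for i, count in enumerate(request_counts)
            if abs(count - mean) / stdev > threshold]

# Hypothetical per-minute request counts; the spike at index 5
# mimics the traffic surge of a denial-of-service attempt.
traffic = [102, 98, 105, 99, 101, 940, 97, 103]
print(flag_anomalies(traffic))  # → [5]
```

The appeal of even this crude version is speed: it scans the whole series at machine pace, which is the same advantage, scaled up enormously, that machine learning models bring to real security monitoring.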

But, at the same time, AI can also create “false positives” when it encounters new or unknown threats that it has yet to recognize. Elisa Silverman wrote in June for the workflow automation company Zapier, “If AI models can be tricked into misclassifying dangerous input as safe, an app developed with this AI could execute malware and even bypass security controls to give the malware elevated privileges. AI models that lack human oversight can be vulnerable to data poisoning.”
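The data poisoning risk Silverman describes can be shown with a toy classifier. In this hypothetical sketch, a nearest-centroid model separates “safe” from “malicious” inputs using two invented features (payload size and privileged-call count); when an attacker slips mislabeled records into the “safe” training data, the same dangerous input is suddenly classified as safe. Real models and attacks are far more complex, but the failure mode is the same.

```python
def centroid(points):
    """Average of a list of feature vectors."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def classify(sample, safe, malicious):
    """Label a sample by the nearer class centroid (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return ("safe" if dist(sample, centroid(safe))
            < dist(sample, centroid(malicious)) else "malicious")

# Invented feature vectors: (payload size, privileged-call count).
safe_train = [(1.0, 0.0), (1.2, 0.1), (0.9, 0.0)]
bad_train = [(8.0, 5.0), (7.5, 4.8), (8.2, 5.2)]
attack = (6.0, 3.5)

print(classify(attack, safe_train, bad_train))  # → malicious

# Poisoning: the attacker sneaks copies of the attack's features,
# labeled "safe", into the training data, dragging the centroid.
poisoned = safe_train + [(6.0, 3.5)] * 10
print(classify(attack, poisoned, bad_train))  # → safe
```

This is why human oversight of training data matters: the poisoned model produces no error, only a confidently wrong answer.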


While AI can provide many tools to combat cyber breaches, it has also become a useful tool for cyber criminals. Joseph Menn in the Washington Post wrote in May that experts, executives and government officials are worried about attackers using artificial intelligence to “write software that can break into corporate networks in novel ways, change appearance and functionality to beat detection, and smuggle data back out through processes that appear normal.”

AI being used to “outsmart” established cybersecurity protection strategies and systems is referred to as adversarial AI. These attacks can be described as AI-based or AI-facilitated cyber attacks, and are also known as adversarial learning — the case of “machine versus machine as malicious AI algorithms are used to subvert (machine learning)-powered security solutions,” according to CTO Nadav Maman of the cybersecurity company Deep Instinct. These scenarios might seem like something out of the movie “The Terminator,” in which AI machines attempt to take over the human world. How can we prudently utilize AI tools for cybersecurity while maintaining appropriate human control?


When considering AI for cybersecurity, it’s important to define the operation’s goals and carefully determine measurable objectives. AI has limitations, but with staff appropriately educated and trained in both AI and cybersecurity, safeguards can be put in place. As with any cybersecurity process, this is not a one-and-done proposition; it requires continual monitoring, evaluation and auditing of systems. AI and cybersecurity can coexist. But they will only do so successfully if there is a human component.

Jim Jorstad is Senior Fellow for the Center for Digital Education and the Center for Digital Government. He is a retired emeritus interim CIO and Cyber Security Designee for the Chancellor’s Office at the University of Wisconsin-La Crosse. He served in leadership roles as director of IT client services, academic technologies and media services, providing services to over 1,500 staff and 10,000 students. Jim has experience in IT operations, teaching and learning, and social media strategy. His work has appeared on CNN, MSNBC, Forbes and NPR, and he is a recipient of the 2013 CNN iReport Spirit Award. Jim is an EDUCAUSE Leading Change Fellow and was chosen as one of the Top 30 Media Producers in the U.S.