In terms of sheer volume, cybersecurity analysis can be a losing bet. The typical security operations center logs almost 17,000 malware alerts a week, according to a report by the Ponemon Institute, and may spend 21,000 hours a year chasing down false positives at a cost of $1.3 million. With the proliferation of networked devices in the expanding IoT ecosystem, the torrent of data will only intensify.
But technology, which makes it so easy for cyberhackers and thieves to attack computers with malware and other threats, may offer the best counterattack.
IBM Security says its artificial intelligence (AI) platform Watson can help give analysts an edge. “It’s about bringing together human and machine,” said Diana Kelley, global executive security adviser for IBM Security. “Human intelligence is unique, but the machine can sometimes get the information together much more quickly.”
Watson’s handlers say they are set to integrate its cognitive computing processes into IBM Security’s QRadar security intelligence platform. The move would greatly enhance the human capability to parse vast quantities of security-related information in a timely manner.
For state and local government, the implementation of machine learning in cyberdefense could give analysts access to previously unavailable sources of information. In particular, Watson could tap into unstructured data, something human analysts cannot easily accomplish.
Analysts have an array of tools available for identifying and interpreting structured data: well-defined components such as IP addresses, login activity reports and specific malware signatures.
Unstructured data is harder to tap. The typical human operator doesn’t have a fast, effective way to read through the many blogs, social media posts and research papers that may contain clues about emerging attacks. There are about 10,000 security research papers published each year and 60,000 security blog posts issued every month, Kelley noted. Lacking automated tools, cyberanalysts are likely to miss important clues that may be embedded in these resources.
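In practical terms, part of what such automation does is mundane: turning free text into the structured indicators analysts already know how to handle. The sketch below is a hypothetical illustration, not drawn from IBM's tooling; it pulls candidate IP addresses and file hashes out of a blog-style snippet using Python regular expressions.

```python
import re

# Illustrative patterns for two common indicators of compromise:
# IPv4 addresses and MD5 file hashes. Real threat-intel pipelines
# cover many more indicator types and validate matches.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
MD5 = re.compile(r"\b[a-fA-F0-9]{32}\b")

def extract_indicators(text):
    """Return candidate IPs and MD5 hashes found in free text."""
    return {
        "ips": IPV4.findall(text),
        "md5s": MD5.findall(text),
    }

# Invented blog-post snippet for demonstration.
post = ("New phishing kit seen beaconing to 203.0.113.7; "
        "dropper hash d41d8cd98f00b204e9800998ecf8427e.")

print(extract_indicators(post))
```

Run against thousands of posts and papers, even a crude extractor like this surfaces leads a human reader would take days to find by skimming.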
“Humans can look at unstructured information, but we can’t parse it very quickly. It’s also harder for us to see subtle patterns in the data, where a machine can see those patterns and highlight them,” Kelley said.

That’s where Watson comes in. IBM Security executives envision the machine intelligence standing by on call, ready to be invoked any time an operator comes across a piece of information that needs deeper investigation. Once the usual tools have been tried, Watson could step in to scour the unstructured data, seeking out echoes and indicators that might correlate the suspicious activity with other factors in the cyberdomain.

It’s possible, for instance, that a piece of malware might show certain telltale signatures. Analysts could spot it but not know exactly how it is being implemented. Now suppose that malware is being used to drive a phishing attack, generating fake pop-up notices that fool users into disclosing a PIN or password. The analyst might not have visibility into the phishing scam, but Watson might spot it in the blogs.

“You need a machine that can go through vast amounts of unstructured data very, very quickly and then get a reminder email out to everybody in the company, telling people to never give out their PIN,” Kelley said. “Analysts may have access to great tools, but they may not be able to put it all together as quickly as they need to before damage occurs.”

IBM Security says it has tested Watson on security detail and drawn positive results. In one test, operators launched a white-hat attack against a system and asked first- and second-level analysts to check the health of the network. After half an hour of prodding, both teams declared the system clean of any threat. “Watson looked at the same set of data but was able to pull in more information from more different sources,” Kelley said. Watson correctly declared the system infected.

IBM is not alone in looking to wed human and machine intelligence in the cyberfight.
Last year, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the machine-learning startup PatternEx announced an artificial intelligence platform called AI2 that they say detects cyberattacks significantly better than existing systems. “To predict attacks, AI2 combs through data and detects suspicious activity by clustering the data into meaningful patterns using unsupervised machine-learning. It then presents this activity to human analysts who confirm which events are actual attacks, and incorporates that feedback into its models for the next set of data,” the scientists reported.

Some have speculated that AI could help ease the chronic shortage of cybersecurity workers by freeing up analysts to devote their attention to high-value operations. “By taking the repetitive tasks out of the analysts’ day, filtering out the noise and automatically remediating the primary threats, a significant proportion of the drudgery is removed,” noted security expert Piers Wilson.

Not everyone is convinced that AI will be a silver bullet in cybersecurity, however. Researchers from Microsoft and the University of Louisville, for instance, offer a catalog of AI failures in cybersecurity, and they warn about the possible risks of applying large-scale, super-fast solutions to potentially critical systems. “A single failure of a super-intelligent system may cause a catastrophic event without a chance for recovery,” they caution.

Despite such concerns, Watson’s advocates say the introduction of machine learning will be an overall win for the security operations center, and that government in particular stands to gain. “It’s about constrained resources, the ability to do more and to focus the people you do have much more quickly,” Kelley said. “If you are resource-strapped, as so many cities are, that is a big advantage.”
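The loop the CSAIL and PatternEx researchers describe, unsupervised scoring followed by analyst confirmation that shapes the next pass, can be sketched in miniature. Everything below is invented for illustration: the event records, the single "logins" feature and the simple z-score detector stand in for AI2's far richer features and models.

```python
from statistics import mean, stdev

def anomaly_scores(events):
    """Score each event by how far its 'logins' count sits from the mean,
    in standard deviations (a toy stand-in for unsupervised detection)."""
    values = [e["logins"] for e in events]
    mu, sigma = mean(values), stdev(values)
    return [abs(e["logins"] - mu) / sigma for e in events]

def triage(events, confirmed_benign, top_k=2):
    """Surface the top-k anomalies an analyst has not already cleared."""
    scored = sorted(zip(anomaly_scores(events), events), key=lambda p: -p[0])
    return [e for s, e in scored if e["id"] not in confirmed_benign][:top_k]

# Invented event data; "d" is an obvious spike.
events = [
    {"id": "a", "logins": 5}, {"id": "b", "logins": 6},
    {"id": "c", "logins": 4}, {"id": "d", "logins": 95},
]

benign = set()                   # analyst feedback accumulates here
queue = triage(events, benign)   # round 1: "d" tops the analyst's queue
benign.add("d")                  # analyst: "d" was a scheduled batch job
queue = triage(events, benign)   # round 2: "d" is no longer surfaced
```

The point of the design, as the researchers frame it, is that the machine handles the volume while the human supplies the judgment, and each round of labels makes the next round of machine output less noisy.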