February 11, 2011 By Brooke Aker
In light of 9/11, the attempted Christmas Day bombing in 2009 and even last year’s WikiLeaks incident, it’s clear that the search and information-sharing process across government intelligence databases is flawed and missing an element that would potentially enable analysts to see threats and prevent future incidents.
Semantic technology is used increasingly to help organizations manage, integrate and gain intelligence from multiple streams of unstructured data and information. It is unique in its ability to exceed the limits of other technologies and come close to automatically understanding a text. Semantic Web technology, built on a web of understood word meanings and connections, is quickly eclipsing first-generation, keyword-based index search systems and second-generation social media interaction, but the transition is far from complete. Nowhere is this technology more useful than in the national intelligence space.
As a semantic technology professional, I think about how semantic technology could have aided in connecting the dots between the available information in the government intelligence community in the 2009 Christmas bomber case, and most recently, the highly publicized leak of classified government information about the war in Afghanistan.
As a former intelligence analyst, I know the frustration of lacking both complete information and computer systems capable of aiding the analysis process. Almost a decade after 9/11 and untold dollars later, the nation still struggles with effective intelligence sharing. An often mentioned issue is the lack of collaboration among intelligence teams on the analysis of incoming information from the multitude of existing databases.
The Los Angeles Times points to others:
“Lawmakers have been pushing for a capability to search across the government’s vast library of terrorism information, but intelligence officials say there are serious technical and policy hurdles. The databases are written in myriad computer languages; different legal standards are employed on how collected information can be used; and there is reluctance within some agencies to share data.”
The newspaper then makes the connection to 2009’s Christmas bomber threat:
“That makes it harder to connect disparate pieces of threat information, which is exactly what went wrong in the case of Umar Farouk Abdulmutallab, a Nigerian who on Christmas Day tried to blow up an airplane using explosives sewn into his underwear. The bomb failed to detonate, and a passenger jumped on him.”
Analysts must have a reason to collaborate. They must foresee or imagine how one or more evidence streams, often with many missing elements, overlap or fold into one another to form a complete picture. The reality is that even the best human analysts cannot juggle more than 50 to 60 data points (events, names, places, times, dates and the connections between them) at once.
But good technology that mimics the same approach has no such limitation. Allowing such a system to build the larger picture — to connect the dots — through trial and error, quickly and repeatedly with an analyst reviewing that picture for plausibility, internal consistency and impact, would be a more effective approach than adding a small army of new analysts to the problem.
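The trial-and-error approach described above can be sketched in a few lines of code. In this illustrative (and heavily simplified) Python sketch, evidence items are grouped by the entities they share, and any cluster that grows past a review threshold is flagged for an analyst. All item contents, scores and the threshold are hypothetical assumptions, not features of any real intelligence system.

```python
# Illustrative sketch: machine-assisted "dot connecting."
# Evidence items that share entities are merged into clusters; clusters
# with enough linked items are flagged for human review. All data and the
# threshold below are invented for illustration.

THRESHOLD = 3  # assumed review threshold: flag clusters with 3+ linked items

evidence = [
    {"id": 1, "entities": {"Abdulmutallab", "Yemen"}},
    {"id": 2, "entities": {"Abdulmutallab", "al-Awlaki"}},
    {"id": 3, "entities": {"al-Awlaki", "Yemen", "al-Qaida"}},
    {"id": 4, "entities": {"unrelated person", "London"}},
]

def cluster_by_shared_entities(items):
    """Greedily merge evidence items that share at least one entity."""
    clusters = []
    for item in items:
        for cluster in clusters:
            if cluster["entities"] & item["entities"]:
                cluster["entities"] |= item["entities"]
                cluster["ids"].append(item["id"])
                break
        else:  # no overlap with any existing cluster: start a new one
            clusters.append({"entities": set(item["entities"]),
                             "ids": [item["id"]]})
    return clusters

flagged = [c for c in cluster_by_shared_entities(evidence)
           if len(c["ids"]) >= THRESHOLD]
for c in flagged:
    print(sorted(c["ids"]), sorted(c["entities"]))
```

The machine does the repetitive fitting and testing; the analyst only reviews clusters that cross the threshold, which is exactly the division of labor the article argues for.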
A system that proactively and continuously assembles and tests all the available evidence about a person, action or event is exactly what current Semantic Web architecture provides. This approach is already prevalent in the private sector, and governments, too, are now taking advantage of the Semantic Web rather than a simple web of keywords.
To test this proposition, I used the timeline of known facts about the 2009 Christmas bomber as reported by The New York Times. Although this is a retrospective view, I wanted to know what I would have concluded over time, if I were an analyst and had good information sharing and robust analytical support, such as current Semantic Web technology can provide.
To begin, I took all the known facts and processed them semantically, using a semantic search and analysis system to analyze the content for people, places, things, facts, time and geography, and, most importantly, for events. Such analysis answers the question: Who did what to whom, when and where? Based on the established event timeline, Abdulmutallab would first have appeared in intelligence databases in summer 2009, when Britain placed him on a watch/no-fly list after his student visa was rejected.
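To make the "who did what to whom, when and where" idea concrete, here is a minimal sketch of the kind of structured event record such an analysis pass might emit. The records are hand-built from the publicly reported timeline; a real semantic system would extract them from raw text automatically, and the field names are my own assumptions.

```python
# Illustrative sketch: semantic analysis reduces each sentence to a
# structured event record instead of a bag of keywords. Records are
# hand-built here; a real system would extract them from text.

from collections import namedtuple

Event = namedtuple("Event", "who action whom when where")

events = [
    Event("Abdulmutallab", "studied", None, "2004-2005", "Sanaa, Yemen"),
    Event("Britain", "placed_on_watch_list", "Abdulmutallab",
          "summer 2009", "UK"),
    Event("Abdulmutallab", "disappeared", None, "September 2009", None),
]

# Even this tiny event store supports queries keyword search cannot,
# e.g. "who has been placed on a watch list?"
watch_listed = [e.whom for e in events if e.action == "placed_on_watch_list"]
print(watch_listed)
```

Because every record answers the same five questions, events from different databases can be merged and queried uniformly, regardless of how the original reports were worded.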
We can see right away that Abdulmutallab was known to have studied in 2004 and 2005 in Sanaa, Yemen; he has a direct connection to the radical Yemeni cleric Anwar al-Awlaki; he's loosely connected to al-Qaida because of his presence in Yemen; and he disappeared in September 2009. But the most important thread is that he was already on Britain's watch/no-entry list.
On the whole, perhaps this picture doesn’t portray a person who has planned a terrorist attack. But more connections come to light when we continue to build the picture of Abdulmutallab into fall 2009.
Through November 2009, several things become apparent. First, the number of connection points has risen significantly between Abdulmutallab, Yemen and al-Qaida relative to the previous summer. Second, the number of evidentiary warning signs around Abdulmutallab has grown to include his father, the United Nations and several U.S. agencies (e.g., the National Security Agency and National Counterterrorism Center). Third, there’s a lack of communication or information sharing among U.S. agencies.
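The growth in connection points described above is the kind of signal a machine can track mechanically: each new piece of evidence adds weight to an edge in a person-to-entity graph, so analysts can watch a link strengthen over time rather than re-read raw reports. The sketch below is an assumption-laden toy model; the entities and counts are drawn loosely from the article's timeline, not from real data.

```python
# Illustrative sketch: tracking "connection strength" as a weighted edge.
# Each supporting report increments the weight of a (person, entity) edge.
# Entities and counts below are illustrative, not real intelligence data.

from collections import defaultdict

edge_weight = defaultdict(int)  # (person, entity) -> supporting reports

def add_evidence(person, entity):
    edge_weight[(person, entity)] += 1

# Summer 2009: a single loose link to al-Qaida via presence in Yemen.
add_evidence("Abdulmutallab", "al-Qaida")

# Fall 2009: the father's warning, a U.N. report and a U.S. agency report
# each independently reinforce the same edge.
for _ in range(3):
    add_evidence("Abdulmutallab", "al-Qaida")

print(edge_weight[("Abdulmutallab", "al-Qaida")])
```

A rising edge weight is precisely the "increased strength" a visualization would surface, turning a judgment call ("is this link getting stronger?") into something a system can flag automatically.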
Nonetheless, Abdulmutallab was placed on a terrorist watch list but not on the more restrictive no-fly list. This might not have been the case if analysts had had a diagram visualizing the increased strength of his connection to al-Qaida, along with the additional connections of concern, at this stage of the analysis.
In retrospect, we know that there was still more time. Adding the events from December 2009 to the examination makes the graph richer still:
Once again, as the connections among Abdulmutallab, Yemen and al-Qaida strengthen, more U.S. agencies take note; now he has purchased airline tickets with a U.S. destination and checked no baggage (a Transportation Security Administration warning signal since 9/11). As with most intelligence analysis, the strongest indicators come too late, so understanding how to fit them into the overall picture quickly is essential; in this case, there was only the time it took Abdulmutallab to fly from Africa to the Netherlands and then to the United States. Semantic technology that can visualize the new input speeds up analysts' understanding.
Semantic Web technology can provide a window into how people, places, things and events come together into threats and opportunities. It’s impossible to expect analysts to manually “see” how anomalous and imperfect evidence streams fit together. And there is always more than one way that they fit together.
Let machines do what they're good at: when coupled with semantic understanding, weighing endless clues and hints, and fitting, testing, removing and adding puzzle pieces to see if the picture starts to make sense. Past a certain threshold, analysts can take over and do the work computers never will be able to do: apply human judgment and reasoning. Without that division of labor, judgments are never reached, connections are never made, and red flags are never raised.
The timeline of these past and recent events (9/11, the Christmas bomber and the recent data leak in Washington, D.C.) shows a serious need to address the gaps in our country's intelligence procedures and sharing processes. And this is where the Semantic Web comes in.
Brooke Aker is CEO of Expert System USA. He writes and speaks on topics such as competitive intelligence, knowledge management and predictive analytics.