Beware the Super-Metric, and Other Analytics Advice

Industry Perspective: Ben Tomhave of LockPath writes about how to put “analytical islands” of data into proper context.

It’s rare today to find yourself or your organization lacking adequate data in a given situation. Rather, there is plenty of data to support decisions, but it often sits within silos and is not rolled up into a more complete picture. If anything, our organizations are adrift in a sea of big data, searching for analytical islands that can provide some frame of reference from which a reasonable course can be charted. Unfortunately, as pressure increases to make sound decisions, relying solely on these analytical islands becomes increasingly difficult to defend.

The challenge with big data is not one of having data, nor is it a matter of knowing how to perform analysis on a given data set. Instead, the challenge is in producing something useful and meaningful that ties together multiple analytical points. That is, rather than relying on siloed analytics in narrow contexts, the difficulty comes in finding ways to pull those disparate analytics together to provide a more complete contextual picture. In essence, this is akin to producing a large-scale composite map from a collection of individual charts. In other words, it’s like cartography — mapmaking — for data.


Context Is Everything


One of the key selling points for moving to a continuous monitoring model is that it decreases risk. However, defending that claim can be difficult at times. Consider the U.S. State Department, which captured headlines starting in 2010 with its Information Assurance program. A SearchSecurity article in April 2010 detailed some of the key statistics surrounding the threat environment and how the State Department was mitigating the associated risks: “In a typical week, the department blocks 3.5 million spam emails, intercepts 4,500 viruses and detects more than a million external probes to its network.”

Effectively dealing with this amount of negative traffic led to positive results, due in large part to getting system administrators to apply security patches in a short time frame, which reduced the window of opportunity for attackers to exploit those weaknesses.

“Using this system, overall risk to the department's key unclassified network has been reduced by about 90 percent at both overseas posts and domestic locations,” the article stated.

These data points are very interesting, but more than two years later they raise a question: How do they fit into the overall context of a risk management program? There is a classic Rumsfeldian problem here, echoing the line attributed to former President George W. Bush’s Secretary of Defense, Donald Rumsfeld: “We don’t know what we don’t know.” Intercepting viruses, blocking spam and detecting millions of network probes is interesting, but what’s the greater “big data” context? More importantly, how do we take these analytical components and map them to other key data sets, such as traffic to websites, incident response metrics, virus infection cleanups, mop-up efforts from lost mobile devices and mission performance?

From an enterprise risk management perspective, there is a big picture that must be considered. The real challenge with big data is in going from these individual examples of data analytics to a bigger picture that successfully and meaningfully puts those analytics into the larger full-enterprise context. It’s how we map these analytical islands to each other that ultimately provides the support we need for improved decision quality.


Pockets of Data Everywhere


There is no shortage of data, but there is a bigger question to consider: Is decision quality optimal in light of these data sets or is there room for improvement? In fact, one could even go so far as to wonder if data — such as key performance indicators and various other metrics — are being rolled up into a unified view along with other business statistics.

There are several key data sets within IT that can be useful to measure, including:

  • uptime and availability;
  • mean time between failure (especially for hardware);
  • mean time to repair;
  • incoming website access statistics (who’s hitting your sites);
  • outbound network/Web access statistics (where your people and data go);
  • operational security metrics (firewall blocks, viruses detected, scans detected, vulnerability scan and penetration testing results, denied access and failed logins, etc.); and
  • security incident metrics (data breaches, physical incidents, lost devices).

Overall, this world of operational metrics, or key performance indicators (KPIs), is well known. Pulling them together in a meaningful way, however, remains difficult. Consider how operational KPIs such as uptime and availability may be affected by security events, which are typically tracked as separate metrics, and then consider the impact of these statistics on larger business functions and processes.
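
As a rough illustration of what pulling two such silos together might look like, the brief sketch below joins a hypothetical per-system uptime KPI with a hypothetical security incident count on a shared system identifier; every name and figure in it is illustrative rather than drawn from any real environment.

    # Hypothetical sketch: join a per-system uptime KPI with a per-system
    # security incident count so the two metric silos can be read side by side.
    # All system names and figures are illustrative, not real operational data.

    uptime_pct = {            # operational KPI silo
        "web-portal": 99.95,
        "mail-gateway": 99.20,
        "hr-app": 99.99,
    }

    security_incidents = {    # security metric silo
        "web-portal": 2,
        "mail-gateway": 14,
        "hr-app": 0,
    }

    # Correlate the silos on the shared system identifier to build one view.
    for system in sorted(uptime_pct):
        print(f"{system:14s} uptime={uptime_pct[system]:6.2f}%  "
              f"incidents={security_incidents.get(system, 0)}")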


Aligning Data to Value


What are the key metrics that are tracked at the top? What measurements demonstrate successful mission fulfillment? How are these metrics related to the operational picture? These are all key questions that must be considered in the face of big data. At the very least, there is an emerging imperative to have two tiers of analytics: one at the silo level and one at the overall enterprise level. Connecting these two tiers will provide the value-mapping that’s often missing today.

The first question to consider is: How is overall organization performance measured? Put aside operational concerns for a moment (including cybersecurity) and consider what the mission is and how it’s being fulfilled. What are the key attributes of these daily responsibilities? Identifying what your organization does, and how it performs these duties, is a good first step. Once these questions are understood, it is then possible to start developing and factoring the key metrics that go into measuring and demonstrating delivery on these missions. In doing so, it becomes easier to start equating operational analytics with overall mission analytics, which in turn provides a much-needed mapping of value from the top of the business to the daily operations that underpin it.

Another key component for aligning data to business value is in understanding the asset picture. In this context, “asset” is interpreted broadly to include people, process and technology. Understanding how organizational performance (and success) is measured is a great starting point, but it also needs to be considered together with the assets that comprise the organization. Factoring in business performance metrics and assets will lead to a deeper understanding of operational performance, which can in turn be correlated directly to operational KPIs. In essence, the entire process charts analytical islands throughout the organization, with the net result being to connect data to business value.

Taken together, the overall hierarchy starts with top-level business performance metrics, which map to assets, which in turn map to various operational areas. Rolling up these various analytical islands creates the second analytical tier, which helps address some of the challenges inherent in big data. Moreover, it allows organizations to move beyond disparate analytical islands to a cartographic perspective that views each island in a useful and meaningful context.
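
A minimal sketch of that hierarchy, with every name a hypothetical placeholder, might represent a business performance metric mapping to its supporting assets, and each asset mapping to the operational KPIs measured against it.

    # Hypothetical sketch of the hierarchy described above: a top-level
    # business performance metric maps to the assets that support it, and each
    # asset maps to the operational KPIs measured against it. All names are
    # illustrative placeholders.

    value_map = {
        "permit-applications-processed": {   # business performance metric
            "assets": {                       # people, process, technology
                "web-portal": ["uptime", "mean_time_to_repair", "security_incidents"],
                "case-review-process": ["cycle_time", "error_rate"],
                "records-database": ["uptime", "failed_logins"],
            },
        },
    }

    # Walking the map top-down traces business value to operational metrics;
    # reading it bottom-up rolls operational metrics into the business picture.
    for metric, detail in value_map.items():
        for asset, kpis in detail["assets"].items():
            print(f"{metric} -> {asset} -> {', '.join(kpis)}")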


Beware the Super-Metric


Aggregating various metrics into roll-up analyses can be very beneficial, but there is a point of diminishing returns. In fact, over-aggregating data can have a detrimental result, as the financial services industry has demonstrated over the past decade. It may be tempting to create a single super-metric that works like a temperature gauge or a thumbs up/down picture, but doing so comes at the expense of the detail needed to make a well-informed, defensible decision.

Reducing multiple analyses into a single super-metric obscures the truth. Consider, for example, a single metric composed of five equally weighted components. Four of those components could have high scores while one has a moderate to low score. The aggregate score will still look deceptively healthy, even though one component area may in fact be failing and represent material risk to the organization.
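
A quick worked example makes the distortion concrete; the five component scores below are hypothetical.

    # Worked example of the distortion described above: five equally weighted
    # components, four scoring high and one failing. All scores are hypothetical.

    components = {
        "patching": 95,
        "access_control": 90,
        "network_monitoring": 92,
        "incident_response": 88,
        "backup_and_recovery": 25,   # failing area that may pose material risk
    }

    super_metric = sum(components.values()) / len(components)
    print(f"Aggregate score: {super_metric:.0f} / 100")               # 78, looks healthy
    print("Weakest component:", min(components, key=components.get))  # backup_and_recovery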

Additionally, single metrics tend to be misleading by virtue of being too abstract. For example, if someone told you that today’s risk rating was 79, you might read that as either good or bad, depending on how much you know about the score. If yesterday’s score was 65, you might think today’s score is better. On the other hand, if you learned that the score was out of 250, perhaps you would not be so positive. In the end, the number may be quite meaningless unless you know exactly how it was calculated and what it represents. Worse, because it’s a number and not a label (e.g., high, medium or low), it may feel more authoritative, even though its basis is no more credible than a label.
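
As a small illustration of that scale problem, the same raw score reads very differently depending on the maximum behind it; both scales in this sketch are hypothetical.

    # Sketch of the scale problem: the same raw score of 79 implies very
    # different things depending on the (often unstated) maximum behind it.
    # Both scales here are hypothetical.

    raw_score = 79

    for maximum in (100, 250):
        print(f"{raw_score} out of {maximum} = {raw_score / maximum:.0%} of the scale")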

Finally, it’s important to consider whether or not aggregated metrics derived from big data put decision-makers in a better or worse position for making a decision. Over-aggregating data sets into reduced metrics can distort reality, leading to worse decisions than if decision-makers were exposed to larger data sets. We know how to perform analysis on various data sets, but big data means that we must now evolve that approach to provide a reasonable secondary tier of analysis that balances aggregation and reduction against the value of the resultant metric or metrics.


From Data to Decisions


When all is said and done, the value of big data is in how well it informs the business and leads to better decisions. If this sounds a lot like decision sciences, then you are right. How leaders make decisions is increasingly influenced by the data available and how it can most effectively be used. Being able to present disparate data sets in a meaningful, consumable manner without losing a reasonable degree of detail is a key challenge.

In the short term, one of the key areas for focus is cybersecurity and related operational risk management concerns. The reality today is that IT operations and cybersecurity represent a disproportionate influence on overall operational risk. That is to say, if your IT systems go down or are compromised, then the effect goes well beyond just a minor operational inconvenience, potentially disrupting many — if not all — business functions. Addressing concerns in these areas today will help stabilize the environment and allow for advances in other key performance areas as well.

There are three considerations for achieving better performance within a cost-efficient framework, all of which flow from putting analytical islands into a more complete context:

  • Know Yourself: What does the organization do, and how does it measure mission performance? How is improvement gauged and managed? Understanding the core functions of the business is a vital first step in being positioned to effectively handle big data. It is also important to understand the asset profile (i.e., people, process, technology) of the organization in order to properly factor key business metrics and values into functional roles and processes.
     
  • Know Your Data: What data sets are available? Are there areas where sufficient data is lacking? Are there common touchpoints in multiple data sets that can be leveraged in correlation and aggregation? Enumerating available (or desired) data sets is a key next step. Doing so provides the bottom-up view that needs to be aligned with the overall top-down view.
     
  • Connect the Dots: Being careful not to over-aggregate big data into super-metrics that undermine decision quality, the last step is connecting the top-down and bottom-up views through a secondary level of data analytics. The biggest challenges come in finding the right balance of aggregation versus detail in order to communicate a sufficient amount of information without overwhelming the decision-maker or obscuring vital details that are needed for making good decisions.

Quality decisions will flow naturally from having better data and following better decision-making processes. However, it’s important not to over-aggregate data sets, which can obscure important details needed to make reasonably well-informed decisions. The example set by the U.S. State Department’s Information Assurance program demonstrates the value of analytical methods and the success that can be achieved through a continuous monitoring approach. At the same time, the example also provides an early glimpse of the emerging challenge posed by big data. This challenge can be met through a multitiered analytical approach that charts the sea of data, connecting analytical islands into a super-set of KPIs and metrics that in turn improve security and performance.


Ben Tomhave is principal consultant for LockPath, which provides governance, risk and compliance software.