August 4, 2010 By Robert L. Otto
A challenge facing many new CIOs, CTOs and other technology leaders is the need to change their organization's culture and reputation. Fairly or not, technology leaders are often saddled with the perceptions (or misperceptions) that their team can't deliver projects on time or on budget; that they fail to contribute to enterprise performance improvements; and that their bureaucracy and processes prevent them from being agile or innovative. These perceptions can become reality if the technology team begins to accept these diminished expectations.
This was the situation I faced when I became CIO of the U.S. Postal Service (USPS) in 2001. Over the next seven years, we succeeded in transforming IT's performance and perception - reducing IT spending by more than $800 million annually while dramatically improving service levels. But it wasn't just cost cutting; we also matched our key competitors - FedEx and UPS - innovation for innovation. And for five consecutive years, Computerworld magazine named us one of the best places to work in IT.
I'm often asked how we did it. Although numerous factors contributed to this success - like beginning with a great team that had significant potential - one of the most important was a metrics-driven approach to managing IT. Those familiar with process improvement methodologies, such as Lean or Six Sigma, recognize the need to continually measure performance to improve it. Equally important, this rigorous and business-focused approach also allowed us to establish ourselves as a valued partner to the rest of the organization. At a minimum, this data can allow you to readily refute erroneous charges.
Embracing measurement as a strategy - and not simply a tool - is key to creating a more strategic and accountable organization. At the same time, IT organizations must recognize that becoming more metrics-driven isn't a quick fix, but rather requires a long-term commitment. In reality, it encompasses three distinct phases - what I call architect, benchmark and communicate (ABC) - executed continually in a life-cycle model.
We need to agree up front that measurement for measurement's sake isn't the goal here. Too often, we fail to report on what's important to the overall organization and instead provide metrics that are meaningless to our internal stakeholders. Providing more of the same isn't the answer.
In fact, IT organizations can produce excellent individual performance metrics from a technical perspective and still fail to meet end-customer requirements. In some cases, we are simply measuring items that don't affect outcomes. In others, the variances across multiple metrics compound until overall performance falls outside acceptable bounds, even though each individual indicator looks fine.
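To make that second failure mode concrete, here is a minimal sketch (the figures are hypothetical, not USPS data): five services in a request chain, each meeting a 99 percent availability target, still deliver only about 95 percent availability to the end customer, because the shortfalls compound.

```python
# Hypothetical illustration: five services in a serial request chain,
# each individually meeting a 99% availability target.
service_availability = [0.99] * 5

# End-to-end availability of a serial chain is the product of its parts.
end_to_end = 1.0
for a in service_availability:
    end_to_end *= a

# Every component passes its own check, yet the customer sees ~95.1%.
print(f"Each service: 99.0% | End to end: {end_to_end:.1%}")
```

Each green indicator on an IT dashboard can thus coexist with a customer experience that misses the mark, which is exactly why the aggregate outcome, not the component metric, has to be the unit of measurement.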
If nothing else, we should begin looking at performance through our clients' eyes (and those of our stakeholders as well). This includes replacing our traditional, bottom-up approach to reporting with a top-down perspective that first builds upon the big picture metric, which is the business or mission outcome. Rather than reporting on uptime and system availability alone, we must consider how better integration or a new data warehouse will improve call center productivity and customer satisfaction.
The missing link here is often an architectural understanding of the enterprise that allows us to quantify and demonstrate our contributions to the organization's success. What's needed are more direct linkages between the IT services that we deliver and their impact on the organization's objectives and strategies.
For example, the USPS was maintaining six database platforms when I became CIO. This created many operational inefficiencies that were, at first, my problem alone - until I could demonstrate to my business peers how these data silos were also hurting their performance.
Some may argue that it's impossible to establish these correlations. In reality, we've already created an entire discipline - enterprise architecture - to accomplish just that. The challenge is that in many examples, enterprise architecture isn't being used effectively to shape and mold the organization. This must change.
Another frequently cited challenge is mapping generic services to a specific cost center or allocating these benefits across multiple users. This is certainly difficult initially. However, as you develop your architectural understanding of technology's role in the business, it's possible to refine the model to become increasingly granular.
Going this extra mile is fundamental to addressing the 80/20 paradigm - the tendency to spend 80 percent of the IT budget simply maintaining existing systems, leaving only 20 percent for new initiatives - as it will allow you to capitalize on shared-service business models and identify specific areas ripe for outsourcing, consolidation and elimination. Through a more systematic review of inquiries, we cut help desk volume at the USPS from 150,000 to 25,000 calls per month. At that point, it was possible to outsource the "tamed" help desk process, which we did to capture additional savings.
Having established a measurement strategy that aligns with the organization's objectives, it is now time to begin using this information. Once again, it's important to recognize that we're not simply collecting data, but instead seeking ways to improve our performance. As a result, we need something to compare our performance with.
In terms of specific benchmarks, we should consider our past performance, industry yardsticks and long-term goals. Together, these allow us to readily answer three key questions: How are we performing compared with our own history? How do we stack up against our peers? And are we on track to meet our goals?
This insight provides context for the data that we are collecting. For example, were 25,000 distinct software applications too many for the USPS to maintain?
Ultimately knowledge is power. Many organizations lack an effective asset management strategy and rely on their vendors to provide this information. This puts them at a competitive disadvantage when negotiating contract renewals. At the USPS, we captured this information religiously, and we used the insight to optimize each contract - for example, per server or per user - for the application's actual usage profile.
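As an illustration of that kind of contract optimization, the sketch below (with invented prices and counts, not actual USPS figures) compares per-server and per-user pricing against an application's usage profile and picks the cheaper model.

```python
# Hypothetical licensing comparison: choose per-server vs. per-user
# pricing based on the application's actual usage profile.
def cheaper_license(servers, users, per_server_price, per_user_price):
    """Return the lower-cost licensing model and its annual cost."""
    server_cost = servers * per_server_price
    user_cost = users * per_user_price
    if server_cost <= user_cost:
        return "per-server", server_cost
    return "per-user", user_cost

# A heavily consolidated app: few servers, many users -> per-server wins.
model, cost = cheaper_license(servers=4, users=2000,
                              per_server_price=10000, per_user_price=50)
print(model, cost)  # per-server 40000
```

Without accurate asset and usage data of your own, you cannot even frame this comparison - which is why relying on vendors for that information is a negotiating handicap.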
Repetition and continuity are key to continuous process improvement. Without executive commitment to maintaining the rigor of continually measuring performance (and acting upon this insight), the effort will fail.
Engineers at heart, technologists are often terrible marketers. We tend to believe that the best product or performance will speak for itself, and we rarely see a need to promote this superiority. In reality, it's impossible to separate Apple's marketing prowess from its engineering strength when evaluating the company's enviable success.
We need to overcome our reluctance to highlight the successes of our IT organizations. This is important for two reasons. First, it helps to create a culture of accountability in which expectations are regularly and publicly met. Second, showcasing superior performance helps us earn additional rewards, such as more resources, trust and authority. These tangible and intangible benefits can be important motivators for both our leadership team and our rank-and-file workforce.
As an example, I converted a conference room into a full-time war room at the Postal Service. Every surface was covered with metrics showcasing our performance and progress. This served as a constant reminder, for the team and for me, of the commitments we had made and the expectations we needed to meet. To my peers within the organization, it visibly demonstrated our dedication to running IT as a business.
We also regularly promoted our performance across numerous channels. We published an ongoing newsletter and sought regular opportunities to brief the agency's senior leadership. Through these efforts, we established ourselves as change agents within the USPS.
Widespread use of the CitiStat/StateStat performance-measuring tools and the federal government's adoption of IT dashboards highlight data visualization's role as another important communications tool. Unlike traditional, static reporting, data visualization can be used to provide each stakeholder with a customizable real-time view into operational performance. I've used these tools to foster better collaboration between IT and internal clients, enabling productive conversations about what the appropriate goal should be and at what cost.
Building upon this point, it's important to distinguish between different measurements and segment them accordingly. In reality, organizations have strategic (i.e., long term) and day-to-day objectives, goals and commitments. Reporting on each metric's performance is important to some, but not everyone. Determine what matters to whom and align your reporting strategy appropriately.
While this isn't the quick fix that some might prefer, embracing these ABCs can help leaders create a more customer-focused and outcome-oriented IT organization. Through this process, technology leaders and their teams will secure a well-earned reputation as change agents within their agencies.