Photo: Federal CIO Vivek Kundra
When you go to a medical office with a headache and a sore throat, it helps to learn that you're running a fever. But it's even more helpful when the doctor identifies your illness and writes a prescription.
A surprising number of current federal IT projects show signs of serious illness. That was one of the first things that Vivek Kundra learned when he became federal CIO in March 2009. Just minutes after being sworn in, Kundra was handed a document listing $27 billion in IT projects that were behind schedule and over budget.
As an immediate response, he and his team at the Office of Management and Budget (OMB) developed a Web-based application to monitor -- and publicize -- federal IT spending. Based on monthly figures from agency CIOs, the Federal IT Dashboard reveals how much each federal agency spends on IT initiatives and how well it manages those investments.
The dashboard provides an important transparency tool, Kundra said. But its debut in June 2009 was just the first step in rooting out wasteful IT spending. "We recognize that shining light alone is not sufficient to take on some of these structural problems when it comes to federal IT," he said.
In other words, it's not enough to take the patient's temperature. You must diagnose the disease, prescribe a medication and ensure that the patient takes it.
Kundra's answer to ailing federal IT projects is the TechStat Accountability Session. A focused inquiry into a troubled IT project, TechStat is designed to determine the cause and prescribe a solution.
Since the OMB held its first TechStat session with the Environmental Protection Agency (EPA) in January, Kundra said the OMB has conducted more than 40 such investigations.
To illustrate the need for TechStat, Kundra cited one particularly disappointing IT initiative: the long-running effort to develop the Defense Integrated Military Human Resources System. "The Department of Defense spent 12 years and $1 billion to implement an integrated human resource system that unfortunately failed," he said.
Stoplights, Laser Beams
The notion of monitoring the progress of federal IT projects isn't new. During the George W. Bush administration, the OMB met quarterly with federal agency officials to review IT projects and score them on cost, schedule and performance. Results were published as stoplight-style scorecards, coded green, yellow or red to indicate that a project was progressing well, had problems or had gotten into serious trouble. "We also had a management and a high-risk watch list that were published quarterly," said Karen Evans, former administrator for the OMB's Office of Electronic Government and Information Technology during the Bush administration.
"There was a lot of information behind that, plus a lot of meetings and oversight," Evans said of the OMB's process at the time. "[TechStat] is now the next evolution of that oversight."
Kundra is building on the previous administration's work by collecting data on IT projects more often, Evans said. He also is scrutinizing projects to investigate why they've gone wrong, she said. "What he's doing is very laser-pinpoint. He's been clear about the types of things he's looking at."
Kundra and Evans agree that inefficient IT project management is an old problem in the federal government. Many projects have suffered from unclear requirements or overly broad scoping, Evans said. "The key is to get the data to the right project managers, program managers and acquisitions officials, so they can manage the process."
To better understand why projects run off course, Kundra said he has studied years' worth of documents, from the 1996 Clinger-Cohen Act, to the 2002 E-Government Act, to many reports and memoranda from the OMB, the Government Accountability Office and the Inspector General's office.
These documents reveal a pattern, Kundra said. "Historically there's been a huge focus on policy, memos and guidance, and very little on execution." It's clear what the right solutions are: They include scoping projects correctly and terminating projects that don't produce results, he said. "The challenge has been that as a government, we haven't acted."
Kundra developed the TechStat session to convert good intentions into action. It's a compact tool, designed to run only 60 minutes. "It's time-boxed to an hour to make sure we have the appropriate decision-makers, and that the time of senior executives in that session is well spent," Kundra said.
The first five minutes of a session provide an overview of the IT project, the next 10 focus on problems that the agency and OMB have identified, and another 30-45 minutes are devoted to devising solutions, said Eugene Huang, senior adviser to Federal CTO Aneesh Chopra. Huang attends the TechStat sessions on behalf of the Technology Division of the Office of Science and Technology Policy.
In the final five minutes, attendees create a list of corrective measures for the agency to implement and report back on, Huang said. "In total, the scope goes from identification of a problem to closing out the problem."
Not surprisingly, the probing required to uncover what's gone wrong with an IT project happens long before the OMB schedules the TechStat meeting. "Sometimes these sessions take months of preparation," Kundra said. He, his staff at the OMB, staff from the agency whose project they're studying and members of the Federal CIO Council collaborate on this preliminary effort. They pore over Inspector General reports; interview CIOs, their deputies and end-users of the system that's being deployed; talk with staff at the Government Accountability Office; and study feedback that citizens have posted about the project on the Federal IT Dashboard.
"Then we come up with a hypothesis," Kundra said.
During the 60-minute session, participants test that hypothesis against the dashboard data. They decide whether the hypothesis explains why the project is failing or if there's another cause. Then they create a plan. In some cases, they temporarily halt the project; sometimes they pull the plug on it altogether.
Officials from both the White House and the agency deploying the IT project attend the TechStat session. Agency participants might include policy officials, the CIO, staff from the department secretary's office, and staff who work on the project's business and IT aspects.
This kind of broad representation is part of what makes TechStat effective, Huang said. "TechStat is a good way to make sure there is buy-in from senior leaders within agencies as well as from the staff level."
To get an idea of how a TechStat exercise can change the course of an IT project, consider a recent initiative at the Small Business Administration (SBA) to give employees identity/access cards based on smart-card technology. According to the Federal IT Dashboard, the project had fallen behind its targets, sparking significant concern.
"So we started studying this issue, and we started benchmarking," Kundra said. Participants looked at factors such as how many cards the SBA had issued so far, the rate at which it was deploying them and how other agencies were doing their identity card implementations.
Among other things, the investigation revealed that the SBA was spending $1,614 for each smart card it issued. But the General Services Administration (GSA) had already developed identity cards for its employees using the same smart-card technology, and it was paying only $250 per card, Kundra said.
During the TechStat session, the head of the SBA took the lead in halting the agency's smart-card program and obtained cards through the GSA's program. "That ended up saving taxpayer dollars," Kundra said. "But also the speed at which these cards were deployed was accelerated."
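The arithmetic behind that decision is simple. A minimal sketch, using the per-card figures cited above; the card count here is a hypothetical assumption, since the article doesn't say how many cards the SBA still had to issue:

```python
# Per-card figures as cited in the TechStat review of the SBA program.
SBA_COST_PER_CARD = 1614  # dollars, SBA's standalone program
GSA_COST_PER_CARD = 250   # dollars, via the GSA's shared program

def projected_savings(cards_to_issue: int) -> int:
    """Dollars saved by routing the remaining issuance through the GSA program."""
    return cards_to_issue * (SBA_COST_PER_CARD - GSA_COST_PER_CARD)

# Hypothetical volume of 1,000 remaining cards, for illustration only.
print(projected_savings(1000))  # 1364000
```

At more than six times the GSA's unit price, almost any remaining volume made switching programs the obvious call.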
Such a review, which forces agencies to deploy technology more efficiently, can free up dollars for new IT initiatives, said Adelaide O'Brien, research manager at IDC Government Insights. In a tough economy, everyone in the public and private sector is seeking ways to cut costs. "It's a healthy exercise for government to be doing," she said. TechStat teaches skills that promote good government and make government officials better stewards of tax dollars, she said.
TechStat resembles a strategy that Kundra developed when he was CTO for the District of Columbia, O'Brien said. The district started analyzing its IT projects like they were stocks in a portfolio, regularly rating them to determine whether to "buy, hold or sell" -- continue, temporarily halt or terminate.
TechStat similarly drills down into an IT project, O'Brien said. "By getting underneath and looking at it, agencies can determine, 'Is this a project we should continue? Is it buy, is it hold, or is it sell?'"
"Selling" is a genuine option. For example, even before the TechStat sessions began, data emerging from the Federal IT Dashboard spurred the Department of Veterans Affairs to kill several poorly performing projects. "We ended up halting 45 IT projects, of which we terminated 12, saving $54 million in capital that would have been wasted," Kundra said. "That's why the TechStat sessions are designed to be action-oriented. They're relentless and focused on results."
Among the "hold" decisions that the government has made, probably the one with the broadest impact emerged from the OMB's first TechStat session, focused on the EPA's troubled effort to modernize its financial systems.
"We learned from that experience, and that specific TechStat session, that frankly it's not just the EPA," Kundra said. "Financial systems across federal government have significant problems." The trouble is that these projects are too broad in scope, cover too many years and try to integrate all their functions too quickly, he said. "The scope of these projects, coupled with their complexity, ends up in failures across the board."
In June, the OMB issued a memo calling for all federal agencies to stop current efforts to modernize their financial systems. "That affects $3 billion of spending annually," Kundra said.
Applying It Elsewhere
The principle behind TechStat -- using hard data to monitor public initiatives and expose their problems -- didn't start in the federal government. Many analysts cite CompStat, an online system for mapping and monitoring criminal incidents, as a forebear of the Federal IT Dashboard and TechStat. CompStat was developed in New York City and later adopted by other cities.
Another Web-based performance tracking system is CitiStat, a program in Baltimore to monitor and evaluate the effectiveness of a broad range of city policies and processes.
On the federal level, the Department of Justice is developing an online dashboard to monitor federal agencies' handling of Freedom of Information Act (FOIA) requests. "The dashboard will allow the public to generate statistics on FOIA compliance across the government and from year to year," according to the department's Open Government Plan website.
The dashboard isn't the only TechStat feature that other government management initiatives already use or might use in the future. According to Huang, TechStat offers an excellent model for policy development discussions because of how it assembles many stakeholders, from different levels within their organizations, to focus on common concerns.
"In areas where there is a defined problem, or that require coordination across various government stovepipes, this ends up being an effective tool for reaching across the stovepipes and making sure they get senior agency buy-in, as well as staff-level buy-in, to drive the key decisions," Huang said. The Office of Science and Technology Policy might, for example, apply lessons learned from TechStat to future policy discussions about wireless spectrum allocations, he said.
A disciplined process like TechStat is helpful for governments that want to innovate with technology despite budget constraints, Kundra said. "There's no better way than to get more out of technology through these approaches, to make sure you're going hard after wasteful spending, making sure that you're unearthing the best practices and scaling them rapidly, to save money and produce results for your constituents."