Stock value and quarterly revenue may help shareholders gauge a private corporation’s track record, but such standards simply don’t apply to government agencies. Rather, most public offices rely on performance metrics to determine how successfully they’re delivering services to citizens and adhering to legislative regulations. By tracking everything from project deadlines to productivity levels, the public sector can better justify its IT investments, communicate its overall value and garner support from citizens and state assemblies alike.

Unfortunately, while there’s no shortage of measurements promising to keep government agencies on track, developing meaningful performance metrics isn’t easy. For starters, many agencies aren’t sure how to determine what exactly they should be measuring. Others struggle to select metrics that communicate a project’s true value. Worse, failure to capture the right data can cost even the best-intentioned department time, money and manpower.

Yet ask CTOs like Claire Bailey, who’s also director of the Arkansas Department of Information Systems (DIS), and they’ll tell you that it’s worth all the hand-wringing to establish effective metrics.

Making the Right Metrics

The Arkansas DIS provides more than 28 categories of services, including telephony, data networking and technical consulting for a variety of public entities. Brokering and managing telco services for the state is a highly regulated process that requires the DIS to keep close tabs on what it bills for its services — and the money it collects in return. Upon close examination, however, the DIS discovered that it wasn’t recovering the appropriate costs on its long-distance communications services.

“The DIS was having a huge problem in that there were a lot of costs the agency couldn’t account for,” recalled John Talburt, professor of information science and engineering at the University of Arkansas, who worked with Bailey to establish clear metrics.

Enter effective metrics. First, the DIS examined what it pays for its long-distance services. Next, the agency looked at what it was billing to provide these services and to which public agencies. By measuring these two variables, the DIS determined that it was only billing for a portion of the long-distance services it resells to the public sector.

“The metrics showed that there were actually minutes lost that were not being billed because our data didn’t have an appropriate customer identifier built into the system,” Bailey said. “When we analyzed the data, we found some phone calls that were being made that didn’t align to specific customers. Therefore, we didn’t know where to bill them. Metrics helped us identify a gap between what we were paying for the service and what we were billing for the service.”

It’s a discovery that wouldn’t have been possible without the DIS’ use of effective data-quality metrics. “The DIS’ [under-recovered costs] were measurable because they could easily look at how much money they were spending on wholesale communication and what they were recovering based on the itemized bills that were given by the agency,” Talburt said. “That was a very measurable product.”
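The reconciliation the DIS describes boils down to comparing what the agency paid for wholesale minutes against what it could actually bill, and flagging call records that lack a customer identifier. A minimal sketch of that kind of check, with invented field names and sample data (not the DIS’ actual schema or figures):

```python
# Hypothetical call records: each holds minutes used and the customer the
# call should be billed to. Records with no identifier can't be invoiced.
call_records = [
    {"minutes": 120, "customer_id": "AGY-001"},
    {"minutes": 45,  "customer_id": "AGY-002"},
    {"minutes": 30,  "customer_id": None},  # no customer match
    {"minutes": 60,  "customer_id": ""},    # missing identifier
]

billable = [r for r in call_records if r["customer_id"]]
unbilled = [r for r in call_records if not r["customer_id"]]

total_minutes = sum(r["minutes"] for r in call_records)      # what was paid for
billed_minutes = sum(r["minutes"] for r in billable)         # what was recovered
unbilled_minutes = sum(r["minutes"] for r in unbilled)       # the gap

print(f"paid for {total_minutes} min, billed {billed_minutes} min")
print(f"gap: {unbilled_minutes} min with no customer identifier")
```

The gap between total and billed minutes is exactly the under-recovery the agency’s metrics surfaced; tracing it to records with empty identifiers points at the root cause rather than the symptom.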

Crunching Numbers

But effective metrics are more than just a high-tech sleuthing tool. By pinpointing exactly where the agency was leaking funds, the DIS avoided passing down additional fees to its customers.

“We would have had to raise the rates, which would have a fiscal impact on our customers if we didn’t correct the issue,” Bailey said. “By identifying the root cause and correcting the issue, we were able to save the state money and not inflate the long-distance rate. Now, the under-recovery of long-distance costs is no longer in existence. In fact, we were able to lower our rate.”

That’s not to suggest, however, that formulating effective metrics is a cut-and-dried endeavor. Take the recent trials and tribulations of the Miami-Dade County Department of Solid Waste Management. For years, one measure of the department’s performance was the tonnage of illegally dumped waste it removed from roadsides and other areas in the county. To reduce these “unsightly piles of waste,” the agency began collecting garbage more often, said Chris Rose, the department’s deputy director. The aggressive cleanup strategy reduced the total amount of illegal dumping: people tend to add to existing trash piles, so fewer heaps on the ground meant less new garbage stacked on top of them.

Unexpected Challenges

But the Solid Waste Management department suddenly faced a conundrum. The department’s success in reducing the amount of illegal dumping earned it a red flag from its performance tracking system.

“The tonnage collected had been going down over time and therefore was showing up as ‘red’ because we weren’t collecting as much material as before,” said Rose. “Once we delved into the problem, we began calling it a ‘good’ red because it meant we were getting out and catching the piles faster.”

But that’s not all. “We had to start viewing the metrics in the context of total tonnage, the number of piles present and the speed at which we were responding to those piles,” Rose said. “All of that context made us realize that a lower tonnage of illegally dumped material was not a bad thing.”
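Rose’s point, that the same tonnage number reads differently once pile counts and response speed are in view, can be sketched as a pair of scoring rules. The thresholds and figures below are invented for illustration, not the department’s actual targets:

```python
# A naive rule that judges performance by tonnage alone: falling tonnage
# shows up as "red," even when it signals faster cleanup.
def tonnage_only_status(tons_collected, target=1000):
    return "red" if tons_collected < target else "green"

# A contextual rule: low tonnage is fine if open piles are few and the
# agency is reaching them quickly. All thresholds are hypothetical.
def contextual_status(tons_collected, open_piles, avg_response_days,
                      tonnage_target=1000):
    if open_piles < 50 and avg_response_days <= 2:
        return "green"  # less material because piles are caught early
    return tonnage_only_status(tons_collected, tonnage_target)

print(tonnage_only_status(700))         # flags a "good" outcome as red
print(contextual_status(700, 30, 1.5))  # same tonnage, read in context
```

The design point is the one Rose makes: the raw measure stays untouched, and the interpretation layer is where the added variables change the verdict, so no one can accuse the agency of rigging the totals.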

The department faced a quandary. On the one hand, the garbage collection metrics made it falsely appear that the agency was sleeping on the job. On the other hand, tweaking the metrics to reflect the agency’s new garbage collection strategy could look suspicious.

“We don’t want to be viewed as rigging the totals,” Rose said. Or worse, external factors such as spring break could result in an uptick in illegally dumped material, requiring the agency to modify its metrics yet again.

“As soon as you change metrics to fit your current situation, external circumstances can change on you,” warned Deborah Silver, Miami-Dade County’s director of information and technology services. “You shouldn’t be flipping metrics on a monthly basis. There has to be a balance.”

So far, the department has opted to leave the metrics as is, but that could change soon, Rose said.

“As long as the information is internal and doesn’t go too far outside of the department, we know what it means,” Rose said. “But if it gets published, we’re going to have to change it, because someone who doesn’t do this day-to-day won’t catch the context.”

Cindy Waxer  |  Contributing Writer

Cindy Waxer is a journalist whose articles have appeared in publications including The Economist, Fortune Small Business, CNNMoney.com, CIO and Computerworld.