This story was originally published by Data-Smart City Solutions.

PerformanceStat is all but synonymous with performance measurement in the public sector. This is especially true at the city level, where the program originated. Developed as CompStat by the New York City Police Department, the tool was implemented with the aim of improving public safety through a better understanding of crime trends within the city. It operated by analyzing a broad range of data to generate new insights into how, why, when, and where crimes were taking place. The success of this implementation was due in part to its narrow, well-specified focus: the objectives were clear, and the available data could clearly link performance (the level of crime) to indicators (police manpower, etc.).

The “stat” technique took on a cost and efficiency mission with CitiStat, its first city-wide implementation, launched in Baltimore in 1999. CitiStat set out to replicate CompStat across multiple city agencies. The city followed the same basic process, using data to drive performance accountability in each department. The process featured strict responsibility structures and continual adjustments to agency strategy based on the program’s findings. The success of CompStat and CitiStat led to the rapid spread of Stat models to other city governments, and to further geographic and functional rebranding, including PhillyStat in Philadelphia and SchoolStat, a system used to measure education performance in various jurisdictions.

Programs under the PerformanceStat umbrella all have a few things in common. First, at their core, they ride a wave of “big data” tools and techniques, using analytics to take a fresh look at city data and gain unique insights into operations. The key selling point is often how these new insights can pull down costs and loosen the noose of constricting city budgets. The programs have also been characterized as much by their showmanship and strict accountability as by their use of data. The University of Pennsylvania’s 2010 report on PerformanceStat referred to “the iconic ‘Stat’ meeting, with its projector and dual screens, pre-prepared slide-decks, and the agency director at the podium before the Mayor to review the numbers in plain language” as driving the success of the program. Cities use strict and detailed templates for reporting, making PerformanceStat as much an exercise in adding consistency to the review process as it is in digging into piles of data.

Scenes from the HBO series The Wire, set in Baltimore, famously highlight the worst aspects of these meetings, showing fictional administrators terrified of the podium and doing anything possible to game the numbers in their favor. While this dramatic retelling of the PerformanceStat narrative almost certainly overstates its pitfalls, it raises valid concerns. There is a risk that the pomp and circumstance of the process has grown to overshadow the program as a tool. A handful of larger cities have abandoned their Stat programs after failing to realize the same benefits as early adopters. Ultimately, PerformanceStat should be recognized for what it is – one tool among many designed to help decision makers make better-informed decisions.

PerformanceStat has advanced internal performance management and accountability processes in government, but it has done less to address how cities interact with outside stakeholders. Some cities have opened Stat meetings to the public, but many still view the “Stat process as private and its information as proprietary” (Kingsley et al.). Further, simply opening PerformanceStat data to the public without appropriate framing is unlikely to produce meaningful transparency. Most citizens lack the capacity to digest voluminous “Stat” data or the inclination to attend meetings where agency heads plod through results with the Mayor’s office.

New ways of engaging the public on city performance are clearly needed: ways that take into account the information wants and needs of external stakeholders.