Visitors to the Boston website “cityofboston.gov/bar” are first met with a grid of sixteen flashy icons. They show children smiling in front of school buses, emergency vehicles at the ready, and dramatic cityscapes, all overlaid with bold statistics capturing the details of life in Boston. Each icon also acts as a portal, allowing users to drill down into more elaborate views of city services. From “Parks” to “Police” to “Innovation and Technology,” these portals provide the information residents need to actively monitor city government performance.
Cities around the United States have taken steps to improve the connection between performance reporting and their communities. City-level solutions range from simple spreadsheets and hard-copy reports to more advanced online tools such as the portal system implemented in Boston. However, what is said through these tools matters more than the medium. The Boston About Results (BAR) site is one example of how city governments continue working to manage the quantity, type, and framing of reporting.
The site is managed by Boston’s Office of Administration and Finance as a public-facing dashboard for reviewing city operations across sixteen areas of city government. These areas are framed around citizen use cases (e.g., Schools, Parks, Public Health) rather than underlying administrative departments, and each is tied to a set of supporting strategies and performance indicators. For example, the “Inspectional Services” area of the scorecard includes the strategies “cleaning of vacant lots” and “respond to housing no heat complaints within 24 hours.” “Cleaning of vacant lots” is in turn tied to indicators for “vacant lots reported,” “vacant lots cleaned,” and “percent of vacant lots cleaned by owner.”
BAR presents data on metrics over a three-year time horizon, graphing both actual and targeted performance. The data is curated from a much larger pool of internally generated metrics to provide a focused view of city performance. The current scorecard reports data from only 16 of the 46 departments within Boston city government, and even within these areas, the metrics are selectively chosen. Despite this narrowing, the final product includes hundreds of strategies and supporting indicators. The BAR site helps users navigate this data through a tree-like structure: indicators are grouped under strategies, and strategies under city performance areas.
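The tree-like structure described above can be sketched as nested mappings. The following is a hypothetical illustration only (the schema and the `indicators_for` helper are not the city's actual data model), with example values drawn from the Inspectional Services area:

```python
# Hypothetical sketch of BAR's tree-like organization (not the city's actual schema):
# performance areas contain strategies, and strategies contain supporting indicators.
bar_scorecard = {
    "Inspectional Services": {                      # citizen-facing performance area
        "cleaning of vacant lots": [                # strategy
            "vacant lots reported",                 # supporting indicators
            "vacant lots cleaned",
            "percent of vacant lots cleaned by owner",
        ],
        "respond to housing no heat complaints within 24 hours": [],
    },
}

def indicators_for(area, scorecard):
    """Drill down from a performance area to all of its supporting indicators."""
    return [ind for strategy in scorecard.get(area, {}).values() for ind in strategy]

print(indicators_for("Inspectional Services", bar_scorecard))
```

Drilling down on the site works the same way: a visitor selects an area, then a strategy, and finally sees the indicators beneath it.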
The drill-down organization of the site aids comprehension, but it can make it hard to gain an overall picture of city performance. Arguably, aggregating results at the city level is not the role of the BAR system. Performance management literature suggests that measures should be curated to present “the minimum number of metrics to effectively provide decision making information” and should be “tied to outcomes” (Poister). At the strategy level, BAR certainly succeeds on both scores: it narrows a wide array of statistical data down to the most meaningful indicators for select city functions.
This relationship between outcomes and indicators can be difficult to draw in a citizen-facing reporting model. Government officials likely have an idea of the types of indicators they want to watch, but these are not necessarily the same as those which interest the population at large. In San Francisco, Performance Measurement Program Lead Kyle Burns describes how the city formerly monitored only the cleanliness of parks. But after conducting citizen surveys, officials found that a park’s smell was a stronger determinant of whether a citizen would stay and use the area. People may not like seeing trash in their parks, but at least for those living in San Francisco, smell mattered more in determining how they valued the city service.
These types of findings also raise the question of audience for a performance reporting tool. Who is actually looking at the report or portal? At least anecdotally, visitors to these sites tend to be the usual suspects: reporters covering local government and so-called super-citizens, the minority of residents who take a particularly active role in government. Ideally, performance reporting systems should cater both to these “super” users and to casual visitors. High-level, easily comprehensible information should be available to those with a passing interest in city performance, while more in-depth reporting should be available to the reporters and super-citizens who ultimately use this information to better inform the public at large.
For many of the reasons discussed, interpretability remains one of the most important, yet most difficult, parts of effective citizen-facing performance reporting. Through its curation and parsing of data, Boston’s scorecard does a great job of striking this balance. Even here, however, cracks in data presentation can lead to head-scratching results.
BAR’s performance indicator for delinquent real estate tax collection shows a clear cyclical pattern, with collections spiking between June and December and dropping off to almost nothing in the remaining months. Meanwhile, collection targets are mapped on the same graph as a constant monthly goal throughout the year. The cyclical nature of collections may be annualized, aggregated, or otherwise accounted for in internal reviews, but it remains unclear how an external user should interpret the results presented on the scorecard.
Painting a meaningful picture of performance with city data is just as important as finding the right audience or selecting the appropriate measures to evaluate success. Balancing all of these areas remains something even the most forward-thinking cities in the space must continue to actively manage.
The next article in this series will examine how the city of San Francisco has used different tactics to approach the area of citizen-facing performance reporting.