While measuring how cities perform their functions may seem easy in the abstract, it's no easy task to draw a line from one policy to a specific outcome: Has that new traffic management system eased congestion? Or was less congestion the result of the increased road maintenance project? Or was there more traffic because the economy picked up following a new civic coding program's launch?
Answering these questions is tough, but San Francisco has risen to the challenge.
In 2003, San Francisco voters opted to allow monitoring of the level and effectiveness of services provided by the city and county. This led to the creation of the Controller's Office City Services Auditor Division (CSA), which would offer objective and independent analysis on municipal services being provided. The CSA is broken into two units: the city’s internal auditor and the City Performance Unit (CPU).
The CPU team provides analysis, problem-solving and practical support to city departments to improve their service delivery. In the team's mission to improve how city services are administered, it prepares annual performance measurement reports to help the public, city officials and department heads understand the real impact of programs.
The goal of the Citywide Performance Measurement and Management Program (Performance Program) is to work with departments to create reliable and easy-to-use performance data — which helps the city and its residents make efficient, effective and thoughtful operational and resource decisions. As the nature of information sharing has changed and people have discovered that visual representations of data are far easier to digest, the CSA has made it a priority to present more of its report data graphically.
The CPU began releasing information through its Scorecard program in March 2016, which helped break down information on livability, public health, social safety net, public safety, transportation, environment, economy and finance.
“The scorecards site was trying to get away from PDF reports,” said CPU City Services Auditor Natasha Mihal. “We want something a little bit more interactive.”
Through the scorecards site, the CPU delivers high-level statistics that are easy to track and lend themselves well to comparison with years past. But the team also decided to look horizontally across measures, seeing how San Francisco compares with its peer cities.
In its Citywide Benchmarking Report (PDF), the city looked at open data available from 16 other municipalities and sent out surveys to see how it compared. The cities were chosen using a “likeness score” methodology that accounted for population and population density:
Baltimore; Boston; Denver; Long Beach, Calif.; Los Angeles; Minneapolis; Oakland, Calif.; Philadelphia; Sacramento, Calif.; San Diego; San Jose, Calif.; and Washington, D.C.
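The report does not publish the exact formula behind its "likeness score," but the idea can be sketched: score each candidate city by how close its population and population density are to San Francisco's, then rank candidates by that score. The figures and the distance formula below are illustrative assumptions, not the CSA's actual method or data.

```python
# Hypothetical sketch of a "likeness score" -- NOT the CSA's published method.
# Assumes a simple distance on log-population and log-density; all figures
# below are illustrative placeholders, not official statistics.
import math

# (population, density in people per square mile) -- illustrative values
CITIES = {
    "San Francisco": (870_000, 18_600),
    "Boston": (690_000, 13_900),
    "Denver": (715_000, 4_700),
    "Oakland, Calif.": (430_000, 7_700),
}

def likeness(a: str, b: str) -> float:
    """Smaller score = more alike. Uses log scale so population and
    density contribute on comparable magnitudes."""
    (pop_a, den_a), (pop_b, den_b) = CITIES[a], CITIES[b]
    return math.hypot(math.log(pop_a) - math.log(pop_b),
                      math.log(den_a) - math.log(den_b))

# Rank candidate peers by similarity to San Francisco
peers = sorted((c for c in CITIES if c != "San Francisco"),
               key=lambda c: likeness("San Francisco", c))
print(peers)
```

Under these assumed figures, Boston ranks as the closest peer because both its population and its density sit near San Francisco's; Denver's low density pushes it down the list even though its population is a close match.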
For water usage, safety net and population health metrics, the analysis compares San Francisco only with its California peers. Mihal did warn that "benchmarking is not 100 percent apples to apples." There are subtle differences in how cities report certain figures, and such comparisons can't show how a city has improved over time on a particular subject.
The reports take on the same characteristics as the scorecards by prioritizing visualized data. As Mihal explained, the team made a conscious decision to model the report on a PowerPoint presentation: one slide for one idea.
Through these reports, however, one can paint a fairly detailed map of what life in San Francisco is like. The city notes that residents log the second highest library usage rate (only behind Seattle), San Francisco houses more mental health institutions than any other peer city, and the city earned 3.9 percent more revenue than budgeted — higher than the 2.5 percent peer average. Rankings go both ways, however. The city also has one of the slowest traveling speeds for buses and the fourth highest property crime rate among peers.
In many cases, the data seemed to back up what many San Franciscans already believed to be true, Mihal said. For instance, as anyone who has tried to leave the Mission District and cross the Bay Bridge between 4:00 p.m. and 6:30 p.m. knows, you are going to be sitting in an unpleasant amount of stagnant congestion. The fact that there is now data to back up that experience helps policymakers form better solutions. "We are hoping that decision-makers use these reports as a tool to make better decisions," Mihal said.
Other times, the data can run contrary to the general feeling in the city. According to the Pavement Condition Index (PCI), the city's streets are improving year to year and already rank high among peer cities. However, if you asked people on the streets, said Mihal, they would generally think the opposite is true. It is good to keep in mind that the benchmarking report "doesn't tell us whether we are doing well or poorly," said Mihal, "but helps illustrate how we compare, which can start the conversation."
As for future versions of these reports, the city plans to continually improve the measures by which it understands its own performance. According to Mihal, the next versions will focus less on output measures — e.g., how much money is being put into fixing potholes — and more on outcomes — e.g., how many potholes are actually being fixed and how short the response time is. However this is done, the team will remain committed to providing the best data available to the greatest number of people, to help ensure everyone is well-informed and makes evidence-based policy decisions.
Ryan McCauley was a staff writer for Government Technology magazine from October 2016 through July 2017, and previously served as the publication's editorial assistant.