Outcome-Driven Policy With a Human Touch

December’s Summit for State and Local Government, co-hosted by the White House, emphasized the need for increased adoption of outcome-based policies to create economic opportunity and build stronger communities.

This story was originally published by Data-Smart City Solutions.

Outcome-driven policy in many ways seems like common sense: setting measurable objectives for policies gives government a much better chance of making meaningful change than implementing reforms with no clear means of tracking success. Working towards quantifiable goals not only gives policies more direction, but also allows governments to monitor the success of their programs and reform them if need be.

December’s Summit for State and Local Government, co-hosted by the White House and behavioral science company ideas42, emphasized the need for increased adoption of outcome-based policies to create economic opportunity and build stronger communities. Cities and counties shared lessons learned with the hope of galvanizing effective outcome-driven policies in local government. However, representatives from non-profits, think tanks and local governments alike agreed that implementing outcome-driven policies is not always as simple as it may appear. Many of the attendees made it clear that determining what policies to implement and whether or not they are working often requires a great deal of nuance.

One of the principal takeaways for outcome-based programs was the need to develop and implement metrics that direct city energy in a useful way. A city can devote a great deal of time and money to tracking and achieving some quantifiable goal, but if the goal isn’t the right one, the city’s efforts will be for naught.

For example, Third Sector Capital—a non-profit that partners with governments to fund service delivery projects—discussed its work with Santa Clara County, California, to address the county’s homelessness problem. Although Santa Clara County was committing a great deal of its budget to building homeless shelters, it still had the fifth-largest homeless population in the country. The problem was not how much funding the county was providing, but where it was allocating it: instead of directing funds toward permanent housing, the county had focused on the goal of providing beds in homeless shelters. While these shelters offered temporary relief, they did not provide a permanent solution for homeless residents, many of whom would end up back on the street after spending a night in a shelter. To resolve the problem, Third Sector Capital reworked many of the county’s contracts with service providers, prioritizing placement in permanent homes rather than shelters.

Yet while the number of people placed in permanent homes is a fairly obvious metric for homelessness initiatives, determining how to actually collect this information can be more difficult. Homeless individuals are hard to track over long periods of time, as they often have no address or phone number, making it difficult to determine whether they have secured permanent housing. A homelessness study in Minneapolis used a number of strategies to improve participant response rates, including hiring people familiar with the homeless community to make contact, collecting the names and addresses of friends, relatives, and agencies likely to know participants’ whereabouts, and distributing postcards for participants to send in every few months. The study also paid participants as an incentive to respond, but still saw high attrition rates, meaning innovative techniques for this kind of long-term outcome tracking are still needed.

In some cases, determining what metrics to use can also be complicated. For example, if the goal is to improve education in a community, how might a city measure this? Should the metric be improved test scores for students? Or perhaps longer-term measures, like college admissions or employment, would be more relevant. Engaging members of a community can help service providers choose both the most pressing problems to address and the right goals for resolving them. In many cases, the solutions developed by academics or non-profit boards may not align with the priorities of local communities. In the education example, school boards may see test scores as a useful representation of educational achievement, while a community may be more concerned with how education helps residents secure employment in the future. Understanding the problems that really plague a community requires putting people on the ground and facilitating conversations with those who will be affected. Hearing what results they value and implementing metrics that reflect their priorities can ensure outcome-driven policies are human-oriented, caring reforms.

Relatedly, it is important for policymakers to keep in mind that the goal of outcome-driven policy is not to produce an impressive statistic a city can publicize, but to improve economic opportunity, social justice, or services for citizens. The incentive to engage in cream-skimming — targeting those easiest to serve — is a problematic temptation built into outcome-driven policies. When the goal is measurable success, service delivery providers may be tempted to focus on those with whom their reform is most likely to succeed. For example, if the goal is to maximize the number of former prisoners who find jobs, a provider may target those who could have gotten a job without help in order to produce the best success rate.

To prevent this selective service, outcome-driven goals must emphasize value added: for instance, how many ex-offenders who would not have found a job on their own succeeded thanks to the intervention. To best assess value added, cities may employ data analysis to identify those in greatest need of intervention and those likely to succeed without government aid. This data may be derived from randomized controlled trials, which test interventions on a random sample of the population and track their effectiveness on some metric. By comparing the success rates of those who did and did not receive the intervention, cities can determine where to best allocate attention: to those groups unlikely to succeed without intervention but likely to succeed with it.
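At its simplest, the value-added comparison described above is just the difference in success rates between the group that received the intervention and the group that did not. A minimal sketch, using entirely hypothetical outcome data rather than results from any real trial:

```python
# Illustrative value-added calculation for a randomized controlled trial.
# All outcome data below is hypothetical.

def success_rate(outcomes):
    """Fraction of participants who achieved the target outcome."""
    return sum(outcomes) / len(outcomes)

# 1 = found a job, 0 = did not (hypothetical participants)
control   = [0, 0, 1, 0, 1, 0, 0, 0, 1, 0]  # no intervention
treatment = [1, 0, 1, 1, 1, 0, 1, 0, 1, 1]  # received intervention

# Value added: how much the intervention raised the success rate
# beyond what would have happened anyway.
value_added = success_rate(treatment) - success_rate(control)
print(f"Control success rate:   {success_rate(control):.0%}")
print(f"Treatment success rate: {success_rate(treatment):.0%}")
print(f"Estimated value added:  {value_added:.0%}")
```

A real evaluation would also account for sample size and statistical significance, but the underlying comparison is this simple difference.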

Once a local government has selected metrics and collected data, it must then monitor this data to produce actionable insights. Many local governments have accomplished this by implementing performance measurement systems: structured processes for tracking data that often involve regular meetings where attendees analyze data and organize interventions.

Importantly, these outcome-based policies do not necessarily need to involve costly implementation and data collection efforts by governments. At the Summit, representatives from the Laura and John Arnold Foundation emphasized the need to start small and draw data from already available sources. This is exactly what the foundation did with its Bottom Line program for low-income students, which provides one-on-one counseling to help students get into college and then supports them while in college. The program used a randomized lottery to determine which students would receive the service, which, while limiting the number of participants, allowed the organization to conduct an inexpensive trial of effectiveness that it could evaluate and build upon in the future. Moreover, the program used data from the National Student Clearinghouse Research Center, which collects education data to inform practitioners and policymakers about student pathways. Leveraging existing data allowed the foundation to avoid costly independent data collection, freeing up funds for more substantive intervention. Governments may also renegotiate existing contracts with service providers, building in data collection requirements so providers help with the collection work.

Local governments may also use Pay for Success (PFS) contracts and Social Impact Bonds (SIBs) to mitigate the financial risks associated with outcome-driven programs. Under PFS contracts, governments agree to pay for a program only when the service delivery providers achieve an agreed-upon result. Often these PFS contracts are paired with SIBs, in which private investors provide the upfront capital for service delivery and are repaid only if the providers achieve the desired outcomes. This scheme not only reduces the financial risk for governments intent on introducing outcome-driven reforms, but also spreads accountability for programs’ success among governments, service providers, and private investors. As part of its homelessness initiative, Santa Clara County introduced a Pay for Success project called Project Welcome Home. Thanks to $6.9 million in upfront investments, the county and housing services provider Abode Services intend to serve 150 to 200 chronically homeless individuals with clinical services and housing options. The county, which also invested $4 million of its own money, will repay Abode and the program’s other funders based on the number of months that participants maintain continuous stable housing. Santa Clara’s ultimate goal is for 80 percent of participants to achieve 12 months of continuous stable tenancy, at which point the county would fully repay its funders.
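To make the outcome-linked repayment mechanic concrete, here is a rough sketch of how such a calculation might work. The linear ramp-up to the 80 percent target and the hypothetical cohort data are illustrative assumptions, not the actual terms of the Project Welcome Home contract; only the $6.9 million figure, the 12-month tenancy threshold, and the 80 percent goal come from the article.

```python
# Hypothetical sketch of a Pay for Success repayment calculation.
# The proportional ramp-to-target formula is an assumption for
# illustration; real PFS contract terms are more detailed.

UPFRONT_INVESTMENT = 6_900_000  # dollars of upfront funding (from the article)
TARGET_SHARE = 0.80             # goal: 80% of participants reach the threshold
MONTHS_REQUIRED = 12            # months of continuous stable tenancy required

def repayment_owed(months_stably_housed):
    """Repay funders in proportion to progress toward the outcome target."""
    achieved = sum(1 for m in months_stably_housed if m >= MONTHS_REQUIRED)
    achieved_share = achieved / len(months_stably_housed)
    # Full repayment at or above the target; scaled down linearly below it.
    return UPFRONT_INVESTMENT * min(achieved_share / TARGET_SHARE, 1.0)

# Hypothetical cohort of 200 participants: 160 reach 12 stable months,
# exactly meeting the 80% target, so funders are fully repaid.
cohort = [12] * 160 + [5] * 40
print(f"${repayment_owed(cohort):,.0f}")
```

The key design point is that the government's payment obligation is a function of measured outcomes, not of services delivered, which is what shifts financial risk onto the investors.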

Outcome-driven policy has immense potential to bring pointed and accountable solutions to the most compelling social problems facing local governments. Focusing on outcomes forces government to be more direct in its policymaking as well as more accountable for its performance, often at little cost. However, it is important that policymakers take a cautious approach, engaging with communities and ensuring their reforms address those in greatest need. To implement outcome-driven policy that significantly benefits citizens, a human touch is key.