When Big Data Gets It Wrong

More data is streaming into government from more sources than ever before. But is it making us smarter?

“Data” is the watchword of the progressive gov tech professional today. If we can put a number to a problem — quantify it, catalog it, assault it with an avalanche of analytics — all will be well.

Sometimes data doesn’t deliver, or at least it appears to come up short. What does this say about data-driven government? Are we asking too much of big data? “There are people out there who have oversold the potential of data to transform how we deliver government services,” said Paige Kowalski, a policy director at the Data Quality Campaign, which promotes effective data use within education.

Data advocates are steadfast: When data doesn’t deliver, it’s because people are using it wrong. “Garbage in, garbage out,” runs the old adage.

Three instances offer an opportunity to explore the apparent limitations of the data-driven mindset:

  • The presidential election. Hundreds of pollsters, thousands of polls, masses of analytics applied in various models, and yet nearly all called it wrong. Did big data fail?
  • Predictive policing. Advocates say data models of police activity can highlight potential trouble spots and drive smart allocation of assets. Critics say it’s a way to worsen existing bias and deepen community divides.
  • Michigan’s benefits letters. Relying on data-driven systems, the state wrongly sent letters to 12,000 residents telling them their Medicare benefits had been cut. Not “big” data, but still an example of data efforts coming up short.
Examples like these can give data a bad name, and they may make people question whether analytics are in fact the wisest way to govern. We’ll give away the ending now and say that this is probably the wrong conclusion: Data works.

“What all these [examples] show us is that data in and of itself isn’t enough. The quality of the data is always of prime consideration. The skills of the analysts and the people who do the data modeling really have to be up to the task,” said Rick Howard, a Gartner research vice president in government and education. “It’s all in the formulation of the question. If you ask the wrong question, building data models isn’t going to give you the right answer.”

Let’s see how this plays out in practical terms.


The famous news pic of Harry Truman hoisting a flubbed headline used to be the icon for how the media can call an election wrong. Maybe a new image will emerge to express the epic polling flub of 2016.
(Associated Press photo by Byron Rollins)

In the days before the election, Reuters gave Hillary Clinton a 90 percent chance of victory. The Huffington Post said 98 percent, the Princeton Election Consortium 99 percent, The New York Times 85 percent and Nate Silver, whose FiveThirtyEight blog was more conservative than the others, still gave Clinton a 71 percent chance of victory.

How can so many smart people, armed with so much data and equipped with the finest in analytic tools, be so very wrong? Surely data isn’t all it is cracked up to be — or maybe the data was faulty.

“It seems clear that the data being fed into those models had some flaws in it,” said Alex Howard, deputy director of the Sunlight Foundation. “Human nature as of yet cannot be reduced to a series of ones and zeros. We cannot always expect rational actors. People make decisions against their own interests and against common sense.”

What exactly was wrong with the data?

“The pollsters collecting data didn’t sample the right populations in the right way, and respondents might not have been entirely truthful,” said Kentaro Toyama, a professor at the University of Michigan’s School of Information and author of Geek Heresy: Rescuing Social Change from the Cult of Technology. He pointed to statements by GOP activists saying the polls understated Trump’s support because voters don’t want to admit to backing him.

It may also be that something is fundamentally flawed in the presidential polling models themselves.

“It’s not like we were measuring actual voting behavior. We were measuring public opinion and taking that as the sole indicator of voting behavior,” said Daniel Castro, director of the Center for Data Innovation and columnist for Government Technology.

“There should have been more information on the number of attendees at rallies. There should have been analysis of people’s social media comments. There could have been facial sentiment analysis at rallies. There are lots of other data sources out there that could have been used. That’s big data, and we didn’t have that in this case,” he said.



For several years, government IT has preached (and been preached to) in words along the lines of: Data will fix everything. Shall we now amend? Proposed revised credo: Data will improve many things in government, if used with proper care by thoughtful, well-trained analysts.


It’s a little wordy, but probably closer to the truth. In the real world, this revised and somewhat more modest vision of data works like this:

“You need to formulate the questions properly. They have to be technically translated in the right way and the models have to be used, and then the data itself has to have some accuracy.”
— Rod Davenport, chief technology officer, Michigan Department of Technology, Management and Budget

“There may be some overselling going on, some hubris among the proponents of these things. You have to systematically evaluate how effective the data analytics are compared to more conventional approaches. The only way to know that is to remain agnostic and when you do apply it, perform a real evaluation on how well it is doing, especially compared to the status quo.”
— Daniel Nagin, professor of public policy and statistics, Heinz College, Carnegie Mellon University

“We have run a lot of our public institutions on anecdote and gut feeling. In almost any government situation, data can help to give you some insight into why that reality looks the way it does. It’s less about the limitations of the technology and more about the skills of the user. It comes down to the skills and experience of the people who are setting up the analytic models and the policymakers who are there to help interpret the results.”
— Rick Howard, vice president in government and education, Gartner Research

“We need to be humble and realistic about the kinds of problems data can solve. In situations where data is the bottleneck, we should have more and better data. But in government, so much depends on factors not contained within the data. Improving the data is definitely worth it, but it will not always solve the entire issue.”
— Kentaro Toyama, professor, School of Information, University of Michigan

We can ask data to manage Medicare, to police our streets and to predict our elections, and data can help do all those things. At the end of the day, though, the government technologists at the heart of the data enterprise will be the ones who determine whether the effort succeeds or fails.

 

Big data isn’t just about the volume of information but about the frequency, the iterations. Big data models a scenario tens of thousands of times, whereas presidential elections happen only once every four years. So that piece was missing too, Castro said.
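To make Castro’s iteration point concrete, here is a minimal, purely illustrative sketch, not any forecaster’s actual model, of how a poll-based forecast replays an election tens of thousands of times. The state margins, error assumptions and electoral vote counts below are invented for the example.

```python
import random

# Illustrative state polling margins (candidate's lead, in points) and electoral
# votes. These figures are invented for the sketch, not real 2016 polling data.
STATES = {
    "FL": (0.5, 29), "PA": (2.0, 20), "MI": (3.5, 16),
    "WI": (5.0, 10), "NC": (-1.0, 15), "OH": (-2.5, 18),
}
BASE_EV = 232        # electoral votes assumed safe before the swing states
POLL_ERROR = 4.0     # assumed systematic polling error (standard deviation, in points)
TRIALS = 50_000      # the model replays the scenario tens of thousands of times

wins = 0
for _ in range(TRIALS):
    shared_miss = random.gauss(0, POLL_ERROR)    # one correlated error across all states
    electoral_votes = BASE_EV
    for lead, votes in STATES.values():
        if lead + shared_miss + random.gauss(0, 2.0) > 0:   # plus state-level noise
            electoral_votes += votes
    if electoral_votes >= 270:
        wins += 1

print(f"Simulated win probability: {wins / TRIALS:.0%}")
```

The number of iterations is enormous, but the measurement behind them happens only once per election cycle; if the underlying poll margins are systematically off, all 50,000 simulations inherit the same mistake.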

Some say the electorate may simply be too complex to poll with any accuracy.

“This is not a failure of big data. This has nothing to do with data analytics,” said Daniel Nagin, professor of public policy and statistics at Carnegie Mellon University’s Heinz College. “This reflects the fundamental difficulty of getting a representative sample of people in a large, complex society.”

When pollsters read the data wrong, they raised a few hopes and broke a few hearts, but the implications of “data gone bad” (if that’s what this was) can sometimes be far more profound. Take for instance the case of predictive policing.


Predictive policing promises to use historical crime data, sophisticated algorithms and heavy-duty analytics to help police make best use of their resources. When successful, predictive approaches “allow police to work more proactively with limited resources,” but the technique is riddled with vulnerabilities, according to a Rand Corp. report.

As the report notes, “some descriptions of predictive policing make it sound as if the computer can foretell the future.” It can’t. Nor can analytics actually stop crime. “Actual decreases in crime require taking action based on those predictions.”

At its heart, predictive policing promises to expand by mechanical means the same sorts of strategies long used by cops on the beat: Watch for the hot spots, the places where trouble happens, the people typically found nearby. Keep an eye on those. When escalated to the level of big data, however, this approach raises concerns among human rights advocates.

“If 90 percent of rapes are committed by men, should we lock up all men?” asked Nik Rouda, senior analyst at the IT analysis firm Enterprise Strategy Group. The absurd example makes a bigger point: History itself can’t be a basis for crime prevention. “The important part is in finding the balance. Big data analytics are tools. How you interpret them and how you use them can have a serious impact.”

Most concerns around predictive policing have to do with the origins of the data that feeds the analysis. “We assume that crime reports are representative and random, but they can actually reflect institutional biases. Often those crime statistics are the result of where you put the police in the first place,” said William Isaac, a consultant to the Human Rights Data Analysis Group.

If cops concentrate their patrols in a certain neighborhood, that’s where the crime stats will pile up. “When you predict the future based on the past, you are guiding future law enforcement to go to the same places and take the same actions. You perpetuate the existing bias in terms of where and how police take action,” said Nancy La Vigne, director of the Justice Policy Center at the Urban Institute. Analytics will reflect this, more police will be deployed and the cycle perpetuates.
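This feedback loop is easy to reproduce in a toy simulation. The sketch below is an illustration of the dynamic Isaac and La Vigne describe, not drawn from any real deployment: two neighborhoods have identical true offense rates, but the one that starts with more patrols accumulates more recorded crime and therefore keeps the patrols. All parameters are invented.

```python
import random

random.seed(1)

TRUE_OFFENSES_PER_DAY = 10        # identical in both neighborhoods
DETECTION_PER_PATROL = 0.05       # share of offenses recorded per patrol unit
TOTAL_PATROLS = 20

patrols = {"A": 12, "B": 8}       # historical bias: A starts with more patrols
recorded = {"A": 0, "B": 0}       # cumulative recorded crime

for day in range(365):
    for hood in recorded:
        # Both neighborhoods generate the same number of offenses on average.
        offenses = sum(random.random() < 0.5 for _ in range(2 * TRUE_OFFENSES_PER_DAY))
        detect_prob = min(1.0, DETECTION_PER_PATROL * patrols[hood])
        recorded[hood] += sum(random.random() < detect_prob for _ in range(offenses))
    # "Predictive" step: allocate patrols in proportion to recorded crime.
    total = recorded["A"] + recorded["B"] or 1
    patrols["A"] = round(TOTAL_PATROLS * recorded["A"] / total)
    patrols["B"] = TOTAL_PATROLS - patrols["A"]

print(recorded, patrols)
# Despite identical underlying crime, neighborhood A ends the year with roughly
# 50 percent more recorded crime and keeps the larger share of patrols.
```

The initial allocation never gets corrected, because the only evidence the model ever sees is evidence that the allocation itself produced.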

The end result may be a policy that conforms to the data, but generates undesirable social outcomes. “Statistically speaking, there may be crimes that are committed in neighborhoods where minorities are the majority residents. So if you take a very innocent reading of the data, you start stopping and frisking black and Hispanic people,” Toyama said. “As a society, we are not comfortable with running a program that is effectively racist, just because that’s what the data says.”

The issue here is not with the data, some would argue, but rather with how one puts that data to use. Many shudder at a sci-fi vision of “pre-crime,” where police act against an individual based on a statistical model suggesting that person could commit a crime in the future. Stopping short of such scenarios, is it possible to put analytics to work in the service of law enforcement without deepening existing social imbalances? Many say it is.

“If you see a high volume of robbery and assault around transit points, you should station police there for public safety. That just makes sense. If we see violent assault around burglaries, those data analytics should inform where and how law enforcement responds to that,” Howard said. “But those policy decisions can’t be made simply because an engineer somewhere wrote some software and marketed it to a police department.”

Rather, crime data should be used to drive policies beyond policing. “Why do we only see drug arrests in this neighborhood? What are the social conditions and what are we going to do about it?” said Castro. “Are we just going to do more policing? Or are there other things we need to do to solve those problems?”

Maybe predictive policing can work, some suggest, if its implementation goes beyond the mere matter of cops on the beat. “Do you police more, or do you put in more social programs and support? Do you want more drug treatment programs, more jobs programs? Then it starts to become political,” said Rouda. “Do you treat the symptom or treat the cause? That is not a data issue. It’s a policy issue.”

Just as with the election, where pollsters failed to use tools such as social media monitoring and facial sentiment analysis, some say the way to make predictive policing better is to make it bigger — to incorporate more varied data, beyond historical crime statistics.

Scenario: What if big data could find a correlation between paydays, local bar revenues and domestic violence calls? “If you use it to really think about what is happening in these places, if you draw in additional data and you use all that information to better engage businesses and community members — now you are talking data,” La Vigne said.
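A hedged sketch of what that kind of cross-source analysis could look like, using entirely synthetic weekly figures for the pay cycle, bar revenue and calls for service (the correlations are baked into the fake data; the point is only that joining the three sources by week is a small analytic step once the data exists):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Entirely synthetic weekly figures standing in for three separate data sources:
# a pay-cycle calendar, liquor sales receipts, and domestic violence calls for service.
weeks = pd.date_range("2016-01-08", periods=52, freq="W")
payday = np.tile([1, 0], 26)                                   # biweekly paydays
bar_revenue = 40_000 + 8_000 * payday + rng.normal(0, 2_000, 52)
dv_calls = np.round(12 + 4 * payday + bar_revenue / 10_000 + rng.normal(0, 2, 52))

df = pd.DataFrame({"week": weeks, "payday": payday,
                   "bar_revenue": bar_revenue, "dv_calls": dv_calls})

# How strongly do paydays and bar revenue track calls for service?
print(df[["payday", "bar_revenue", "dv_calls"]].corr()["dv_calls"])
```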


If big data can make big mistakes, it’s equally true that small data projects can cause big headaches for government technology professionals. Michigan’s Medicare fumble was nothing on the scale of misreading the national election, but it’s a good example of how even the best-intended data-driven effort must be approached with caution.

In June 2016 the Michigan Department of Technology, Management and Budget adjusted its data systems in order to help certain Health and Human Services functions conform with federal statutes. As a result, some 12,000 people got letters incorrectly stating that their benefits had been cut.

“It was related to the system’s ability to match up data across multiple data sets,” said Linda Pung, the department’s general manager supporting health and human services. “It basically caused a business rule in the system to be changed, unintentionally, which caused those letters to go out.”
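Michigan has not published the specifics of the rule that changed, so the following is a purely hypothetical sketch of the general failure mode: a stricter cross-data-set matching rule quietly reclassifies people whose records do not line up, and the letter-generation logic downstream takes that reclassification at face value.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical illustration only; this is not Michigan's actual system or rule.

@dataclass
class Beneficiary:
    state_id: str
    ssn_last4: str                      # from the state eligibility system
    federal_ssn_last4: Optional[str]    # from a second, federal data set
    eligible: bool

def strict_match(b: Beneficiary) -> bool:
    # New, stricter rule: require a successful match against the federal record.
    return b.federal_ssn_last4 is not None and b.ssn_last4 == b.federal_ssn_last4

def needs_benefits_cut_letter(b: Beneficiary) -> bool:
    # Downstream logic treats a failed match as a loss of eligibility and
    # queues a "benefits cut" letter, even for people who remain eligible.
    return b.eligible and not strict_match(b)

people = [
    Beneficiary("MI-001", "1234", "1234", True),   # records match: no letter
    Beneficiary("MI-002", "5678", None, True),     # federal record missing: wrongly flagged
]
print([p.state_id for p in people if needs_benefits_cut_letter(p)])   # ['MI-002']
```

In a sketch like this, the rule change is small and locally reasonable; the erroneous letters are an emergent side effect of systems trusting one another’s outputs, which is exactly the kind of unintended consequence the testing regimen Pung describes below is meant to catch.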

Some see in this episode a warning to IT leaders about the possible shadow side of data-driven everything. “We have these very large interconnected systems now, and so a small error in one part of the system can trigger an avalanche of errors across multiple systems, something that could not have happened in a paperbound world,” Nagin said.

Others read into this incident a reminder to IT leaders that they must pay even closer heed to the data imperative. “Just about any law or policy or regulation will have a digital component, for good or ill,” Howard said. “Digital services are going to be part of our lives and it is wholly rational to expect government to measure how effective those services are. If they are not building in the capability to extract data — to understand performance, to understand usage, to understand where and how people opt in and drop out of a particular service — they are making a mistake.”

Michigan’s IT team meanwhile said the takeaway here has to do with degrees of diligence. “When you are making changes to a system, when you are trying to integrate data and report on that information, you have to have a really thorough test plan, a thorough process you go through to determine everything you need to test before you release that change,” Pung said. “Then you need to try to find and fix those unintended consequences.”
 
Adam Stone is a contributing writer for Government Technology magazine.