
4 Reasons Data Analytics Endeavours Don't Succeed

Like other types of IT projects, an analytics initiative can fail for a variety of reasons, big and small, but several key ones stand out.

The Chicago Police Department thought it had a foolproof strategy for keeping a lid on violent crime: a heat map of the 400 individuals most likely to break the law. The index of violent individuals was the product of a predictive analytics program that used a mathematical algorithm to sift through crime data. It worked much like the analytics programs Netflix or Amazon use to predict a person’s next movie rental or book purchase.

But the algorithm ran into a firestorm of controversy in late 2013, when a Chicago Tribune article told the story of a man on the list who had no criminal arrests. While the police defended the tool, critics said it was nothing more than racial profiling. They compared it to a bad version of Minority Report, the popular sci-fi film about police who predict crimes before they happen.

Chicago’s experience demonstrates both the promise and limitations of analytics in government. The public sector is already using it at all levels: The U.S. Border Patrol uses it to figure out how best to allocate resources along the border with Mexico. States use it to stop fraud in Medicaid and tax returns. Local governments use analytics to determine which buildings may have code violations, or to predict possible traffic and transit disruptions before they happen.

But despite all the successful implementations of analytics, many such projects fail. According to IT research firm Gartner, more than half of all projects aren’t completed within budget or on time, or they fail to deliver the expected results. Like other types of IT projects, an analytics initiative can fail for a variety of reasons, big and small. But several key reasons stand out.

First, there are misconceptions about analytics. It’s not a technology project that should be run by the IT department, though it will need input from CIOs and their staff to manage the databases and networks that underpin it. Nor is it simply about collecting data. Rather, it’s a way to anticipate future outcomes and support decision-making. That’s why the right stakeholders need to be involved.

Second, analytics projects fail when the data quality is inferior. Bad data produces poor results. A lack of data sharing can also hobble the best-planned analytics project. While there are technical barriers to data sharing, too often the problem is an unwillingness to share between agencies or departments. The result is turf battles that erupt when one agency wants to protect the data it has collected.

Third, states and localities suffer from a talent shortage when it comes to finding people who can successfully run an analytics project in the public sector. The field of analytics is still relatively new, so the pool of skilled analytics experts is shallow. To improve a public service, you need analysts with domain knowledge, says Jennifer Bachner, director of the master of science in government analytics program at Johns Hopkins University. “This is essential to identifying and measuring outcomes that matter.” 

Last, measuring the impact of analytics in government is far more complex than in the private sector. As the Chicago Police Department found out, analytics can lead to messy results. The mathematician who created the algorithm for the heat map of likely criminals said the underlying data carried no racial or other negative bias against minority groups. But that’s not how the results were viewed by others.

Finding a correlation between two sets of data and predicting an outcome works fine in the private sector, but as Bachner points out, government policymakers need to identify where they can intervene in a policy to make it better. “That’s hard to do and requires more substantive knowledge,” she says. “Improving a government program requires policymakers to make changes that lead to desired outcomes. This kind of challenge is about identifying causal relationships, not just correlations.” 
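Bachner's distinction between correlation and causation can be illustrated with a small simulation (the variable names and scenario below are hypothetical, not drawn from the article). A hidden confounder drives both a program input X and an outcome Y, so the two correlate strongly even though changing X would not change Y at all:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

random.seed(0)

# Hidden confounder (e.g. an underlying neighborhood condition) drives
# both the observed program input X and the outcome Y. X itself has no
# causal effect on Y in this simulation.
confounder = [random.gauss(0, 1) for _ in range(2000)]
x = [c + random.gauss(0, 0.3) for c in confounder]       # X tracks the confounder
y = [2 * c + random.gauss(0, 0.3) for c in confounder]   # Y tracks the confounder too

print(f"observed corr(X, Y) = {pearson(x, y):.2f}")      # strong correlation

# A policymaker who intervenes on X (raises it by one unit everywhere)
# sees no movement in Y, because X never caused Y in the first place.
y_after_intervention = y                                  # unchanged by design
print(f"mean(Y) before: {sum(y)/len(y):.2f}, "
      f"after intervening on X: {sum(y_after_intervention)/len(y):.2f}")
```

A private-sector recommender can stop at the strong observed correlation; a policymaker who needs Y to change must first establish that X actually causes it, which is the harder, more knowledge-intensive task Bachner describes.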

This article was originally published on Governing.

With more than 20 years of experience covering state and local government, Tod previously was the editor of Public CIO, e.Republic’s award-winning publication for information technology executives in the public sector. He is now a senior editor for Government Technology and a columnist at Governing magazine.