Keeping Bias Out of Big Data (And Other Key Considerations)

Experts weighed the benefits and potential pitfalls of the aggressively expanding big data movement during a Council on Foreign Relations panel in Washington, D.C.

Big data holds a lot of opportunity for government and the public. We’ve seen it transform transportation systems, schools and public safety. But behind the benefits are inherent risks that need to be addressed as organizations collect, analyze and share ever more data.

As the experts argued, big data is both an opportunity and a potential pitfall. During a Nov. 16 panel discussion hosted by the Council on Foreign Relations in Washington, D.C., industry leaders talked through what they see as the primary considerations for the field.

One of the more glaring concerns in the big data environment is bias. Algorithms are, by design, meant to sort and filter information, which is itself a form of bias, but the wrong kind can lead to negative outcomes, according to Andrew Hilts, executive director of Open Effect, a Canadian nonprofit focused on privacy and security.

Hilts explained that racial bias in algorithms could skew efforts to predict recidivism in the prison system: even where black and white offenders were equally likely to reoffend, a biased algorithm might rate black subjects as a greater risk. The issue of bias, recognized or not, traces directly to the people who write the algorithm. While there's no way to fully eliminate human bias, checks and balances in the algorithm-building process could help reduce what slips through into the public space.

“I think that the notion that technology is inherently neutral needs to be questioned more thoroughly in society, and I think that’s probably sort of the underlying risk that emerges in this big data age,” he said.

His solution: a system that accounts for and monitors bias in the programming process to remove what he calls the “veneer of objectivity.”
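Hilts didn't prescribe a specific mechanism, but one common form such monitoring takes is an audit that compares a model's error rates across demographic groups. Below is a minimal sketch of that idea, assuming a binary risk classifier and labeled group membership; the data, group names and 10 percent tolerance are all hypothetical, chosen only for illustration.

```python
# Illustrative only: a minimal fairness check of the kind Hilts describes,
# comparing a risk model's false positive rates across demographic groups.
# The audit data, group labels and tolerance are hypothetical.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_high_risk, actually_reoffended)."""
    fp = defaultdict(int)   # predicted high risk, did not reoffend
    tn = defaultdict(int)   # predicted low risk, did not reoffend
    for group, predicted, actual in records:
        if not actual:      # only non-reoffenders can be false positives
            if predicted:
                fp[group] += 1
            else:
                tn[group] += 1
    return {g: fp[g] / (fp[g] + tn[g]) for g in fp if fp[g] + tn[g] > 0}

def flag_disparity(rates, tolerance=0.10):
    """Flag the model for human review if group FPRs diverge beyond a tolerance."""
    if max(rates.values()) - min(rates.values()) > tolerance:
        return "review: false positive rates diverge across groups"
    return "ok"

# Hypothetical audit sample: (group, predicted_high_risk, reoffended)
audit = [("A", True, False), ("A", False, False), ("A", False, False),
         ("B", True, False), ("B", True, False), ("B", False, False)]
rates = false_positive_rates(audit)
print(rates, flag_disparity(rates))
```

Run on the toy sample above, group B's false positive rate is double group A's, so the check flags the model for review; the point is that the disparity is measured and surfaced rather than hidden behind the "veneer of objectivity" Hilts warns about.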

Hilts cited examples like Facebook’s recent turmoil surrounding so-called “fake news” and the recommendation algorithms that allowed it to propagate alongside legitimate news reports, as well as accusations that the company allowed racial affinity profiling in its online advertising.

Considerations

Microsoft’s Elizabeth Bruce doesn’t discount some of big data’s inherent issues, but she sees more benefit than risk in the movement. The growing ability to collect and examine data sets, she argues, increases our collective powers of observation, whether in the public or private sector.

“This can be around physical things, when you think about [the Internet of Things], you can now massively track pretty much anything you want at any time," Bruce said. "It also increases our observational power of human behavior in a way that was never before possible."

These abilities, she explained, hold the power to address some of the most complex issues seen today. One example is the Massachusetts Institute of Technology’s effort to examine the intricacies of human heart conditions using EKG results from a broad range of patients. In ordinary clinical practice, most of the data an EKG produces is discarded, and a substantial amount of meaningful information is lost with it, she said.

Under the MIT program, Bruce said, results from a multitude of patients could be analyzed to target abnormalities that might go unrecognized in the context of any one patient's EKG. By expanding the scope, researchers may be able to identify problems tied to particular groups of people.
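The article doesn't describe MIT's actual methods, but the underlying idea of flagging abnormalities against a population baseline can be sketched in a few lines. The sketch below assumes a single scalar feature per patient and a simple z-score test; the feature, cohort values and threshold are invented for illustration, not drawn from the project.

```python
# Illustrative sketch of population-scale screening in the spirit Bruce
# describes: compare each patient's EKG-derived measurement to aggregate
# statistics over many patients. Feature and threshold are hypothetical.
import statistics

def flag_outliers(feature_by_patient, z_threshold=3.0):
    """Return patients whose measurement deviates strongly from the cohort.

    feature_by_patient: dict of patient_id -> scalar measurement
    (e.g., a hypothetical interval duration in milliseconds).
    """
    values = list(feature_by_patient.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return {pid: round((v - mean) / stdev, 2)
            for pid, v in feature_by_patient.items()
            if abs(v - mean) > z_threshold * stdev}

# Hypothetical cohort: one patient's measurement sits far outside the norm,
# something that might look unremarkable without the population context.
cohort = {"p01": 400, "p02": 405, "p03": 398, "p04": 410, "p05": 402,
          "p06": 396, "p07": 408, "p08": 401, "p09": 404, "p10": 520}
print(flag_outliers(cohort, z_threshold=2.0))
```

The toy run flags only patient "p10", whose value is unremarkable in isolation but nearly three standard deviations from the cohort mean, which is the kind of signal Bruce suggests only emerges once data is aggregated across many patients.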

Moving Forward 

Amid the push and pull occurring in the real world, technological progress appears to have outpaced policy. Standards around privacy, bias and related issues will need to catch up.

Dan Chenok, a senior fellow with the IBM Center for the Business of Government, argued for standards that evolve with the data practices and what he calls “privacy by design.”

“I think the concept of building privacy elements into the design of systems and the design of data flows is something that companies are working on and that governments are learning from the private sector how to do better,” he said.

In addition, Chenok argued that privacy principles need to be updated to keep pace with developing technology. One such approach could be ongoing notifications of when data is being used and by whom, rather than a one-time terms-and-conditions agreement.
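Chenok didn't specify an implementation, but his notification idea can be sketched as an audit log that records every access to a person's data and renders it back to them as plain-language notices. Everything below, the class names, fields and message format, is an assumption made for illustration.

```python
# Illustrative sketch of Chenok's notification idea: instead of a one-time
# consent form, every use of a subject's data is logged and reported back.
# The DataStore class, field names and notice format are all hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessEvent:
    accessor: str        # who used the data
    purpose: str         # why it was used
    timestamp: datetime

@dataclass
class DataStore:
    records: dict = field(default_factory=dict)
    access_log: dict = field(default_factory=dict)  # subject -> [AccessEvent]

    def read(self, subject: str, accessor: str, purpose: str):
        """Every read is recorded so the subject can later be notified."""
        event = AccessEvent(accessor, purpose, datetime.now(timezone.utc))
        self.access_log.setdefault(subject, []).append(event)
        return self.records.get(subject)

    def notify(self, subject: str) -> list:
        """Render the subject's access history as human-readable notices."""
        return [f"{e.timestamp:%Y-%m-%d}: {e.accessor} used your data for {e.purpose}"
                for e in self.access_log.get(subject, [])]

store = DataStore(records={"alice": {"zip": "20001"}})
store.read("alice", accessor="transit-agency", purpose="route planning")
print("\n".join(store.notify("alice")))
```

The design choice worth noting is that logging happens inside the read path itself rather than as an afterthought, which is the essence of the "privacy by design" principle Chenok describes: the system cannot use the data without creating the record that enables notification.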

The benefits of working through the policy and best practices portion of the big data conversation are fairly clear: better insights, faster.

“I think the benefits [of big data] are clearly significant in sort of how data flows have evolved over the last 30 years,” Chenok said. “The concept of moving toward this changed notion of big data and enabling open interface, it also lends itself to some of the evolving systems of artificial intelligence, of cognitive computing. … It can enable human decision-making. It’s not something to replace humans, but it can look across vast stores of data and help people make better decisions.”

Eyragon Eidam is the web editor for Government Technology magazine, after previously serving as assistant news editor and covering such topics as legislation, social media and public safety. He can be reached at eeidam@erepublic.com.