What Do Public Health Officials Want From Big Data?

Health Datapalooza panelists describe work-arounds they have developed to get the granular, neighborhood-level data they need.

One of the central themes of the fifth annual Health Datapalooza conference in Washington, D.C., this week was how innovative approaches to data can help local public health agencies better target their limited resources. Two panels of public health experts described creative solutions they have developed to work around the fact that data coming from federal and state sources is often years old and too coarse geographically to be useful.

Brian Castrucci, program director for the de Beaumont Foundation, which seeks to catalyze new thinking about public health innovation, said his organization has surveyed local public health officials in 18 cities and found a strong desire for more local data. “They need neighborhood-level data to inform policy,” he said. “It has to have the right level of geographic aggregation.” The data they get now to help target chronic conditions such as diabetes is not granular enough, he added. It doesn’t help them to have state-level data that is two years old. 

Castrucci, who previously worked in the departments of health for the state of Georgia and the city of Philadelphia, noted that every medical visit starts with taking the patient’s weight. “The natural history of the obesity epidemic lives in electronic medical records of health systems,” he said, “but we can’t access it.”

But Castrucci introduced two city public health officials he called “rock stars” at working around these limitations. The first, Bechara Choucair, M.D., commissioner of the Chicago Department of Public Health, has turned to novel approaches involving social media and predictive analytics to stretch his department’s limited resources.

“Our focus is on how we can get real-time data and not ancient data. This is all about strategy to improve population health,” Choucair said. “That is why we are using health IT — to see if we can move the needle on our top 12 health priorities in the city of Chicago, and not because it is cool or fun to build apps or because some foundation is willing to give us money.”

One of the projects involves food protection and safety. Chicago has 16,000 restaurants and only 32 safety inspectors. One day Choucair noticed a tweet from someone who said they got sick after eating at a specific restaurant. He asked an inspector to check it out, but he also started using the social media management software HootSuite to search for other tweets about people getting sick in Chicago after eating out. He then worked with volunteer civic app developers at a hackathon to create an app that automates searching Twitter and routing messages related to food-borne illnesses to inspectors, who follow up and encourage the Twitter user to file a complaint through the city’s Open311 system. The app, FoodBorneCHI, has been up and running for a year.
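The article does not include the app’s code, but the first step Choucair describes (scanning tweet text for illness-related language so matches can be routed to an inspector) is simple to sketch. The snippet below is a minimal, hypothetical illustration in Python, not FoodBorneCHI itself; the keyword list and tweet data are made up.

```python
# Hypothetical sketch, not the FoodBorneCHI code: flag tweets whose text
# suggests a food-borne illness so an inspector can follow up and invite
# the user to file a complaint through Open311.
from dataclasses import dataclass

ILLNESS_KEYWORDS = ("food poisoning", "got sick", "stomach ache", "vomiting")

@dataclass
class Tweet:
    user: str
    text: str

def flag_foodborne_tweets(tweets):
    """Return tweets whose text mentions an illness keyword (case-insensitive)."""
    flagged = []
    for tweet in tweets:
        text = tweet.text.lower()
        if any(keyword in text for keyword in ILLNESS_KEYWORDS):
            flagged.append(tweet)
    return flagged

if __name__ == "__main__":
    sample = [
        Tweet("@diner1", "Great pizza downtown last night!"),
        Tweet("@diner2", "Pretty sure I got food poisoning at that taco place"),
    ]
    for t in flag_foodborne_tweets(sample):
        print(f"Route to inspector: {t.user}: {t.text}")
```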

Not satisfied with that, Choucair also sought a way to better allocate the time of inspectors rather than just randomly assigning them.

He reached out to the insurance company Allstate for its expertise in helping the city predict which restaurants are going to fail inspections. They developed an algorithm based on 20 variables and came up with a list of the 500 restaurants most likely to fail. They then ran a controlled study comparing those 500 with a random 500 restaurants. In the first round of piloting, they found 5-7 percent more violations in the targeted group. “That is good,” he said, “but we are trying to play around with the algorithm to get even better.” And there are other problems to which the city could apply predictive modeling, such as lead poisoning or supporting vulnerable populations in an emergency. “How do we make sure we target the folks who are most vulnerable at a time of need?”
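The model itself and its 20 variables are not spelled out, but the underlying approach (score each restaurant on its predicted probability of failing, then send inspectors to the highest-risk ones first) can be sketched with any off-the-shelf classifier. The toy Python example below uses scikit-learn with made-up features purely for illustration; it is not the Chicago/Allstate algorithm.

```python
# Hypothetical sketch of risk-based inspection targeting, not the actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: rows are past inspections; the three columns stand in for
# hypothetical features such as days since last inspection, prior violation
# count and complaint count. Labels mark whether the inspection was failed.
X_train = rng.random((200, 3))
y_train = (X_train[:, 1] + 0.5 * X_train[:, 2] + rng.normal(0, 0.2, 200) > 0.8).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Score the current restaurants and inspect the highest-risk ones first.
X_current = rng.random((500, 3))
risk = model.predict_proba(X_current)[:, 1]   # predicted probability of failing
priority_order = np.argsort(risk)[::-1]       # riskiest restaurants first
print("Top 5 restaurants to inspect:", priority_order[:5])
```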

Barbara Ferrer, executive director of the Boston Public Health Commission, said one of the biggest issues for her department right now is dealing with an opioid overdose epidemic. “The data we get from the state is three years old, and from the medical examiner confirming deaths is two years old,” she said. “I am looking for real-time data. We need it to influence programming dollars.”

Ferrer realized that overdoses almost always lead to someone calling 911, and her office has that data. So using EMS runs, she now gets weekly reports on narcotic-related illnesses. Her team can document illnesses compared to the same time period a year ago and map the results by city section.
“If we see unusual activity, we can deploy outreach teams where we see more overdose activity or a clustering of deaths,” she explained. Outreach teams have been successful, not in diminishing overdoses but in reducing the number of deaths, she said.
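Boston’s actual reporting pipeline is not described, but the weekly roll-up Ferrer outlines (count narcotic-related EMS runs by neighborhood and week, then compare each count with the same week a year earlier) can be sketched in a few lines of Python. The records and neighborhood names below are hypothetical.

```python
# Hypothetical sketch of a weekly EMS roll-up, not Boston's reporting code.
from collections import Counter
from datetime import date

# Made-up EMS records: (date of run, neighborhood, narcotic-related?)
runs = [
    (date(2014, 5, 5), "Dorchester", True),
    (date(2014, 5, 6), "Dorchester", True),
    (date(2014, 5, 7), "Roxbury", True),
    (date(2013, 5, 6), "Dorchester", True),
]

def weekly_counts(records):
    """Map (ISO year, ISO week, neighborhood) -> count of narcotic-related runs."""
    counts = Counter()
    for run_date, neighborhood, narcotic in records:
        if narcotic:
            year, week, _ = run_date.isocalendar()
            counts[(year, week, neighborhood)] += 1
    return counts

counts = weekly_counts(runs)
for (year, week, hood), n in sorted(counts.items()):
    prior = counts.get((year - 1, week, hood), 0)
    print(f"{hood}, week {week} of {year}: {n} runs (same week last year: {prior})")
```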

One problem, Ferrer added, is that there isn’t agreement on the need for local public health departments to have good access to data. “There is agreement that data is important. We have seen a significant investment in state capacity to build bridges with electronic medical records that would create a statewide system, but no subsequent investment in local public health. That tension has to get addressed.” There is no reason those EMR feeds couldn’t flow to the state and to local health departments simultaneously if there were agreement on which data should be sent where. As it stands, Boston has to build out its own systems and pass ordinances requiring data submission. “It is wasteful but better than no data access at all,” she said, “or waiting for others to decide when we are going to get access.”

Alan Tomines, M.D., director of Child Health and Disability Prevention for the Los Angeles County Department of Public Health, started his presentation by apologizing if he sounded like a “Debbie Downer” compared with others detailing their innovative approaches, but said he wanted to describe the many challenges Los Angeles County faces in managing and analyzing data. Much of the data coming to the department is still paper-based and requires considerable manual effort to collect. Access to electronic data remains limited, and the electronic data the department does receive is non-standardized, so it takes substantial effort to clean up.

“There is still much uncertainty about the utility of electronic clinical data and how representative it is, and whether we can make population-based inferences from it,” Tomines said. “You might think the prospect of getting electronic data is a panacea for us,” he added. But the county lacks the resources to expand either electronic lab reporting or syndromic surveillance, and there is no significant health information exchange in the region to help with that expansion, he noted.

Even if L.A. County had standardized data in place, it would need more staffing and resources to become a “big data” organization, Tomines said. “We don’t have the analytical workforce or infrastructure. We need to think about what types of electronic data might be useful in supporting our core functions and work with external providers of that data to develop policies to help capture data that is not part of clinical workflow.” He added that if there is any further federal funding to support health information exchange, it should focus on supporting public health.

Chesley Richards, deputy director for Public Health Scientific Services at the Centers for Disease Control and Prevention, finished off one panel by noting that local public health agencies have to understand that, going forward, they won’t own all the data they need; it will come from transportation, housing, income and other data sets. “We have to get comfortable not owning the data and using the data other people own,” he said. “We have to figure out those data use relationships.”