Government Technology has covered the many ways public-sector agencies are using predictive analytics to become smarter about services, but one area with room for growth is managing emergencies.
During a White House Innovation Day in July, Adam Thiel, deputy secretary of public safety and homeland security for Virginia, discussed the opportunities — and challenges — for using predictive analytics to improve disaster response and recovery. We caught up with him to get more details on the benefits of incorporating data into emergency management.
In emergency management and disaster response, it depends on how you define predictive analytics. Weather forecasting to some extent is an example: We move assets around and we make decisions based on weather forecasts all the time when we see hurricanes or other potentially severe weather events approaching.
There’s always uncertainty. Past performance is no guarantee of future results, and there are always questions about forecast fidelity: what actually happened versus what was predicted. That’s something for people to keep in mind and to be explicit about, and the Weather Service increasingly is, giving us the probabilities of certain severe weather events impacting certain areas.
When a lot of folks hear “predictive analytics,” they think about predicting or forecasting the probability of a certain event or incident occurring at X place at Y time. It can be very difficult to know with a high degree of certainty exactly where and when a particular event will occur. But once an incident occurs, predictive analytics holds a lot of promise for helping us identify and forecast the probabilities of that event’s secondary effects.
For example, the probability of an improvised explosive device detonating at the corner of Walk and Don’t Walk can be very difficult to predict — we don’t have a lot of past data on those types of events. But once that event happens, predictive analytics holds a lot of promise for helping us understand the cascading effects: What are the traffic impacts, and what’s going to happen if that device affects the surrounding infrastructure, such as adjacent water or power systems, even medical facilities and schools?
It’s really a matter of sitting down and thinking through all the potential variables, identifying the relationships between them, mapping out conceptually how they interact, and making sure that as many of them as possible are accounted for in the prediction algorithms. It’s also understanding, and being explicit about, the variables that are not accounted for and how they may or may not affect the ultimate outcome or the uncertainty around the predictions.
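The approach Thiel describes can be sketched in a few lines of code: pick the variables, make their relationships explicit as conditional probabilities, propagate them to forecast a cascading effect, and state plainly what the model leaves out. This is a toy illustration only; every variable name and number below is a hypothetical placeholder, not a real planning figure or any agency's actual model.

```python
# Minimal sketch of variable-mapping for cascading effects.
# Hypothetical scenario: after an incident, forecast the chance a
# nearby hospital is impacted, modeled through one intermediate
# variable (a power outage). All probabilities are assumed.

P_OUTAGE = 0.30                  # P(power outage | incident) -- assumed
P_IMPACT_GIVEN_OUTAGE = 0.60     # P(hospital impact | outage) -- assumed
P_IMPACT_GIVEN_NO_OUTAGE = 0.05  # P(hospital impact | no outage) -- assumed

def p_hospital_impact() -> float:
    """Law of total probability over the one modeled variable (power).

    Being explicit about what is NOT modeled (traffic, water,
    schools) is part of the method: this estimate is conditional
    on those omissions.
    """
    return (P_OUTAGE * P_IMPACT_GIVEN_OUTAGE
            + (1 - P_OUTAGE) * P_IMPACT_GIVEN_NO_OUTAGE)

if __name__ == "__main__":
    print(f"P(hospital impact | incident) = {p_hospital_impact():.3f}")
```

Real emergency-management models would chain many more variables this way (traffic, water, communications), but the structure is the same: each accounted-for variable adds a conditional term, and each unaccounted-for variable widens the honest uncertainty around the answer.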