The field of disaster robotics has been studied for the last two decades. Robin Murphy has been there from the start and explains where it’s headed and what you need to know.
Robin Murphy is a leader in the field of disaster robotics, having started working on the topic in 1995 and having studied how mobile robot technologies were used in 46 emergency responses worldwide. She has developed robots that have aided responses to numerous emergencies, including 9/11 and Hurricane Katrina. As director of the Center for Robot-Assisted Search and Rescue at Texas A&M University, Murphy works to advance the technology while also traveling to disasters when called upon to help agencies determine how robots can aid the response. The center's first deployment was in response to 9/11, which was also the first reported use of a robot during an emergency response. Murphy is also an Institute of Electrical and Electronics Engineers (IEEE) fellow.
Emergency Management: Since 9/11, how have you seen the use of robots in disasters change?
Robin Murphy: We started out in 2001, and up until 2005 you didn't see anything but ground robots. Everything was very ground-centric, and I think that reflected the state of the technology. For years we had bomb squad robots, which were being made smaller and smaller for military tactical operations, so that gave responders a tool that was pretty easy to use. Starting in 2005, we saw the first use of small unmanned aerial vehicles, which were being developed primarily for the military market, and those were very useful. They have really come up; in fact, since 2011, I've found only one disaster that didn't use an unmanned aerial vehicle, and that was the South Korea ferry, where they used an underwater vehicle. So ground robots dominated until about 2005, and then we started shifting toward unmanned aerial vehicles. In about 2007, it became much more commonplace to see underwater vehicles being used. Then starting in about 2011, if you're an agency responding to a disaster and you haven't figured out a way to use a small unmanned aerial system, it's kind of surprising.
EM: Is one of the issues that people are waiting for FAA regulations to use UAVs?
RM: Every single disaster since about 2011, and definitely since 2012, among the 46 disasters we've kept tabs on, has used unmanned aerial systems, including the ones here in the United States. I would not say the adoption problem is the FAA regulations. It takes very little time to get an emergency COA [certificate of authorization]. It does take time to get the paperwork done in advance to fly under a regular COA, but the FAA has granted jurisdictional COAs. The emergency COAs take a very short period of time; it's a matter of knowing the paperwork, like with any new technology.
The deterrent to adoption seems to have been the lack of money to purchase them outright. They're basically computers, and you know how fast cellphone and computer technology changes; you wouldn't expect to use a 10-year-old computer, so you wouldn't buy these the way you buy big equipment. We're suggesting that agencies look at plans that allow them to lease the technology. Also, because it's a new technology, you don't know what it means in terms of training and how it's going to be integrated, and leasing means agencies don't have to recoup the training and maintenance costs right off the bat.
EM: What are you currently working on?
RM: I work mostly on the human factors side: how people actually use these. I am not worried about whether a UAV is going to fall out of the sky or a ground robot's wheels will stop turning. In my book, Disaster Robotics, I go back over 34 disasters, a number of which I was at. If you look at the data that's available, there were 13 terminal failures where the robots failed for some reason and that caused the mission to be aborted. In about 51 percent of the cases it was human error. When I go back and analyze that, I see that it's human error, but it was the designer's: the designer didn't give the user an interface with the right type of information to make a different or better decision. You can only see what you can see.
We're also very interested in how these technologies change the way emergency response works. What we saw at the Washington state mudslide was that everybody was thinking, "these UAVs will be useful for ESF [emergency support function] 9 for urban search and rescue," but actually, in that particular case, a mudslide, they were more useful for public safety, and you can start thinking about ESF 14 and recovery. [Questions include:] How are you going to share that information? How are you going to do that without creating a data avalanche that overwhelms the decision-makers who need to share, plan and interact with each other? And if I want to use this robot and you want to use it and we both want to point it in different directions, how do we handle that? So how do you get interfaces that let people interact in real time and then process the data, share it and work together?
We've got a group of students who have put together what we call the Skywriter interface. It lets somebody with a tablet, laptop or mobile phone see what a UAV or robot is seeing and tell the operator or system what they want to do: circle a region or draw an arrow to indicate where to go, or trace a path to say "follow my finger and track this."
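As a rough illustration of the kind of translation a Skywriter-style interface performs, here is a minimal sketch in Python. The actual Skywriter code is not described in detail here, so the gesture names, data structures and command format below are hypothetical, invented only to show the idea of mapping drawn annotations to high-level vehicle requests.

```python
# Hypothetical sketch of a Skywriter-style gesture-to-command layer.
# Gesture kinds and the command dictionary format are illustrative,
# not the actual Skywriter API.
from dataclasses import dataclass


@dataclass
class Gesture:
    kind: str    # "circle", "arrow", or "trace"
    points: list # (x, y) screen coordinates drawn by the user


def gesture_to_command(g: Gesture) -> dict:
    """Translate a drawn annotation into a high-level UAV request."""
    if g.kind == "circle":
        # Circling a region of interest: orbit its centroid.
        cx = sum(p[0] for p in g.points) / len(g.points)
        cy = sum(p[1] for p in g.points) / len(g.points)
        return {"action": "orbit", "target": (cx, cy)}
    if g.kind == "arrow":
        # An arrow: fly toward the tip of the stroke.
        return {"action": "goto", "target": g.points[-1]}
    if g.kind == "trace":
        # "Follow my finger": track the full drawn path.
        return {"action": "follow_path", "waypoints": g.points}
    raise ValueError(f"unknown gesture: {g.kind}")
```

The point of such a layer is that the person annotating the video never issues low-level flight commands; the operator or autopilot decides how to satisfy the request, which matches the side-by-side workflow Murphy describes.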
EM: When you deploy to a disaster, what’s your role there?
RM: What we've found is that most responders prefer to work with us side by side: we'll drive, and you tell us what to do. We also do formal studies, and what we've found in looking at the video and sensor data coming from robots is that two heads are nine times better than one. Having a team work together really reduces the cognitive load; one person will catch something the other didn't, and it adds a vast improvement to performance. … With that said, I would love to be out of business; I would be just as happy for groups to have robots on their own. I would like the data, though; I love learning from the practitioners what's working and what's not.
EM: Have you been working on anything in response to public health needs like for the Ebola response?
RM: We find that in the Ebola response a lot of people are thinking about clinical applications, like replacing the nurse. Nurses and doctors are hard to replace and duplicate; robots are rarely cost-effective at replacing what humans do. They're often better at giving you a capability you didn't have before. So in this case, rather than looking at clinical needs, we've looked at logistical needs and the fact that a lot of the people involved aren't really doing health work: they're cleaning up messes, hauling contaminated sheets, trying to move people around. Having one person instead of four doing that begins to be more of what the military calls a "force multiplier" and becomes much more efficient. Robots that can do things like that already exist. Then there's general reconnaissance: How long is the line outside? What's going on in the villages in the rainforest; do they seem empty? Is there overturned dirt that may indicate graves? That can indicate what's going on.
We’re also looking at clinical but that’s going to be much more specialized. We like to work with the practitioners and find out what’s going to be the most bang for the buck. If there’s one thing to do to make your life easier, what would that be?
EM: Looking to the future, where do you see disaster robotics headed in the next five to 10 years?
RM: There’s that idea of adoption, which will hopefully continue to accelerate.
In the future, for the new technology, I expect to see three things: Better software on what we call emergency informatics; it’s how you share the data and how you visualize it. In ground robots, I am so excited at work at looking at burrowing robots. The big value in most big building collapses lies in the smaller the better, what do you do when there’s not an obvious void and can you get something to literally snake and nudge and worm its way through there. There are some animals that do that — there’s a sand lizard and types of snakes that navigate in the ground — so we’re doing some work with Georgia Tech and Carnegie Mellon on that. There are also some great advances being made in manipulation. Initially I would characterize the first decade of robots as having been all about allowing the responders to see at a distance, but now we’re seeing a shift. We can see at a distance but now we would like to poke things, we would like to move them over, we would like to drop off things. So we need to act at a distance and not just see at a distance. There’s some advances in robot manipulation that are coming up and are very exciting and we’ll be incorporating those into future work.