Editor’s Note: This is the first in a series of interviews with industry thought leaders on the future of technology. In our next issue, we’ll talk to Stuart McKee, Microsoft’s CTO of state and local government, about the future of productivity in the public sector and how it will shape the next wave of devices.
Soon, police officers will be surrounded by a web of connectivity linking them to dispatchers and information sources. Instead of traditional radios, they’ll wear devices that let them communicate via voice, data and video. Using sophisticated analytics, these devices will be smart enough to give officers only the information they need — alerting them, for instance, that someone’s approaching from behind — while filtering out nonessential data. Innovative display technology will deliver this data without distracting officers from the real-life environment around them.
That scenario is envisioned by Motorola Solutions CTO Paul Steinberg, who recently spoke to Public CIO about the future of mobile technology for public safety agencies. Steinberg is responsible for the company’s technology strategy and vision. He’s also a member of the FCC’s Technical Advisory Council and served on the FCC’s Technical Advisory Board for First Responder Interoperability.
Here, he discussed user trends that will reshape what mobile devices look like and how they work. He also talked about technical components — from new display and user interfaces to better energy management — that’ll make it all possible.
One key trend is that commercial technology is shaping the expectations of professional users. One very large metropolitan police force told me that 50 percent of their officers on the street are age 30 or younger. So, of course, they are influenced by contemporary technology. Number two, hands-free operation is really important, and that is going to directly influence what we do with mobile technology. The third thing is what I would call device or radio or environmental convergence. Today a police officer might have a land mobile radio and also some kind of a data device. The confluence or the intersection between those two is getting more and more profound.
The importance of hands-free operation is pushing us toward wearable technology. Tablets and smartphones require you to look at them and interact with them. They offer voice recognition, and that is a small step, but when I say wearable, I am talking about the real input and output.
Photo: Motorola Solutions CTO Paul Steinberg. Photo by Steve Johnson
I think the device becomes a platform that offers intelligence not just interaction. Take Apple’s Siri application as an example: That platform has information about the context in which it exists. So it knows my location, it knows about the environment around me, it may know the path that I traversed to get there and it may know things like temperature. Take those things and juxtapose them with the historical information it knows about me and now the challenge is: How do we reduce all the data that is potentially available into information and knowledge that is usable in real time?
I think that the mobile platform becomes an analytics engine that synthesizes not just the Siri-style Web-based lookup, but it is much more contextually aware and much more able to bring the right information to the right person at the right time.
So what does that mean to a police officer? Police officers often tell me that a device like Motorola's LEX 700 is wonderful — it's rugged, it is clearly a broadband device and has a great user interface — but there are so many things in there that you could show me at any given time. If you show me more than two at once, it's not useful. The problem is, how do you know which two things are relevant at any given time? For a police officer, what's relevant changes depending on whether he's writing a traffic ticket or responding to a fellow police officer who is in trouble. So it is not just a simple query; it's analytics to synthesize what is important and provide that information through the user interface that is best adapted to the particular situation.
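The "which two things" problem Steinberg describes can be sketched as a context-weighted ranking. The item names, tags and weights below are invented for illustration; Motorola has not described its actual scoring model.

```python
# Hypothetical sketch: rank available information items by relevance to the
# officer's current situation and surface only the top two.

def top_two(items, context):
    """Return the two items most relevant to the active situation tags."""
    def score(item):
        # An item scores higher when its weights match the active context.
        return sum(item["weights"].get(tag, 0.0) for tag in context)
    return sorted(items, key=score, reverse=True)[:2]

items = [
    {"name": "vehicle registration", "weights": {"traffic_stop": 0.9}},
    {"name": "backup unit locations", "weights": {"officer_down": 1.0}},
    {"name": "citation form",         "weights": {"traffic_stop": 0.7}},
    {"name": "suspect description",   "weights": {"officer_down": 0.6}},
]

# During a traffic stop, the registration and citation form surface;
# during an officer-down call, backup locations would rank first instead.
print([i["name"] for i in top_two(items, {"traffic_stop"})])
# → ['vehicle registration', 'citation form']
```

The point of the sketch is that the selection is driven by situation, not by a fixed menu: change the context set and the same item pool yields a different top two.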
There will be a lot more voice interaction and more going on around gesture interaction. Let’s face it, the keyboard isn’t really so good on mobile devices, so that is obviously not the way of the future; it’s the next barrier to take down. On the enterprise side, we have invested in something called a ring scanner, where the scan engine is worn on the fingers and the computer that it’s driving is worn around the wrist. We developed it with one of the large transportation logistics companies. It’s the kind of thing that you are going to see a lot more of.
We also are thinking of embedding display technology — you may be familiar with Google Glass, something like that but much more robust and appropriate for the environments that we serve — that would let police officers or firefighters always be looking up. They would remain situationally aware but have their reality augmented with the technology.
The key message is that the technology-to-human barrier will continue to go down. As the technology gets smarter and as the user interface improves, a mobile device becomes much more an extension of the body rather than a separate piece of equipment. That is the long view of where we are trying to take this thing.
People are going to want to wear devices and have them be as unobtrusive as possible. Obviously display technology and analytics will influence that. But energy and batteries will be other key components. If these devices are going to be unobtrusive and wearable, they need to be small and compact.
One very big thing that is driving battery technology is the growth of hybrid and all-electric vehicles. That is really pushing battery technology and it is making the economics and the investments that companies would make in battery technology an entirely different game. We don’t invent battery chemistry here, but we invent battery management systems. And it’s not infeasible that by the end of the decade there could be technologies that double the energy-to-volume ratio for batteries. That means I can get the same amount of energy from a battery half the size. Those sorts of things start to make a huge difference in terms of the wearability of devices.
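The arithmetic behind Steinberg's claim is straightforward. The energy requirement and density figure below are illustrative assumptions, not Motorola's numbers.

```python
# Back-of-envelope check: doubling a battery's energy-to-volume ratio
# lets you hold energy constant while halving volume.

energy_wh = 20.0          # energy a wearable might need per shift (assumed)
density_today = 250.0     # Wh per liter, a rough figure for current cells
density_future = 2 * density_today  # the hypothesized end-of-decade doubling

vol_today = energy_wh / density_today    # 0.08 L
vol_future = energy_wh / density_future  # 0.04 L

print(vol_today, vol_future)  # same energy, half the size
```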
The President’s Council of Advisors on Science and Technology has recommended that spectrum policy should shift from dedicated spectrum toward shared spectrum — meaning that instead of dedicating the spectrum, you make the spectrum publicly accessible and create an access mechanism so that it can be intelligently shared.
You can start to see the technology itself with things like cognitive radio or software-defined radio. Increasingly, these devices will be able to find spectrum that is available, adapt to it, use it and then hop off. So connectivity gets to be a lot more efficient and resilient. Over the near term, you will see devices that have many bands of capability and/or many different methods of access within them.
The next trick becomes allowing them to operate on one or more bands concurrently so that you can listen on one band while talking on another, for example. Then if you could get a device that is actually capable of talking on two different pieces of spectrum — maybe even with different technologies concurrently active — multi-networking comes into play. It becomes a situation where the device and the network can make decisions about which path to use depending on which is the most economical or which offers the best performance for what I need to do.
In the future, the police vehicle is a virtual partner. So the pieces that need to come together look something like this: There’s a personal area network around the officer that connects all the devices that he or she is carrying or wearing. Around the vehicle itself, there’s a vehicle or incident area network. If you step back, there is wireless infrastructure surrounding all of that. Hopefully the new FirstNet network is a part of it, but there also are carrier assets and potentially local assets that are available to connect the people on the incident scene.
The police vehicle is equipped with 360-degree video cameras, so it can constantly watch the scene. And it has a digital video recorder, so it is recording everything that happens. It has a powerful computer in it, and it has a broadband connection back to command and control.
As the officer pulls over a driver, the police vehicle can immediately read the license plate from the stopped vehicle and run a real-time license plate check. It also can recognize the color and type of vehicle that is stopped, and determine if the vehicle description matches the license plate. The same correlation is going on with registration information. All of that information can be automatically entered in a citation; the officer doesn’t have to write anything.
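The correlation step described above — checking the observed vehicle against the registration record the plate lookup returns — can be sketched as a simple comparison. The data and field names are invented for illustration.

```python
# Illustrative sketch: flag a mismatch between what the vehicle's camera
# sees and what the real-time plate lookup says is registered.

def plate_matches(observed, record):
    """True when the observed vehicle matches the registration record."""
    return (observed["color"] == record["color"]
            and observed["type"] == record["type"])

observed = {"plate": "ABC123", "color": "blue", "type": "sedan"}
record   = {"plate": "ABC123", "color": "red",  "type": "sedan"}  # lookup result

if not plate_matches(observed, record):
    print("ALERT: vehicle description does not match registration")
```

A mismatch like this (a blue sedan carrying a plate registered to a red one) is exactly the kind of correlation that would be surfaced to the officer automatically rather than left to a manual check.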
The police officer is wearing what looks like a pair of sunglasses that are equipped with a forward-facing camera to capture video of the scene and they also deliver augmented reality to the officer. So [officers] can concentrate on what’s happening in front of them, but they also are getting additional information about the scene and the person they stopped.
Someone may be in the backseat out of the officer’s sight line, or an accomplice may pull in behind the police car and approach the officer from behind. But the police vehicle, with its 360-degree video, is looking for motion and other things that shouldn’t be happening. It can alert the police officer of the potential danger.
In addition, officers in the field increasingly will be able to plug into information from city assets — like video surveillance cameras, biometric sensors, motion sensors — that will give them even more information about what’s going on at the scene.