For five minutes, I was a self-driving car. Thanks to Moovel Lab, the R&D arm of German automaker Daimler’s Moovel Group, I experienced what it's like to drive by sensor at SXSW in Austin.
Of course, when I received the invitation, I was immediately intrigued, but trying out the experience gave me a new perspective on the technology behind self-driving vehicles.
The self-described purpose of this Moovel Lab experience is to create “a prototype for empathy, creating situations in which the driver and nearby bystanders become part of a conversation about responsibility, fascination, and perhaps, hope for a world that is moving towards autonomous mobility.”
The self-driving experience involved riding headfirst and head down (yes, your head is theoretically the bumper) in a vehicle with a frame slightly smaller than a golf cart. With its angular, chunky design, the open-cab vehicle looks like it could have been built during World War II, except for the onboard technology. It is equipped with manual driving controls and a variety of sensors that emulate the eyes of today's self-driving cars.
To experience what a self-driving car sees, you must wear a virtual reality headset as you maneuver the vehicle around, which sounded much easier than it actually was. After putting on the headset and orienting myself with the controls, I quickly realized that my perspective on reality was completely flipped.
Even with all of the overlaid data and grids, my orientation with the environment around me was completely different. Detecting where objects were and their exact proximity became much more challenging. At one point I went over a bump that on my VR headset appeared to be an obstacle the size of a mountain, but afterward, I realized it was only a few inches tall.
One of the interesting aspects of the experience was that the color of the overlaid lines would change based on the objects the sensors detected. If there was a car in my field of view, the lines would turn blue. If there were no obstructions, they would stay green. I could barely tell what I was looking at, but the system's image recognition was far better than my own, leveraging the stream of sensor data it collected.
You can ride along with me in this video of my test drive as a driverless car.
I’ve written about self-driving cars and autonomous technology for years, but being able to experience what they see left me with a couple of new thoughts on how we need to prepare for them.
1. We need to optimize new and existing infrastructure for the way self-driving cars see — Although self-driving technology has advanced significantly, our built environments are still not optimized for their continued emergence. This is even more critical when you think about the rise of self-driving taxis and delivery vehicles that will have to navigate complex streets and obstacles in city centers.
2. The convergence of sensors is making self-driving cars more efficient, but there are limitations — As I watched the colors change on my VR headset each time the sensors spotted and identified different objects, it became clear that the technology was better than I was at image detection, especially at great distances, but communicating proximity was another story. Understanding the true advantages and limitations of the technology is important as agencies look for where to start with self-driving pilots.
It’s encouraging to see cities across the world continue to experiment with self-driving technology, but this experience made it clear that optimizing our transportation systems for its emergence requires a new perspective that goes beyond setting policies and understanding the trends. You may just need to become a self-driving car for five minutes in order to start.