Data, Public Perception and the Tesla Crash: Exploring Options for the Self-Driving Car Industry

To put self-driving cars on the streets, automakers must first prove their technology is safe, speakers at an automated driving convention argued.

SAN FRANCISCO — It was mentioned only a couple of times, yet the specter of Tesla Motors’ first fatality in a car operating in self-driving mode hung over the first full day of discussion at the 2016 Automated Driving Symposium.

Parked on the street outside the convention was a white truck displaying a sign that read “Tesla don’t hit me!” The truck was a smaller analog to the one a Tesla Model S crashed into in Florida in May while in Autopilot mode. The group that put it there, Consumer Watchdog, has been hounding regulators and self-driving car manufacturers to take the process of developing and deploying the technology slowly.

It was a reminder of what’s at stake for the people inside the building, a reminder that’s become rote at self-driving car conventions, press events, webinars and speeches: Safety is the No. 1 priority behind the technology. Since humans are responsible for most traffic accidents, the theory goes, giving control to a computer should dramatically reduce the number of people killed on the road every year.

The Autopilot crash marked the first confirmed fatality to occur while a car was in self-driving mode. There have been many minor crashes, which Google and the California Department of Motor Vehicles have begun documenting, but they are almost always fender benders and almost never the fault of the automated driving software.

Consumer Watchdog’s white truck, similar to the one hit in May by a Tesla Model S in Autopilot mode, bears a sign reading “Tesla don’t hit me!” The group parked the vehicle outside the 2016 Automated Driving Symposium in San Francisco. Photo courtesy of Consumer Watchdog.

Still, it has loomed large over the world of automated driving since Tesla broke the news in June, grabbing headlines and prompting think pieces.

And perhaps the most important piece of it all is that the accident has prompted an investigation from the National Highway Traffic Safety Administration (NHTSA) into the efficacy of Autopilot.

That is exactly the type of scrutiny some speakers at the symposium think the industry needs in order to finally make the leap from development to deployment. After all, there has been precious little public data available to gauge just how much safer automated driving might be compared with human driving. Preliminary research has suggested that self-driving cars might get into accidents more often than human-driven cars, but that those accidents might be much less serious and destructive on the whole. When Tesla announced the NHTSA investigation, it noted that Autopilot had logged more miles per fatality than the U.S. average for human-driven cars.

And really, some of the speakers argued, such research will be the key to government decisions on how to regulate self-driving cars.

“The process of drafting legislation or regulation is only possible when this kind of information is available for [review] — industry technology standards, data about performance across different conditions, information about the real-life risks you’re wanting to protect the driver from,” said Sarah Hunter, head of policy for Google X. “Without this sort of information, even the most simple kinds of question are very [difficult to answer].”

Bryant Walker Smith, a law professor at the University of South Carolina who has written about the legality of automated driving, pointed out that at present, makers of self-driving cars will need to either get certain federal rules changed or apply for exemptions in order to put cars on the street. An easy example is the Federal Motor Vehicle Safety Standards, which require a vehicle to have features such as a steering wheel.

Google has been developing self-driving cars without steering wheels. Giving a human the ability to relax and take their eyes off the road, only to suddenly hand the task of driving back to them later, is dangerous, Google representatives have posited.

“If you simply want an exemption, if you want to get around a law of general application, then show the government why that is justified," Smith said during the symposium. “And in showing government, show the public that it represents the same.”

That last part is especially crucial, Smith believes. As regulators begin to grapple with how to approach automated driving — a technology with the potential to remake cities so that they have more space and serve more people more efficiently — they will be looking to the public, he said. Public perception, which will be influenced by the data available on automated driving, will certainly play a part in the development of regulations.

As of now, an oft-repeated standard from automated vehicle developers is that self-driving cars should be allowed on roads as soon as they are safer, on average, than human drivers. But as other speakers at the symposium noted, real research on fully automated driving is scant.

“My prediction," Smith said, "is that automated driving will truly be imminent when an automated driving developer is willing and recognizes the necessity of sharing its safety philosophy with the public through concrete data and thorough analysis."

Ben Miller is the associate editor of data and business for Government Technology. His reporting experience includes breaking news, business, community features and technical subjects. He holds a bachelor’s degree in journalism from the Reynolds School of Journalism at the University of Nevada, Reno, and lives in Sacramento, Calif.