It’s a far cry from hard proof, but the first attempt to determine whether self-driving cars are safer than conventional vehicles goes like this: They might get into more crashes, but the crashes are far less serious than the ones humans get into when they’re at the controls.
That’s according to a study released Thursday by the University of Michigan’s Transportation Research Institute. The report pulled together all the publicly available data on crashes involving self-driving cars and compared it with National Highway Traffic Safety Administration crash statistics for conventional vehicles.
After adjusting for under-reporting of accidents, researchers Brandon Schoettle and Michael Sivak found that:
AVs mostly got into low-speed rear-end crashes, rather than the more violent, higher-speed head-on and T-bone collisions that cause the worst injuries and fatalities among conventional vehicles. The AVs also didn’t hit any pedestrians or bicyclists, something that happens regularly with human-driven vehicles.
“Having a crash, of course, isn’t desirable, but the increased crash rate with the vehicles at this point doesn’t imply that they’re less safe,” said Schoettle, the lead author.
He compared the early trends to those found at intersections before and after traffic circles are installed. Though the number of crashes at an intersection can increase after a traffic circle is put in, the crashes are typically less severe because head-on and angled collisions mostly disappear. Damage to property is preferable to damage to human bodies.
For instance, Google's crash reports describe situations where drivers felt whiplash, but nothing more serious than that. Some crashes were so minor they didn't even involve damage to the vehicle, such as an incident where a human driver passed a Google AV on the right and brushed a sensor.
Some, including a representative of Consumer Watchdog, have posited that autonomous vehicles might be accident-prone because they behave differently from human drivers. Schoettle said the data in the study isn’t enough to weigh in on that question, but it's one possible explanation for why the vehicles appear to be involved in more crashes even though the self-driving software hasn’t yet been at fault.
In fact, Schoettle said, anecdotal evidence suggests that people in Mountain View, Calif. — the main testing ground for Google — act differently around the driverless cars.
“They see the vehicles and they recognize them and everyone is of the opinion that you want to get in front of it,” Schoettle said. “You don’t want to be behind it because it might be overly cautious and stop [when you don’t expect it to].”
The authors cautioned readers to take the findings with a few grains of salt. With just 11 self-driving car crashes on the record and almost all the miles driven in favorable weather, Schoettle said he and Sivak couldn’t yet rule out the possibility that self-driving cars actually get into fewer crashes than conventional vehicles.
“The margin of error, the confidence intervals for the self-driving vehicles, are still pretty large at this point,” he said. “The real number will fall between those two extremes, the margin of error.”
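The effect Schoettle describes can be sketched numerically. The snippet below is purely illustrative — it uses a simple normal approximation to a Poisson crash count, and the figures (11 crashes over roughly 1.2 million miles) are stand-in assumptions, not the study’s actual method or data:

```python
import math

def poisson_rate_ci(crashes, million_miles, z=1.96):
    """Approximate 95% confidence interval for a crash rate,
    in crashes per million miles.

    Uses the normal approximation to the Poisson count. With only
    a handful of crashes this approximation is rough -- which is
    exactly the study's point about wide confidence intervals.
    """
    rate = crashes / million_miles
    half_width = z * math.sqrt(crashes) / million_miles
    return max(rate - half_width, 0.0), rate + half_width

# Illustrative inputs: 11 reported AV crashes, ~1.2 million miles.
lo, hi = poisson_rate_ci(11, 1.2)
print(f"{lo:.1f} to {hi:.1f} crashes per million miles")

# The same observed rate over 100 million miles would yield a far
# narrower interval -- the added statistical power Schoettle wants.
lo_big, hi_big = poisson_rate_ci(11 / 1.2 * 100, 100)
print(f"{lo_big:.1f} to {hi_big:.1f} crashes per million miles")
```

With so little exposure, the interval spans several multiples of the observed rate, so a true rate lower than that of conventional vehicles cannot be ruled out; multiplying the mileage shrinks the interval roughly with the square root of the miles driven.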
But with Google — the company with the largest self-driving vehicle fleet, at least in terms of those cars reported to the California Department of Motor Vehicles — surpassing 1 million miles driven in autonomous mode earlier this year, Schoettle said he felt that research could finally begin into the cars’ safety.
The bottom line, he said, is that there needs to be more data.
“As is probably the case for the conventional vehicles, a significant increase in mileage … if we could get a situation where there’s 50 million or 100 million [miles driven in autonomous mode], that would really strengthen the statistical power of the self-driving data,” he said.