For the First Time, Google Admits Partial Blame for Self-Driving Car Crash

The company said it will improve its self-driving software in response to a Feb. 14 crash involving one of its autonomous vehicles.

For the first time, Google has acknowledged that one of its autonomous test vehicles was at least partly responsible for a collision.

On Feb. 14, a Lexus RX450h outfitted with Google’s test-stage autonomous driving technology collided with the side of a passenger bus in Mountain View, Calif. According to the accident report Google filed with the state Department of Motor Vehicles (DMV), the Lexus AV was preparing to turn right at a light that had turned red. Because other cars ahead of it were stopped in the right lane, the AV edged toward the right side of the queue, only to find sandbags blocking its path. It then tried to merge back into the lane as the light turned green. The test driver, who saw a bus approaching the light in the rearview mirror, assumed the bus would yield and let the AV continue into the lane. The Lexus struck the side of the bus, damaging itself but injuring no one.

In its upcoming February self-driving car report, Google writes that it “clearly” bears some responsibility, because the crash wouldn’t have happened if the AV hadn’t moved.

“This is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision,” the report reads. “That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.”

The cars didn't stray from the middle of their lanes until a few weeks before the crash, according to the report; Google wrote that maneuver into the software in an attempt to avoid annoying drivers stuck behind the AVs at lights.

Company representatives said they will apply the lesson by building into the driving software an expectation that buses and other heavy vehicles are less likely to yield to its AVs than smaller vehicles are.

Since the company began testing its self-driving cars in 2009, it has staunchly defended their performance. Though its AVs have now been involved in 18 accidents, spokespeople have maintained that none of the previous 17 were the fault of the self-driving software.

Chris Urmson, who leads the company’s self-driving car project, has also spoken publicly about the value of the vehicles’ mistakes as a tool for improving the driving software, particularly disengagements. Disengagements occur when the self-driving software hands control back to the human driver because of a hardware failure, a sensor anomaly or other factors.

Urmson said the rate at which the self-driving software disengages has dropped dramatically over the course of testing, a sign that the software is getting better at operating without human help. One metric the company tracks is “simulated crashes,” instances in which the company believes the car would have crashed if a driver had not taken control. In the final three months of 2014, the Google fleet had eight such events; in all of 2015, it had five. Put in terms of miles driven, the rate of simulated crashes dropped from once every 6,250 miles to once every 74,000 miles.
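For readers who want to check the math, here is a quick back-of-envelope calculation in Python. The implied mileage totals are inferred from the reported event counts and per-mile rates; they are not stated directly in the report.

# Back-of-envelope check of the simulated-crash figures cited above.
# The implied mileage totals are derived from the reported rates,
# not quoted from Google's report.

periods = {
    "Final three months of 2014": {"events": 8, "miles_per_event": 6250},
    "All of 2015": {"events": 5, "miles_per_event": 74000},
}

for label, p in periods.items():
    implied_miles = p["events"] * p["miles_per_event"]
    rate_per_1000_miles = 1000 / p["miles_per_event"]
    print(f"{label}: roughly {implied_miles:,} miles driven, "
          f"{rate_per_1000_miles:.3f} simulated crashes per 1,000 miles")

# Output:
# Final three months of 2014: roughly 50,000 miles driven, 0.160 simulated crashes per 1,000 miles
# All of 2015: roughly 370,000 miles driven, 0.014 simulated crashes per 1,000 miles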

Consumer Watchdog, which has been monitoring the development of AV regulations in California, called on Google to publicly release video of the incident. John Simpson, director of the group’s privacy project, wrote in a statement that the crash is a reminder that California’s DMV should insist such cars be equipped to hand control over to a human when necessary.

That has become a sticking point in the DMV’s development of AV regulations. Draft regulations the department has published would require a licensed driver to sit behind the wheel of an AV at all times, ready to take control. Google and other companies have argued that those rules would, at best, stifle the development of cars that could do things like drive the blind from place to place. At worst, they might actually be dangerous: some have argued that it’s not a good idea to let a human driver do whatever they want and then suddenly ask them to take control of a moving car.

Simpson said the disengagement reports, and now the bus crash in Mountain View, show exactly why the cars aren’t ready to operate without human intervention. Even if they can handle most situations, he argued, they haven’t reached the level of sophistication needed to perform on their own under every condition.

“This accident is more proof that robot car technology is not ready for auto pilot and a human driver needs to be able to take over when something goes wrong,” he said.

Ben Miller is the associate editor of data and business for Government Technology. His reporting experience includes breaking news, business, community features and technical subjects. He holds a bachelor’s degree in journalism from the Reynolds School of Journalism at the University of Nevada, Reno, and lives in Sacramento, Calif.