After a self-driving Uber SUV fatally struck a pedestrian last weekend in Arizona, global attention is concentrated on the future of autonomous vehicles. Here is a summary of what happened, the public reaction and some of the tough questions that need to be addressed moving forward to build trust.
Now it’s happened. We’ve had our first well-publicized fatal accident involving a fully autonomous vehicle.
“The Volvo was in self-driving mode with a human backup driver at the wheel when it struck 49-year-old Elaine Herzberg as she was walking a bicycle outside the lines of a crosswalk in Tempe, police said.
Uber immediately suspended all road-testing of such autos in the Phoenix area, Pittsburgh, San Francisco and Toronto. The ride-sharing company has been testing self-driving vehicles for months as it competes with other technology companies and automakers like Ford and General Motors. ...”
Global coverage of this accident has been unprecedented, and the issues raised range from the role of the human backup driver to the reliability of the technology to new state and federal laws governing autonomous vehicles.
A full investigation is underway, and all sides cautioned against jumping to any conclusions without all the facts being examined. Nevertheless, several early articles seemed to defend the autonomous Uber and claim the woman may have suddenly and unexpectedly jumped in front of the car.
However, by later in the week, video released by the police showed the woman more than halfway across the road, walking slowly with her bicycle.
Here is a sample of some of the headlines involving this topic:
Bloomberg: Uber victim stepped suddenly in front of self-driving car — “Police say a video from the Uber self-driving car that struck and killed a woman on Sunday shows her moving in front of it suddenly, a factor that investigators are likely to focus on as they assess the performance of the technology in the first pedestrian fatality involving an autonomous vehicle.”
TheVerge: Uber ‘likely’ not at fault in deadly self-driving car crash, police chief says. “Uber was likely not at fault in the deadly crash of its self-driving vehicle in Arizona on Sunday evening, Tempe Police Chief Sylvia Moir told the San Francisco Chronicle in a startling interview the following day. Her comments have caused a stir in this closely watched investigation, which is being characterized as the first human killed by an autonomous vehicle.”
NY Times: Uber SUV's Autonomous System Should Have Seen Woman — “Two experts say video of a deadly crash involving a self-driving Uber vehicle shows the sport utility vehicle's laser and radar sensors should have spotted a pedestrian, and computers should have braked to avoid the crash.”
The Atlantic: Self-Driving Cars Still Don't Know How to See — “This is the second death in the United States caused by a self-driving car, and it’s believed to be the first to involve a pedestrian. It’s not the first accident this year, nor is this the first time that a self-driving Uber has caused a major vehicle accident in Tempe: In March 2017, a self-driving Uber SUV crashed into two other cars and flipped over on the highway. As the National Transportation Safety Board opens an inquiry into the latest crash, it’s a good time for a critical review of the technical literature of self-driving cars. This literature reveals that autonomous vehicles don’t work as well as their creators might like the public to believe.”
USA Today: Operator of self-driving Uber vehicle that killed pedestrian was a felon — “The operator behind the wheel of a self-driving Uber vehicle that hit and killed a 49-year-old woman Sunday night had served almost four years in an Arizona prison in the early 2000s on an attempted armed robbery conviction.”
WSJ: How Uber’s Self-Driving Car Could Have Missed the Pedestrian — “The roads north of Arizona State University are in many ways ideal for testing self-driving cars, with wide, clearly marked lanes and minimal traffic late at night, when the vehicle’s laser sensors work best.
The optimal conditions make it especially troubling that an Uber Technologies Inc. self-driving car plowed straight into and killed a pedestrian walking across a street here at night, without appearing to brake or veer, according to a video from the vehicle released by police Wednesday.”
Background: Autonomous Vehicle Context Within Technology Trends and Platforms
The Internet of Things (IoT), and, more specifically, smart cities and smart transportation systems, are the hottest of leading-edge technologies as we head toward 2020.
Within this IoT space, autonomous vehicles (AV) are perhaps the No. 1 area of public interest. (Side Note: Gartner puts AV within the artificial intelligence (AI) area). Public- and private-sector involvement (or lack thereof, as the case may be) and global research are all around us. Here are helpful articles on the background and research regarding the technology.
I really like this analysis from Undark.org on key questions posed by the rapid rise of autonomous technology. There are numerous questions and ethical considerations that many people are examining much more closely now that someone has died. This examination is very important, but I am sure the auto companies were hoping to discuss these challenges and trade-offs under different circumstances.
Here is an excerpt:
“Why didn’t the automated car avoid the pedestrian? Was there a sensor failure? A faulty algorithm? Why didn’t the human “safety driver” prevent the crash? Was the car at fault? Or, as seems to be implied by coverage that takes pains to point out that she was not on a crosswalk, was the pedestrian really to blame? Talking to The San Francisco Chronicle, Tempe Chief of Police Sylvia Moir noted gravely that “It is dangerous to cross roadways in the evening hour [outside crosswalks] when well-illuminated, managed crosswalks are available.”
As for Uber, Sunday’s fatality clearly hurts their numbers. Their test fleet has logged over two million miles, a figure the company hit in late December. That would extrapolate out to a rate of close to 50 fatal crashes per 100 million miles driven, which, caveats about apples and oranges aside, stands poorly against the figure for human drivers in the U.S. of 1.18 deaths per 100 million miles.”
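The extrapolation in the excerpt is simple arithmetic and easy to check: one fatality over roughly two million test miles, scaled to the standard per-100-million-mile benchmark used for U.S. traffic safety statistics. A minimal sketch using only the figures quoted above:

```python
# Verify the fatality-rate extrapolation quoted in the Undark excerpt.
UBER_TEST_MILES = 2_000_000     # fleet mileage Uber reported in late December (per the excerpt)
FATALITIES = 1                  # the Tempe crash
BENCHMARK_MILES = 100_000_000   # standard U.S. per-100-million-mile benchmark

uber_rate = FATALITIES / UBER_TEST_MILES * BENCHMARK_MILES
human_rate = 1.18               # U.S. human-driver deaths per 100M miles (per the excerpt)

print(f"Extrapolated Uber rate: {uber_rate:.0f} deaths per 100M miles")
print(f"Human-driver rate:      {human_rate} deaths per 100M miles")
print(f"Ratio: about {uber_rate / human_rate:.0f}x the human rate")
```

As the excerpt itself cautions, this is an apples-to-oranges comparison: a single event over a small mileage base yields a statistically fragile rate, which is exactly why the raw ratio looks so stark.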
Other important questions were raised earlier by the Institute of Electrical and Electronics Engineers (IEEE). Read this informative article on “the big problem with self-driving cars,” which is the behavior of people crossing streets in congested neighborhoods without rules being followed.
Wired Magazine offered a helpful perspective on the current reality: research shows that puny humans still see the world better than self-driving cars. “You’re probably safer in a self-driving car than with a 16-year-old, or a 90-year-old,” says Schoettle. “But you’re probably significantly safer with an alert, experienced, middle-aged driver than in a self-driving car.” (Vindication for those 40-somethings feeling past their prime.)
Public-Sector Involvement Regarding Autonomous Vehicles
Here are some of the important policy efforts related to autonomous vehicles over the past few years:
Important Questions and Next Steps After This Arizona Accident
So a fatal accident has occurred. What’s next?
First, we must recognize that states and political leaders are competing for dollars, jobs, research facilities and reputations as innovators as they try to attract more autonomous vehicle testing to their states. No one wants anyone to die in the process, but Uber moved its testing to Arizona from California because Arizona imposed less regulation on autonomous vehicles.
Second, TheNextweb.com discussed how states are competing to get autonomous vehicles on the road, but asked: Should they be? After this accident, there is no doubt that the risks are higher, but most governors are not changing their policies and are taking a wait-and-see attitude. The article lists several pros and cons for current policy, and now citizens are paying attention.
Third, here are several different road maps and next steps to consider:
Wrap-Up — My Take
I am generally an advocate for autonomous vehicles, and I am excited about the prospect of driverless cars coming soon for my family.
Nevertheless, I was surprised by several articles (like this one) that came out immediately after the accident and basically defended autonomous vehicles on the grounds that thousands of people die on the highways every year. While technically true, these AV defense pieces seemed pre-written, ready to be published as soon as someone died in an autonomous vehicle crash.
Not only do I think these articles are in poor taste (written too soon after the accident), I also think they don’t help the auto industry as much as the authors think they do.
Building trust is key, and an unstated point of these pieces is that some people will need to die in order to move society more quickly to an all-autonomous road situation (with presumably fewer accidents once we reach this transportation utopia). This defense makes me feel that victims are being compared to soldiers fighting in wars, dying for the good of society. In my opinion, this comparison doesn’t work, because we are not talking about enemy combatants but accidental deaths on highways.
More important, this rationale does not build societal trust in the testing and public/private decision-making processes with autonomous vehicles during the next decade of transitions. It may actually lower trust in autonomous vehicles.
To build more trust in AV, the facts of this case (and future cases) must be transparent. Blaming the woman for not using a crosswalk won’t help, because most drivers have seen this situation many times and reacted. The public wants to know why the car did not see the woman, or at least slow down or swerve, when she was halfway across the road. Was this truly a one-off situation, or a deeper flaw?
We have not even added in other autonomous vehicle questions/factors such as hacking or bad weather or unmarked roads, which will complicate the conversation further.
Talking about the Uber accident, one friend commented, “I am now watching more closely to see what happens over the next few years. Fool me once, shame on you, fool me twice, shame on me.”
Bottom line, everyone in the world is now focused more intently on autonomous vehicle testing, and the margin for error just got thinner.