Tesla’s Elon Musk isn’t alone in predicting that autonomous vehicles will dominate the roadways within a few short years -- but not everyone shares that optimism.
Others, like Steve Shladover, a transportation researcher at the University of California, Berkeley, say Musk’s optimism is “ridiculous.” In fact, he told Automobile magazine that today’s public shouldn’t expect completely autonomous vehicles within their lifetime, because there are a lot of hurdles standing between the research and the road.
“Merely dealing with lighting conditions, weather conditions and traffic conditions is immensely complicated,” Shladover told the publication. “The software requirements are extremely daunting. Nobody even has the ability to verify and validate the software. I estimate that the challenge of fully automated cars is 10 orders of magnitude more complicated than [fully automated] commercial aviation.”
Shladover doubts, too, whether the bold safety claims made by proponents of self-driving vehicles would actually materialize once the vehicles became common. While robots respond and decide far faster than humans in controlled environments, automation does not eliminate the potential for disaster amid the chaos of the roadway; it merely shifts the potential for error from the human driver to the human programmer.
Chris Gerdes, a mechanical engineering professor at Stanford University and director of the Center for Automotive Research there, also expressed his doubts about self-driving vehicles. Gerdes championed the development of a self-driving Audi TTS that made headlines for keeping pace with human drivers, but even so he maintains that the human machine is an exceedingly difficult one to replicate. It may be lifetimes before artificially intelligent systems approach the complexity of the human brain and the capabilities it enables.
Issues of morality and ethics also stand in the way of self-driving vehicles.
If computers are expected to make decisions that humans would normally make, then a consensus would need to be established on what is right and wrong in various scenarios. Values on some common scale would need to be assigned to different life forms. A vehicle unable to stop in time before hitting one of several objects would then need to make a moral judgment about whether it is better to kill a deer, an old man, a young woman or a child, or to drive over a cliff and kill the passenger. Most would agree a dead deer is preferable to any outcome in which a human is killed, but animal rights advocates might take exception to such a judgment. In any case, the point stands: moral decisions are subjective and contentious, and they represent a massive barrier to the development of self-driving vehicles.
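To see why this is an engineering problem and not just a philosophical one, consider what such a consensus would ultimately have to become: a table of numbers inside the vehicle's software. The sketch below is a purely hypothetical illustration -- every name and harm value is invented, and no manufacturer is known to use anything like it -- but it shows how a "least harm" rule reduces to minimizing over assigned values, and why the contentious part is choosing the numbers, not writing the code.

```python
# Hypothetical illustration only: a crude "harm score" table of the kind a
# societal consensus would have to produce. Every value here is invented
# for the sake of the example; the controversy lies entirely in these numbers.
HARM_SCORE = {
    "deer": 1,
    "old man": 10,
    "young woman": 10,
    "child": 10,
    "passenger (over the cliff)": 10,
}

def least_harm(options):
    """Return the available outcome with the lowest assigned harm score."""
    return min(options, key=HARM_SCORE.get)

# The vehicle can hit the deer, hit the old man, or go over the cliff.
print(least_harm(["deer", "old man", "passenger (over the cliff)"]))
```

The one-line decision rule is trivial; the unresolved question is who gets to write the table, and whether an old man, a young woman and a child should really carry identical scores.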
Adjacent to the moral issues are legal ones. Automobile magazine reports that the existing legal framework in the U.S. is flexible enough to deal with the issues that would arise from self-driving vehicles, but lawsuits resulting from accidents would likely involve greater sums, because the targets of those lawsuits would not be the person in the vehicle but the manufacturer. While self-driving vehicles might increase safety overall and concomitantly reduce insurance costs, the potential for larger lawsuits against manufacturers with deeper pockets could raise insurance rates.
That vehicles might control themselves also raises fears that they might be controlled remotely. Fears of remote vehicle theft, kidnapping or terrorism pile on to an already substantial list of obstacles.
The challenges are many, no doubt, and experts continue to disagree about when the technology fueling self-driving vehicles will mature and spread. But few deny that it is coming, sooner or later.