Should Your Self-Driving Car Kill You or the Two People Walking Across the Street?

In the event of an accident, a utilitarian algorithm would instantly calculate the number of lives at stake inside the vehicle, versus the number of lives at stake if it hits bystanders.

(TNS) -- People are less likely to buy a self-driving car if it may deliberately kill them in an accident -- even if it saves the lives of bystanders who would otherwise be killed. They especially don't like it if such a feature is mandated by the government.

That's the not entirely surprising conclusion of a study examining public acceptance of such "moral" or "utilitarian" cars, which would be programmed to make ethical calculations to kill the fewest people.

Mandating vehicles to act in the name of the collective good runs up against the public's desire to control their safety as individuals, said the study, published Thursday in Science.

In surveys of hundreds of people, the concept of self-driving cars proved popular, but only if they put passenger safety first.

Autonomous vehicles have been in limited use for years; Google uses them to photograph areas for its ground maps, and automakers are now testing them. But before they can be made generally available, laws must be updated to resolve questions such as liability: in the event of an accident, who is responsible -- the passenger or the company that made the car?

Such accidents would actually be quite rare, researchers say. Human error, the cause of 90 percent of accidents, would be removed. Driving would become far safer, and traffic casualties would plunge.

However, most of those surveyed said they didn't want to surrender control of life-and-death decisions in their own vehicles. They would rather forgo such cars and continue driving themselves.

So, paradoxically, equipping self-driving cars with this "utilitarian programming" could result in more deaths than leaving it out, the study said.

Here's how the programming would work. In the event of an accident, an algorithm would instantly calculate the number of lives at stake inside the vehicle, versus the number of lives at stake if it hits bystanders.

If the car cannot safely protect both passengers and bystanders, the algorithm would act to kill the fewest number of people, even if it means sacrificing its occupants. For example, it might cause the car to veer into a concrete wall to miss a group of pedestrians.

And if the car could not avoid hitting bystanders, it would hit the fewest possible.
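The decision rule described above can be sketched as a toy function. This is purely an illustration of the utilitarian logic the study describes, not the study's actual model; the function and maneuver names here are hypothetical.

```python
# Toy sketch of a utilitarian crash algorithm: score each candidate
# maneuver by the total number of lives it is expected to cost and
# pick the minimum -- even when that sacrifices the car's occupants.

def choose_maneuver(maneuvers):
    """maneuvers: list of (name, passenger_deaths, bystander_deaths) tuples."""
    return min(maneuvers, key=lambda m: m[1] + m[2])

options = [
    ("continue ahead", 0, 10),  # hit the group of pedestrians
    ("veer into wall", 2, 0),   # sacrifice both occupants
]
best = choose_maneuver(options)
print(best[0])  # "veer into wall" -- 2 total deaths beats 10
```

A passenger-protective car would instead minimize only the second element of each tuple, which is exactly the trade-off the surveys probed.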

Researchers surveyed people on their attitudes toward such calculations, presenting them with choices among algorithms that make these life-and-death trade-offs. In six surveys covering various scenarios, researchers found the results varied depending on whether the vehicle's passengers were sacrificed.

For example, an algorithm that told the car to kill one person to save 10 bystanders was popular, with 76 percent of the 182 people surveyed approving. Another survey of 451 people, with the number of bystander lives saved varying from 1 to 100, produced similar results.

But when the person killed was a passenger, the favorable response rate dropped by a third.

In other words, it was "considered a good algorithm for other people to have," but people valued it less highly as a desirable option when they might be the person sacrificed.

Government-enforced sacrificial algorithms were even more unpopular, as shown in a survey of 393 people asked to rate, from 1 to 100, their willingness to buy a self-driving car. For self-driving cars without such a mandate, the median response was 59. But if the algorithm was mandated, the median dropped to 21.

After all this questioning, researchers concluded that building in these death options could deter people from buying utilitarian autonomous vehicles.

"This is the classic signature of a social dilemma, in which everyone has a temptation to free-ride instead of adopting the behavior that would lead to the best global outcome," the researchers said in the study.

The authors of the new report were Jean-François Bonnefon of the University of Toulouse Capitole in Toulouse, France; Azim Shariff of the University of Oregon in Eugene; and Iyad Rahwan of the Massachusetts Institute of Technology in Cambridge. The study can be found at j.mp/autokill.

An accompanying perspective article said the public's conflicted feelings on allowing vehicles to choose which lives to spare mean that autonomous vehicles will be controversial no matter what choices are made.

"Manufacturers of utilitarian cars will be criticized for their willingness to kill their own passengers," writes Joshua D. Greene of Harvard University. "Manufacturers of cars that privilege their own passengers will be criticized for devaluing the lives of others and their willingness to cause additional deaths."

There is a chance that public opinion may shift over time, Greene stated in his piece, available at j.mp/carkillper.

"People may get used to utilitarian autonomous vehicles, just as some Europeans have grown accustomed to opt-out organ donation programs and Australians have grown accustomed to stricter gun laws," he wrote. "Likewise, attitudes may change as we rethink our transportation systems."

©2016 The San Diego Union-Tribune Distributed by Tribune Content Agency, LLC.