Where can you go to help an autonomous car think?

Answer: MIT’s Moral Machine

The good folks at the Massachusetts Institute of Technology (MIT) have developed an online program that gathers human perspectives on the moral decisions a driverless car may have to make. Scenarios are presented to the user, who must choose the lesser of two evils: killing group A or group B.

Participants are told that an autonomous vehicle's brakes have failed. They must decide whether the car will keep going in its lane and hit the pedestrians ahead, or swerve into the other lane and strike another set of pedestrians. In some cases, a cement barrier is presented as an alternative choice, but selecting it will kill the driver and passengers.

The user is not part of the scene but makes the choice on the car's behalf. Thirteen dilemmas are posed, each varying the number and type of pedestrians and of people in the vehicle.

Variables such as age, gender, weight, social value and species are also varied to add complexity. The collected data serves as a crowd-sourced picture of how humans believe a car should act in these situations.
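To make that structure concrete, here is a minimal sketch of how one such dilemma might be represented as data. The class names and attribute labels are illustrative assumptions for this article, not MIT's actual schema or code.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

# Hypothetical attribute sets; the real Moral Machine defines its own.
class Species(Enum):
    HUMAN = "human"
    PET = "pet"

@dataclass
class Character:
    species: Species
    age: str           # e.g. "child", "adult", "elderly"
    gender: str
    social_value: str  # e.g. "doctor", "none" -- illustrative labels only

@dataclass
class Dilemma:
    # Option A: stay in lane; Option B: swerve into the other lane.
    stay_victims: List[Character]
    swerve_victims: List[Character]
    swerve_hits_barrier: bool = False  # if True, swerving kills the occupants

def record_choice(dilemma: Dilemma, chose_swerve: bool) -> dict:
    """Flatten one participant's decision into a row for aggregate analysis."""
    return {
        "chose_swerve": chose_swerve,
        "stay_count": len(dilemma.stay_victims),
        "swerve_count": len(dilemma.swerve_victims),
        "barrier": dilemma.swerve_hits_barrier,
    }

if __name__ == "__main__":
    d = Dilemma(
        stay_victims=[Character(Species.HUMAN, "child", "female", "none")],
        swerve_victims=[
            Character(Species.PET, "adult", "male", "none"),
            Character(Species.HUMAN, "elderly", "male", "doctor"),
        ],
    )
    print(record_choice(d, chose_swerve=False))
```

Collected across thousands of participants, rows like this are what allow researchers to ask how factors such as age, species or group size shift people's judgments.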