In a recent paper in Nature [1] entitled “The Moral Machine Experiment”, Edmond Awad et al. make a number of breathtakingly reckless assumptions, both about the decision-making capacities of current so-called “autonomous vehicles” and about the nature of morality and the law. If one accepts their bizarre premise that the holy grail is to ascertain public morality and then program driverless vehicles accordingly, the Moral Machinists’ argument proceeds in four steps:
1) Find out what “public morality” will prefer to see happen.
2) On the basis of this discovery, both claim popular acceptance of the preferences and persuade would-be owners and manufacturers that the vehicles are programmed with the best solutions to any survival dilemmas they might face.
3) Presume that citizen agreement, thus characterized, delivers moral license for the chosen preferences.
4) Take this license as “permission” to program vehicles to spare or condemn those outside the vehicle when their deaths would preserve the vehicle and its occupants.
This paper argues that the Moral Machine Experiment fails dramatically on all four counts.