Self-Driving Car Tech Is Easy: Autonomous Morals Are The Killer
Your self-driving car is cruising at a smooth 50 mph when a kid chases their ball into the road. Swerve, and the kid is safe but your car will crash; keep going, and there's a good chance of running them over. With a split second to react – not enough to hand responsibility back to whoever is inside the vehicle – what should the AI in charge do?
That's the moral question running alongside the technical and legal challenges of making autonomous cars, and it's arguably no less tricky to address. With a legal system – and, for that matter, an insurance system – built around human judgment and flesh-and-blood reaction times, introducing computers with programmed-in morals adds a whole new, philosophical edge to the discussion.
Researchers Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan – of the Toulouse School of Economics, the University of Oregon, and the Massachusetts Institute of Technology, respectively – set out to pin down exactly what people faced with this sort of dilemma think the cars should do, and exactly who should be sacrificed.
Turns out, when it comes to choosing between "two evils" – such as running over a pedestrian or putting the occupants of a self-driving car in danger – the participants in the study favored self-sacrifice by whoever is inside the vehicle.
Indeed, "even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles" the researchers say.
Unsurprisingly, there's a strong degree of self-interest where self-driving car safety is concerned. The study found that so-called "utilitarian autonomous vehicles" – which would sacrifice whoever is being transported for the "greater good" – were not only approved of, but also the preferred choice when other people were buying such cars.
However, study participants themselves would much rather ride in a car whose AI was programmed to preserve their own lives above all others.
That appetite for self-preservation could have a serious knock-on effect on regulations and the eventual roll-out of vehicles that drive themselves, it's suggested. Those surveyed expressed disapproval of any enforced "utilitarian" behavior for autonomous vehicles, saying they'd be less likely to buy a car where that was the case.
"Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology," the authors conclude.
Where it gets particularly complicated, of course, is when you consider that few real-world incidents are going to be as clear-cut as "does the occupant die or does the pedestrian die?"
Instead, there's likely to be some combination of potential death or injury, more than two people to consider – say, swerving to avoid a group of people in the road but then endangering a single person elsewhere – and the certainty that no two individuals in the situation will be the same in terms of age, overall health, and other factors.
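To see why that gets thorny fast, here's a purely hypothetical sketch – not drawn from the study, and nothing like how any real autonomous-driving system is actually programmed – of what a toy "utilitarian" comparison between two maneuvers might look like. All of the names, numbers, and weightings below are invented for illustration.

```python
# Hypothetical illustration only: a toy "utilitarian" comparison that scores
# two maneuvers by expected harm. The outcomes and probabilities are invented.

from dataclasses import dataclass

@dataclass
class Outcome:
    people_at_risk: int          # how many people this maneuver endangers
    probability_of_harm: float   # estimated chance those people are hurt

def expected_harm(outcome: Outcome) -> float:
    """Toy utilitarian score: people endangered times likelihood of harm."""
    return outcome.people_at_risk * outcome.probability_of_harm

# The article's example: stay the course toward a group in the road,
# or swerve and endanger a single person elsewhere instead.
stay_course = Outcome(people_at_risk=3, probability_of_harm=0.8)
swerve = Outcome(people_at_risk=1, probability_of_harm=0.5)

choice = min((stay_course, swerve), key=expected_harm)
print("Lower expected harm:", "swerve" if choice is swerve else "stay course")
```

Even a toy like this shows where the moral questions hide: someone has to decide how occupants are weighed against pedestrians, and injury against death, and those are exactly the judgments the researchers are probing.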
Even just choosing one "moral" version over another could introduce further legal issues, it's suggested, since picking a car that favors its occupants could itself be seen as accepting a form of liability.
"If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them," the authors ask, "is the buyer to blame for the harmful consequences of the algorithm's decisions?"
Already, we've seen some automakers take at least a partial stance on this; Volvo, for instance, has publicly committed to being responsible for any accidents caused by its vehicles while in autonomous mode.
Nonetheless, it's a question with no straightforward answer, and while many companies are busy hunting down solutions to the technical side of self-driving vehicles, there's comparatively little discussion about how those vehicles will coexist with everyone else.
SOURCE Science