I ran across two articles today on the trolley problem as it applies to driverless (or self-driving) cars:
one in Science by Joshua Greene and
one in the LA Times by Karen Kaplan. Both are based on
this article by Jean-François Bonnefon and colleagues in today's issue of
Science. We discussed the trolley problem briefly
here at PT last October. More precisely, we discussed an extended trolley problem wherein you are in a driverless car and the choices are to kill 5 people, kill 1 person, or kill yourself.
The current research also concerns driverless cars. Not surprisingly, the researchers found support for driverless cars choosing to kill one person rather than five, but they also found that such support withered when the respondent imagined being the one sacrificed. Their result is in fact completely consistent with the
research of April Bleske-Rechek, which I outlined in
my talk on the evolution of morality. Professor Bleske-Rechek found that people's willingness to sacrifice one person in favor of five decreased with, for example, increasing relatedness of the one person.
Professor Bonnefon and his colleagues employed a survey, similarly to Professor Bleske-Rechek and hers, and found that people's enthusiasm for a "utilitarian" car – a car that will sacrifice the driver in favor of a larger number of pedestrians – decreased as the driver became more closely related to the respondent. Professor Greene asks whether driverless cars should indeed be programmed to be utilitarian in that sense; or programmed to behave in some other way, say, to save the driver; or simply be programmed to avoid a crash, come what may. He notes,
Manufacturers of utilitarian cars will be criticized for their willingness to kill their own passengers. Manufacturers of cars that privilege their own passengers will be criticized for devaluing the lives of others and their willingness to cause additional deaths.
Professor Bonnefon and colleagues similarly conclude,
Although people tend to agree that everyone would be better off if AVs [autonomous vehicles] were utilitarian (in the sense of minimizing the number of casualties on the road), these same people have a personal incentive to ride in AVs that will protect them at all costs. Accordingly, if both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so. ... [M]ost people seem to disapprove of a regulation that would enforce utilitarian AVs. Second--and a more serious problem--our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether.
This question – whether to design utilitarian cars or to let the chips fall where they may – is precisely the trolley problem which, as I showed in my talk, is very real and not simply a philosophical exercise.
12 Comments
RJ · 26 June 2016
We could move to a society with a lot fewer cars instead, on the utilitarian grounds that car accidents are one of the leading preventable causes of death. The resulting massive reduction in the need for petroleum would also greatly reduce the likelihood of resource-based wars, which bring death as well as other very disutilitous outcomes, 99% of the time to people who have no stake in the outcome of said wars.
Wanna massively increase human utility? Massively decrease car travel.
I thank you for demonstrating that this sort of problem deserves to be taken seriously rather than being used as a reason to ridicule philosophy. In my opinion, a refusal to consider the abstract entailments of one's views is a refusal to examine oneself.
Matt Young · 26 June 2016
I certainly think we need to reduce travel by car, but we are not going to do so any time soon. The next best approach to reducing "death by car" may therefore be to develop driverless cars (and also vastly smaller cars). In the 1970s, during the so-called oil crisis, we confidently predicted that by 2000 cars would weigh 500 kg, get 100 mi/gal, and bounce off each other when they collided. That was almost as serious an error as when I predicted that the 3-point basket would have little effect on girls' basketball.
Back on task: I have been thinking more about the problem and realize that it is not the same as the trolley problem. In the trolley problem, you are guaranteed to kill 5 people or 1. In the driverless-car problem, you are not guaranteed to kill anyone, if the car can avoid an accident. So the car has to make a choice among perhaps 4 options: kill 5 people, kill 1, kill the driver, and avoid an accident entirely.
These considerations, combined with the observation that hardly anyone would buy a car programmed to kill the driver, make me think the best option is to program the car so that it does its best to avoid an accident, come what may. Supposedly, it is more capable of avoiding an accident than is a human driver, so maybe driverless cars should not be programmed to make any ethical decisions whatsoever.
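[To make the comparison among the four options concrete, here is a purely illustrative sketch – not any real AV software, and all names and numbers are hypothetical – of a policy that ranks candidate maneuvers by predicted casualties and, on ties, protects the occupant, as the survey results suggest buyers would demand. – Ed.]

```python
# Purely illustrative sketch of a casualty-minimizing maneuver policy.
# Every maneuver name and casualty estimate below is hypothetical.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrian_deaths: int  # predicted deaths outside the car
    driver_dies: bool       # whether the occupant is predicted to die

def choose_maneuver(options):
    """Pick the maneuver with the fewest predicted casualties.

    Ties are broken in favor of the occupant, reflecting the finding
    that buyers reject cars programmed to sacrifice them.
    """
    def cost(m):
        casualties = m.pedestrian_deaths + (1 if m.driver_dies else 0)
        return (casualties, m.driver_dies)  # second element breaks ties
    return min(options, key=cost)

options = [
    Maneuver("stay course", pedestrian_deaths=5, driver_dies=False),
    Maneuver("swerve left", pedestrian_deaths=1, driver_dies=False),
    Maneuver("swerve right into wall", pedestrian_deaths=0, driver_dies=True),
    Maneuver("emergency brake", pedestrian_deaths=0, driver_dies=False),
]
print(choose_maneuver(options).name)  # prints "emergency brake"
```

When an accident-free maneuver exists, it dominates, so the ethical question only arises in the rare cases where every option has a nonzero cost.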
I wonder what other people think?
Mike Elzinga · 26 June 2016
The Trolley Problem actually fits within a broader perspective of human choices that affect the lives of millions of others. If those "others" are strangers, or "foreigners," or especially those who are not born yet, then the weight of these others in critical decisions is considerably diminished in the decision-making process. Zero-sum games, such as "The Tragedy of the Commons," have this feature.
Economist Nicholas Georgescu-Roegen had an interesting way of putting it with regard to how our economic decisions about consumption affect future generations. He said that our decision-making processes suggest that we think, "Why should we care about posterity; what has posterity ever done for us?"
Physicist Albert Bartlett's book "The Essential Exponential" outlines the issue with mathematics that is relatively easy to understand.
The relationship of this to autonomous vehicles is that the AV problem fits into a larger context of population, lifestyle, working environments, communication, and the distribution of goods and services.
It doesn't appear to me that the AV problem has a satisfactory solution within the context of our current economic culture and lifestyles. We would have to revamp our entire world economy in order for AVs to be safe in a way that doesn't pit our individual lives against the lives of others of less personal value to us.
PaulBC · 26 June 2016
eric · 27 June 2016
TomS · 27 June 2016
eric · 27 June 2016
Mike Elzinga · 27 June 2016
Just Bob · 27 June 2016
eric · 28 June 2016
RJ · 28 June 2016
Uber is not any kind of 'sharing' service. It's an unlicensed taxi company that attempts to force municipal governments to withdraw safety and labour regulations without a fight.
I'll never take Uber, and won't deal with any company that relies on it. We already have too many anti-democratic elements holding municipal governments in an armbar.
No, not the way forward. Disutilitarian.
By the way, the assertion that massive car reduction is impossible is a self-fulfilling prophecy. We need to reduce carbon emissions in a big way; why not make it a lot easier to travel with shared resources?
harold · 5 July 2016