Extended trolley problem

Posted 28 October 2015

I just ran across this article, "Should a self-driving car kill its passengers in a 'greater good' scenario?" Though the article does not say so, it is the trolley problem, but with a twist: you are the driver of the trolley, and you have to ask whether you ought to be sacrificed for the greater good. That is, there are now three possibilities, not two: do nothing and kill five people; swerve and kill one; or (the added possibility) swerve and kill yourself. Any thoughts?

33 Comments

eric · 28 October 2015

The strictures on the trolley problem are so completely unrealistic as to render it a meaningless question (IMO) in terms of real ethical issues. We never have certainty about what will happen. And we don't judge people on pure outcome, but also on what they could realistically have known or estimated about the outcome, as well as on the time they had to make a decision. Limited-time decisions tend to be assigned less moral culpability, and the trolley problem gives you essentially no time to decide (if there were lots of time, the people would move out of the way, rendering the problem void). IMO, if it really happened, we would not assign *any* moral culpability to the driver, *regardless* of their decision.

However, in the spirit of stupid 'truth or dare'-type questions, my answer is: before I had kids, #3. Now that I have them, #2.

John Harshman · 28 October 2015

What is my degree of relatedness to the people who might be killed? Clearly I must act to maximize my inclusive fitness. Actually, we must consider both relatedness and expected future reproductive output (counting any remaining parental care).

There, back on topic for Panda's Thumb.

Marilyn · 29 October 2015

Everyone should be aware they are built to save the occupants.

DS · 29 October 2015

Everyone should be aware that they are evolved, not built.

Kevin B · 29 October 2015

DS said: Everyone should be aware that they are evolved, not built.
Actually, Marilyn's statement is sufficiently ambiguous that she might have been referring to the "self-driving cars", or indeed the invisible pink unicorns.....

Just Bob · 29 October 2015

What... isn't it egg cartons?

John Harshman · 29 October 2015

Kevin B said:
DS said: Everyone should be aware that they are evolved, not built.
Actually, Marilyn's statement is sufficiently ambiguous that she might have been referring to the "self-driving cars", or indeed the invisible pink unicorns.....
If Marilyn was referring to everyone by "they", what would the occupants be? Would she be alleging that you were built to save your bacteria, intestinal biota, and tapeworms?

Matt Young · 29 October 2015

What is my degree of relatedness to the people who might be killed? Clearly I must act to maximize my inclusive fitness. Actually, we must consider both relatedness and expected future reproductive output (counting any remaining parental care). There, back on topic for Panda’s Thumb.

I described some objective research (not mine) on the trolley problem here. See Slides 12 ff. and especially 15-18. The gist: People's willingness to throw the switch decreases with their relatedness to the "target" and increases with the age of the target. I have unaccountably never reported this material on PT. See April Bleske-Rechek, Lyndsay A. Nelson, Jonathan P. Baker, Mark W. Remiker, Sarah J. Brandt, “Evolution and the Trolley Problem,” Journal of Social, Evolutionary, and Cultural Psychology 4 (3), 115-127 (2010) for details.

Michael Fugate · 29 October 2015

I always opt to swerve, and everyone survives.
Otherwise, doing nothing is easier to justify.

eric · 29 October 2015

Kevin B said:
DS said: Everyone should be aware that they are evolved, not built.
Actually, Marilyn's statement is sufficiently ambiguous that she might have been referring to the "self-driving cars", or indeed the invisible pink unicorns.....
I took her comment as referring to the trolley. I.e., the trolley is built to save the occupants. I'm not sure that's actually true, but it sounded to me like her point was 'avoid both bystanders and trust that the trolley may prevent you, the operator, from dying.' It's a valid position IRL. It's not in the philosophy version, because in that version you know with absolute certainty that option #3 leads to your death. Which just highlights the unrealistic nature of the philosophy version.

TomS · 29 October 2015

One thing to take into account is the capability of the other actors in the case.

I mean that I can expect that some others can take care of themselves, so I can focus my attention on those who cannot. Children rather than adults; pedestrians and bicyclists rather than motorized vehicles; those who are unaware of the danger and those whose options are limited merit more care on my part.

I think that the rules of the sea and the air take account of something like that. Those craft which are less maneuverable have a higher priority.

Also, I think that doing nothing, or at least, making no unpredictable changes, allows others to react to what you are doing.

Marilyn · 29 October 2015

It, that is, the AV, should not be taking a corner so fast that it could not stop. They should be built robust, with proper sensors and warning signals, and if someone is going to be at the wheel they should be able to override the automatic system. The same situation could occur in an ordinary car; then it would be the driver who was responsible for the decision, not the computer program that was programmed by a person. Cars now have sensors that enable them to park themselves safely. They shouldn't be allowed to swerve; they should be made to stop or fly or something.

John Harshman · 29 October 2015

Matt Young said:

What is my degree of relatedness to the people who might be killed? Clearly I must act to maximize my inclusive fitness. Actually, we must consider both relatedness and expected future reproductive output (counting any remaining parental care). There, back on topic for Panda’s Thumb.

I described some objective research (not mine) on the trolley problem here. See Slides 12 ff. and especially 15-18. The gist: People's willingness to throw the switch decreases with their relatedness to the "target" and increases with the age of the target. I have unaccountably never reported this material on PT. See April Bleske-Rechek, Lyndsay A. Nelson, Jonathan P. Baker, Mark W. Remiker, Sarah J. Brandt, “Evolution and the Trolley Problem,” Journal of Social, Evolutionary, and Cultural Psychology 4 (3), 115-127 (2010) for details.
I hope everyone realizes that I was joking. Kin selection is not the best basis for morality, though it may go some way toward an evolutionary explanation.

Matt Young · 29 October 2015

I hope everyone realizes that I was joking. Kin selection is not the best basis for morality, though it may go some way toward an evolutionary explanation.

I took it as half-joking. I think it can be argued that your first responsibility is to yourself, and that you have no ethical duty to drive your car over a cliff on behalf of someone else. I have never researched this question, but if I had been Haldane, I would have sacrificed myself for no fewer than three brothers.
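A back-of-the-envelope check on that number, using Hamilton's rule (the inclusive-fitness arithmetic John alluded to): self-sacrifice pays, in inclusive-fitness terms, when rB > C. Taking the cost C as one life (my own) and r = 1/2 for a full brother, saving n brothers gives

    n × (1/2) > 1,  i.e.,  n ≥ 3,

so two brothers merely break even. That is the point of the quip usually attributed to Haldane about laying down his life for two brothers or eight cousins: for first cousins r = 1/8, and 8 × 1/8 = 1 is again the break-even point.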

Pierce R. Butler · 29 October 2015

The first time I heard the trolley car problem, it had already been (further) elaborated to specify that "you" were physically too small to make a difference by throwing yourself on the rails, but were somehow strong enough to knock down a conveniently located fat man with enough bulk to save the day.

In terms of idealistic armchair morality, self-sacrificing heroes always "win".

Marilyn · 31 October 2015

DS said: Everyone should be aware that they are evolved, not built.
When they put the VW software right, software that was programmed by a person, will it have evolved, or will they just have fixed the build?

DS · 31 October 2015

Marilyn said:
DS said: Everyone should be aware that they are evolved, not built.
When they put the VW software right, software that was programmed by a person, will it have evolved, or will they just have fixed the build?
They will have evolved, the software not so much.

harold · 1 November 2015

Matt Young said:

What is my degree of relatedness to the people who might be killed? Clearly I must act to maximize my inclusive fitness. Actually, we must consider both relatedness and expected future reproductive output (counting any remaining parental care). There, back on topic for Panda’s Thumb.

I described some objective research (not mine) on the trolley problem here. See Slides 12 ff. and especially 15-18. The gist: People's willingness to throw the switch decreases with their relatedness to the "target" and increases with the age of the target. I have unaccountably never reported this material on PT. See April Bleske-Rechek, Lyndsay A. Nelson, Jonathan P. Baker, Mark W. Remiker, Sarah J. Brandt, “Evolution and the Trolley Problem,” Journal of Social, Evolutionary, and Cultural Psychology 4 (3), 115-127 (2010) for details.
My answer to the car problem is that the only legally and ethically feasible way to do it is to make it the driver's choice. The default must be to operate to the benefit of the occupants. Car owners who wish can sign a release, allowing the car to use "greater good" algorithms in rare situations. It would be madness to make "greater good" the default, because it would cause most people to reject use of the car. Telling people to be more altruistic than they wish to be would eliminate all the common good that adoption of the cars would bring. The car was manufactured for use as private property. The implied contract should be that it is designed, within economic constraints, to prioritize occupants' safety. The operator may modify it to make it more self-sacrificing if they wish.

This is 100% the way it already works: when driving one's own car, if a person legally maneuvers to evade a crash rather than sacrificing themselves to save others, that is not a crime. There is no requirement that a human-driven car be operated by "greater good" methods. Why should an automatically driven car be operated under a different set of rules?

Sacrificing oneself so that others may live is a noble gesture which may be chosen, not a basic legal or ethical requirement. The soldier who jumps away from a grenade is not punished. The soldier who hurls themself on it may be posthumously rewarded. But there is no requirement that grenades be jumped on.
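For concreteness, this default-plus-release scheme could be expressed as a one-bit configuration. A minimal sketch follows; the class and field names are invented for illustration, not taken from any real vehicle's software:

from enum import Enum

class CrashPolicy(Enum):
    OCCUPANT_PRIORITY = "occupant_priority"  # the proposed default
    GREATER_GOOD = "greater_good"            # opt-in only

class VehicleConfig:
    def __init__(self):
        # The default mirrors how human-driven cars already work:
        # drive legally and, beyond that, protect the occupants.
        self.crash_policy = CrashPolicy.OCCUPANT_PRIORITY

    def opt_into_greater_good(self, release_signed: bool) -> None:
        # Owners who wish can enable "greater good" behavior, but only
        # by explicitly signing a release; the default never changes silently.
        if not release_signed:
            raise ValueError("greater-good mode requires a signed release")
        self.crash_policy = CrashPolicy.GREATER_GOOD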

Kevin B · 2 November 2015

harold said:
Matt Young said:

What is my degree of relatedness to the people who might be killed? Clearly I must act to maximize my inclusive fitness. Actually, we must consider both relatedness and expected future reproductive output (counting any remaining parental care). There, back on topic for Panda’s Thumb.

I described some objective research (not mine) on the trolley problem here. See Slides 12 ff. and especially 15-18. The gist: People's willingness to throw the switch decreases with their relatedness to the "target" and increases with the age of the target. I have unaccountably never reported this material on PT. See April Bleske-Rechek, Lyndsay A. Nelson, Jonathan P. Baker, Mark W. Remiker, Sarah J. Brandt, “Evolution and the Trolley Problem,” Journal of Social, Evolutionary, and Cultural Psychology 4 (3), 115-127 (2010) for details.
My answer to the car problem is that the only legally and ethically feasible way to do it is to make it the driver's choice. The default must be to operate to the benefit of the occupants. Car owners who wish can sign a release, allowing the car to use "greater good" algorithms in rare situations. It would be madness to make "greater good" the default, because it would cause most people to reject use of the car. Telling people to be more altruistic than they wish to be would eliminate all the common good that adoption of the cars would bring. The car was manufactured for use as private property. The implied contract should be that it is designed, within economic constraints, to prioritize occupants' safety. The operator may modify it to make it more self-sacrificing if they wish.
This does sound rather like Utilitarianism. May I propose this as a possible proof-of-concept prototype? http://unrealfacts.com/jeremy-benthams-corpse-attended-ucl-board-meeting/

blueindy1 · 2 November 2015

As long as we're engaging in a kind of "Kobayashi Maru" no-win scenario, simply program the vehicle, upon entering such a situation, to eject the driver and any passengers upwards, away from the vehicle; upon ejection, the car swiftly self-destructs, directing as much of the blast as possible downward and away from anybody in danger.

There might still be some injuries and even some death, but nothing like the wholesale destruction of a full vehicle impact on a crowd of people.

Matt Young · 2 November 2015

My answer to the car problem is that the only legally and ethically feasible way to do it is to make it the driver’s choice. The default must be to operate to the benefit of the occupants. Car owners who wish can sign a release, allowing the car to use “greater good” algorithms in rare situations. It would be madness to make “greater good” the default, because it would cause most people to reject use of the car. Telling people to be more altruistic than they wish to be would eliminate all the common good that adoption of the cars would bring. The car was manufactured for use as private property. The implied contract should be that it is designed, within economic constraints, to prioritize occupants’ safety. The operator may modify it to make it more self-sacrificing if they wish.

This does sound rather like Utilitarianism.

I do not know whether it sounds like utilitarianism, but I think we are missing the point that self-driving cars are being designed right now, and someone is de facto providing an answer to the extended trolley problem. What is that answer, and what should it be?
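To see how that answer gets given de facto, here is a minimal sketch of the scoring step in a hypothetical emergency-maneuver planner; every name and number below is invented for illustration. Whatever values the engineers assign to the two weights are the answer, whether or not anyone calls it one:

# Hypothetical sketch only: the weights below ARE the ethical decision,
# made by whoever types them in.
W_OCCUPANT = 2.0   # weight on expected harm to occupants (occupant-priority)
W_BYSTANDER = 1.0  # weight on expected harm to bystanders

def expected_harm(option):
    # Score one candidate emergency maneuver.
    return W_OCCUPANT * option["occupant"] + W_BYSTANDER * option["bystander"]

# The extended trolley problem, as the planner would see it
# (numbers are expected fatalities, invented for illustration):
options = [
    {"name": "do nothing",      "occupant": 0.0, "bystander": 5.0},
    {"name": "swerve into one", "occupant": 0.0, "bystander": 1.0},
    {"name": "swerve off road", "occupant": 1.0, "bystander": 0.0},
]
best = min(options, key=expected_harm)
print(best["name"])  # these weights pick "swerve into one"; equal weights
                     # leave a tie, broken here only by list order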

harold · 2 November 2015

Kevin B said:
harold said:
Matt Young said:

What is my degree of relatedness to the people who might be killed? Clearly I must act to maximize my inclusive fitness. Actually, we must consider both relatedness and expected future reproductive output (counting any remaining parental care). There, back on topic for Panda’s Thumb.

I described some objective research (not mine) on the trolley problem here. See Slides 12 ff. and especially 15-18. The gist: People's willingness to throw the switch decreases with their relatedness to the "target" and increases with the age of the target. I have unaccountably never reported this material on PT. See April Bleske-Rechek, Lyndsay A. Nelson, Jonathan P. Baker, Mark W. Remiker, Sarah J. Brandt, “Evolution and the Trolley Problem,” Journal of Social, Evolutionary, and Cultural Psychology 4 (3), 115-127 (2010) for details.
My answer to the car problem is that the only legally and ethically feasible way to do it is to make it the driver's choice. The default must be to operate to the benefit of the occupants. Car owners who wish can sign a release, allowing the car to use "greater good" algorithms in rare situations. It would be madness to make "greater good" the default, because it would cause most people to reject use of the car. Telling people to be more altruistic than they wish to be would eliminate all the common good that adoption of the cars would bring. The car was manufactured for use as private property. The implied contract should be that it is designed, within economic constraints, to prioritize occupants' safety. The operator may modify it to make it more self-sacrificing if they wish.
This does sound rather like Utilitarianism. May I propose this as a possible proof-of-concept prototype? http://unrealfacts.com/jeremy-benthams-corpse-attended-ucl-board-meeting/
I don't know whether it's "utilitarianism" or not, and I certainly would not accept a standard that "something is always wrong if you can label it (whatever)ism." It's certainly pragmatic.

There are two levels of goodness: the level that's required by law, and the level that goes beyond that, which can be aspired to and is defined by the individual. For example, it would be noble of me to offer to donate a kidney to whoever has a sufficient genetic match and is on the waiting list. It would be nice of me to donate a kidney if someone I know needs one. The law does not require me to donate a kidney at all.

We already have a standard for operating a car: follow the law so that you don't endanger other drivers. Beyond that, any self-sacrifice is a matter of personal choice.

An automatic car is not operating itself any more than a blender is operating itself because it keeps blending while turned on. A human sets the destination and turns on the car, and can, I believe, take over the driving if they want. It's a machine operated by a human. The car has no consciousness whatsoever. There is no need for a different standard for automatically steering cars, any more than there is a need for a different standard for cars with automatic transmissions. The standard is, the operator must operate the car within the bounds of the law, and beyond that required level of social altruism, it is the choice of the operator whether or not to be more altruistic.

I'm an altruistic driver; I let people change lanes, and I don't dangerously cut in front of people to get to a Costco gas pump one minute sooner, as I have observed others do (the latter may have been technically illegal). I would say that, rather than utilitarian, I'm existential. There is the law; you can break it, but you may be punished. But it is a handy guide for decision making. Beyond that, it's between you and the universe.

Matt Young · 2 November 2015

An automatic car is not operating itself any more than a blender is operating itself because it keeps blending while turned on. A human sets the destination and turns on the car, and can, I believe, take over the driving if they want. It’s a machine operated by a human. The car has no consciousness whatsoever. There is no need for a different standard for automatically steering cars, any more than there is a need for a different standard for cars with automatic transmissions. The standard is, the operator must operate the car within the bounds of the law, and beyond that required level of social altruism, it is the choice of the operator whether or not to be more altruistic.

No. The operator may not be fast enough to override the control system or may decide not to do so. In such a case, the car is programmed to behave -- how? It is an important question.

Just Bob · 2 November 2015

Legal worms in the can: Will the "driver" of a self-driving car be liable for any damages, just as though he were driving 'hands-on'? IOW, must he be as alert to road conditions, hazards, and the actions of other drivers and pedestrians as he would be if he were driving himself? Or will he be allowed some 'slack', some allowance for inattention while the car is driving itself? If he is expected, legally required, to be as alert and instantly ready to take over if danger looms, then it would seem he would be taking on a greater risk by going 'hands-off', thus allowing his attention to wander from the immediate mechanics of driving.

Alternatively, if statistics show automatic control is safer, will a driver who maintains control himself, and gets into an accident, be held liable for NOT going automatic?

harold · 2 November 2015

Matt Young said:

An automatic car is not operating itself any more than a blender is operating itself because it keeps blending while turned on. A human sets the destination and turns on the car, and can, I believe, take over the driving if they want. It’s a machine operated by a human. The car has no consciousness whatsoever. There is no need for a different standard for automatically steering cars, any more than there is a need for a different standard for cars with automatic transmissions. The standard is, the operator must operate the car within the bounds of the law, and beyond that required level of social altruism, it is the choice of the operator whether or not to be more altruistic.

No. The operator may not be fast enough to override the control system or may decide not to do so. In such a case, the car is programmed to behave -- how? It is an important question.
I agree that it's an important question; I am just proposing an answer: start with the default that the car drives legally but protects its own operator. If that becomes a problem, the system can be reviewed. I doubt that it will be a problem. I'm not a laissez-faire ideologue, but then again, neither was Adam Smith. This is probably a good example of an "invisible hand" system: program the cars to protect the driver, and mutual self-interest will probably result in a maximally, or at least highly acceptably, safe system.

harold · 2 November 2015

harold said:
Matt Young said:

An automatic car is not operating itself any more than a blender is operating itself because it keeps blending while turned on. A human sets the destination and turns on the car, and can, I believe, take over the driving if they want. It’s a machine operated by a human. The car has no consciousness whatsoever. There is no need for a different standard for automatically steering cars, any more than there is a need for a different standard for cars with automatic transmissions. The standard is, the operator must operate the car within the bounds of the law, and beyond that required level of social altruism, it is the choice of the operator whether or not to be more altruistic.

No. The operator may not be fast enough to override the control system or may decide not to do so. In such a case, the car is programmed to behave -- how? It is an important question.
I agree that it's an important question; I am just proposing an answer: start with the default that the car drives legally but protects its own operator. If that becomes a problem, the system can be reviewed. I doubt that it will be a problem. I'm not a laissez-faire ideologue, but then again, neither was Adam Smith. This is probably a good example of an "invisible hand" system: program the cars to protect the driver, and mutual self-interest will probably result in a maximally, or at least highly acceptably, safe system.
Let's make the question more specific. An automatically driving car with a single passenger has somehow gotten onto railroad tracks and must exit or be hit by a massive freight train. An adult couple on a tandem bicycle is passing in front of the car. The train crew are extremely unlikely to be injured in a collision with the car, which is virtually guaranteed to kill the car's single occupant, but the delay of the train, if the car is struck, will have economic impact. Severe trauma to the couple on the tandem bicycle is likely if the car moves away from the train (it can't back up). There is no perfect answer, but a car that is programmed to get the fuck away from the train, although in this case possibly taking two lives while saving one, is most likely the type of car that causes the fewest problems in the long run. My main point is that the problem isn't much different if it's the driver who has to decide to hit the gas or die a noble death.
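For what it's worth, this railroad scenario can be dropped into the same kind of toy expected-harm scoring sketched earlier in the thread; every number here is invented:

# Hypothetical encoding of the railroad scenario; all figures invented.
W_OCCUPANT, W_BYSTANDER = 2.0, 1.0  # occupant-priority weighting

options = [
    # stay on the tracks: near-certain death of the lone occupant,
    # plus some economic cost from the delayed train (ignored here)
    {"name": "stay on tracks", "occupant": 0.99, "bystander": 0.0},
    # pull forward: occupant safe, severe trauma likely for both cyclists
    {"name": "pull forward",   "occupant": 0.0,  "bystander": 1.6},
]

best = min(options, key=lambda o: W_OCCUPANT * o["occupant"]
                                  + W_BYSTANDER * o["bystander"])
print(best["name"])
# Occupant-priority weights (1.98 vs 1.6) pull the car forward, as argued
# above; strictly equal weights (0.99 vs 1.6) would leave it on the tracks.
# The choice of weights, not the code, decides.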

Marilyn · 3 November 2015

Perhaps there should be more than one option to choose from at the start of the journey. If you are by yourself, choose a switch of your preference; if you have more people in the car, choose an alternative switch. Hopefully something of more importance than just a mood-of-the-day switch. But legal and insurance people should play their part. There is also the possibility that the smaller number of people swerved into would not be fatally harmed, whereas the people in the car would be if evasive manoeuvres weren't made. These cars should be made safe enough for a person even to sleep in as they are travelling, for them to be futuristic and sci-fi.

eric · 3 November 2015

Marilyn said: Perhaps there should be more than one option to choose from at the start of the journey. If you are by yourself, choose a switch of your preference; if you have more people in the car, choose an alternative switch.
Heh, on a pragmatic basis I expect this would result in practically everyone leaving the switch in Harold's suggested position ('all else being equal, the occupants get priority') all the time.
There is also the possibility that the smaller number of people swerved into would not be fatally harmed, whereas the people in the car would be if evasive manoeuvres weren't made.
AIUI, nope. This is intended as a philosophical problem in which all the parameters are known exactly - your options do not include a chance that some won't die, nor blueindy1's alternate programming. The point of the exercise is to have you think about which of these choices one ought to make, not to think "I don't like this set of choices, so I'll invent a different one." In this respect, the problem is completely artificial and unrealistic. And that's a flaw worth pointing out. But once we've pointed it out, the philosopher can still ask the conditional question: "Okay, but given this artificial and unrealistic situation, what should we do?"

DS · 3 November 2015

I think you should read the Bible, trying to find the answer, until it is too late to do anything. What? That seems to be the response to climate change.

Marilyn · 5 November 2015

They seem to be making more headway with the environmental issue if they make these cars solar powered or use hydrogen cells. If they are safe, apart from this last problem that has to be sorted out accurately, they have made good progress with that too. If pedestrians were to carry a bleeper signal for the car to pick up, so that it would know if there are pedestrians around a corner, that would help.

DS · 5 November 2015

Yes, if only we didn't have to make any moral choices. Life would be so much simpler. We certainly wouldn't need religion.

Kevin Kirkpatrick · 5 November 2015

My Freshman year in college. My roommate leaves for class. 15 minutes go by. The prior night, my roommate had received a "care package" from his parents, including a large bag of M&Ms, which he had put in a bowl on his desk.

My line of reasoning (okay, not really a line so much as cumulative series of justifications):
1) His class is 50 minutes long, and on the other side of campus.
2) He probably has thousands of M&Ms there - he won't miss a few.
3) I'm hungry, but studying for a quiz that's in < 1 hour, and don't have time to get anything from the rec room.
4) I'll get a crappy grade if I'm trying to study on a sugar low.
5) The benefit I'd receive from a handful of M&Ms far outweighs any harm he'd receive.
6) I gave him a pen two days ago; a handful of M&Ms is about the same value... in a way, this balances out.

Notably missing from the list:
* Any indication that he's cool with me helping myself to his shit.

After a couple minutes of hemming and hawing, and checking/double-checking my math on the odds, "knowing" I'd get away with it, I made my choice.

So I stand up, walk over, and grab a *small* handful of M&Ms. Literally in the 1-2 second "crucial interval" of "clandestine scoopage", the door flies open, and my roommate rushes in to grab a forgotten homework assignment due that day. Only to see me, with deer-in-headlights look on my face, stealing his M&Ms.

Suffice it to say, he was in too much of a hurry to talk at the time. My hurried "explanation" was, at best, incoherent. The subject never came up again, directly, but we had a much colder and more distrustful relationship from that point forward. His friends became likewise cold and distant. Unbeknownst to me, he requested a transfer at the semester, and my second semester was spent sans roommate, with a dorm-wide reputation: untrustworthy; will go through and steal your shit.

Time heals wounds and, thankfully, reputations. I can't say I felt any impact of this event beyond my Freshman year. But I took away a deep moral lesson: there are no certainties. There is no knowing. My moral code, you might say, grew 10 times that year. To this day, the moral choices I make are *never* allowed to factor in, "... assuming I won't get caught."

Of course, my moral code is foundationally empathy-based: most of the M&M rationalizations were empathy-based (no harm, no foul); hence my roommate caught me with my hand in the candy jar, not sifting through his wallet. But the matured moral code I now live by [i.e., the moral code which led me to cut short a "free" version of Jurassic Park my 10-year-old found online last night, with a stern lecture, removal of $3 from his allowance, and a $2.99 Amazon.com rental] has the fundamental principle "You can't know everything; you can't know you'll 'get away' with something" baked into it. If you ask me, "If you knew you'd get away with it... would you do X?", what I really feel I'm answering is, "If you had a moral code fundamentally different from the one you have now, would you do X?"

Not much different than, "Imagine you were a Nazi soldier in 1941 and honestly believed Jews were inhuman monsters intent on wiping you and everyone you love off the planet - would you lead a group of them into the gas chamber?" WTF would any "Yes" or "No" answer to that tell you about me? Beyond, "how lucidly can you assess hypothetical conditions?"

The "trolley problem" and uncountable philosophical/moral "puzzles" of that flavor have the same pitfall - generally speaking the underlying assumptions undercut the very same assumptions that are intrinsic to the aspect of my personality that's being probed...

Marilyn · 7 November 2015

Though with hydrogen cells there may be more fog, leading to more clouds and more rain; we could end up needing an ark...