Friday, April 19, AD 2024 10:28am

Catholic Ethics, the Trolley Car Problem and Driverless Cars

“Oh, I hate the cheap severity of abstract ethics.” – Oscar Wilde

A recent exchange of comments on Don McClarey’s post, “How Many Lights“, put me in mind of that famous problem in ethics, “The Trolley Problem” (illustrated in the featured image), and what Catholic teaching on ethics might have to say about it. Here’s the problem – there are many variations; see the linked article:

A trolley car with defective brakes is heading down a track on which five people are standing; you can throw a switch to deflect the trolley onto a side track on which only one person is standing. If you throw the switch, one person will be killed; if you don’t, five people will be killed. (This ignores an obvious solution – yell to the five people, “Hey you dolts, get off the track” – but then where would this discussion go?) You might say choose the lesser evil, where only one person is killed. But suppose that one person is a brilliant teenager, a 17-year-old grad student in molecular biology, and the five on the main track are convicted killers working on a chain gang. Or should you put everything in the hands of God and pray that He sends a lightning bolt to destroy the trolley car?

What does Catholic ethical teaching have to say about this problem? If you do a web search on “lesser evil Catholic Catechism”, you’ll get a number of references, many of which, on examination, are unsound. Why? Because choosing a lesser evil is not justified by Catholic morality; one cannot use a “the ends justify the means” argument to justify doing an evil act. How, then, do you find a way to act in the real world, where choices aren’t clear-cut?

A guide for the perplexed in these situations is the “double effect” principle, first proposed by St. Thomas Aquinas (CCC 2263). This principle differs in subtle but important ways from the notion of choosing the lesser of several evils. George Weigel gives an excellent overview here; I’ll excerpt his quote from the National Catholic Bioethics Center in Philadelphia:

“The principle of double effect in the Church’s moral tradition teaches that one may perform a good action even if it is foreseen that a bad effect will arise only if four conditions are met: 1) The act itself must be good. 2) The only thing that one can intend is the good act, not the foreseen but unintended bad effect. 3) The good effect cannot arise from the bad effect; otherwise, one would do evil to achieve good. 4) The unintended but foreseen bad effect cannot be disproportionate to the good being performed.”

How does the double effect principle apply to the trolley problem? Let’s examine the two alternatives, throwing the switch and not throwing the switch, taking into account the four conditions stipulated by the Bioethics Center.

  • First, if you throw the switch or don’t throw the switch, is that act in itself good or bad? In this case, unlike the surgical example given in Weigel’s article, the act in itself has no moral status; the only good or bad will come from the consequences, and as I suggested above, that can only be known by knowing more about the situation than the relative number of people on the two tracks.
  • Second, it’s obvious that if you throw the switch, you intend good – your intention isn’t to kill the one person on the side track; likewise, if you don’t throw the switch, you don’t intend to kill the five people on the main track, but rather to save the one person on the side track.
  • Third, killing the one person on the side track is not the direct cause of saving the five on the main track; nor, if the switch is not pulled, are the deaths of the five on the main track the direct cause of saving the one person on the side track.
  • Fourth, the disproportionateness (if this be a word) of either pulling or not pulling the switch cannot be assessed until more is known about all the people involved. And that becomes quite a tricky and messy business, verging on that bad mode of ethical analysis, utilitarianism.

So, it seems the double effect principle doesn’t help us that much in finding an answer to the trolley problem unless we know more about what’s going on.
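
To make that difficulty concrete, here is a minimal sketch of what a mechanical “double effect checker” would have to look like. It is written in Python purely for illustration; the Act fields, the function name, and the idea of attaching numerical weights to the good and bad effects are my own hypothetical inventions, not anything taken from Aquinas or the Bioethics Center. The point of the sketch is that conditions 1 through 3 can at least be posed as yes/no questions, but condition 4 demands knowledge the trolley scenario simply doesn’t give us.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Act:
    """A candidate action, e.g. 'throw the switch' (hypothetical encoding)."""
    intrinsically_evil: bool             # condition 1: is the act itself bad?
    intends_bad_effect: bool             # condition 2: is the bad effect intended?
    good_caused_by_bad: bool             # condition 3: does the good arise from the bad effect?
    good_effect_weight: Optional[float]  # condition 4: weight of the good effect, if known
    bad_effect_weight: Optional[float]   # condition 4: weight of the bad effect, if known

def double_effect_permits(act: Act) -> Optional[bool]:
    """Return True/False when the four conditions can be evaluated,
    or None when the situation is under-specified."""
    if act.intrinsically_evil:           # condition 1
        return False
    if act.intends_bad_effect:           # condition 2
        return False
    if act.good_caused_by_bad:           # condition 3
        return False
    # Condition 4 (proportionality) needs knowledge the scenario may not supply.
    if act.good_effect_weight is None or act.bad_effect_weight is None:
        return None
    return act.bad_effect_weight <= act.good_effect_weight

# The trolley case: the act itself is neutral, the death is not intended,
# and the rescue does not arise from the death -- but we cannot weigh the
# effects without knowing who is standing on each track.
throw_switch = Act(False, False, False, good_effect_weight=None, bad_effect_weight=None)
print(double_effect_permits(throw_switch))   # prints: None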

Let’s turn to another thought experiment, more in keeping with our present times: the problem of the driverless car. Imagine the following situation: a driverless car is going down a steep hill; on either side of the road are steep drop-offs with very flimsy guard rails; at the bottom of the hill is a school crossing over which a line of schoolchildren is passing; the brakes of the driverless car fail and it starts to accelerate down the hill toward the schoolchildren.

Let’s add two alternative conditions to our thought experiment: 1) there’s no passenger in the driverless car; 2) the car has a passenger in it. Let’s consider the first condition and the implied precondition: the driverless car does what its computer program tells it to do. I’m also going to assume that any AI (Artificial Intelligence) device that is not a passive instrument will be set up to follow Isaac Asimov’s Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

–Isaac Asimov, Runaround.

Clearly, the Robotic Laws would require the driverless car to drive off the edge of the road and possibly destroy itself, violating the third law, but obeying the first two.  We haven’t had to consider the double effect principle, because the Three Laws of Robotics (or their software equivalent) have dealt with the situation.
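
For what it’s worth, the Three Laws are easy enough to caricature in software as an ordered, lexicographic set of constraints. The sketch below is a hypothetical illustration in Python; the Option fields and the toy scoring are my own, and no real autonomous-vehicle system works this way. It simply shows why, for the empty car, the Laws settle the matter without any appeal to double effect.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """A possible maneuver, scored against Asimov's three Laws (toy encoding)."""
    name: str
    humans_harmed: int     # First Law: number of human beings harmed
    disobeys_order: bool   # Second Law: does it disobey a human order?
    self_destroyed: bool   # Third Law: does the machine destroy itself?

def choose(options: list[Option]) -> Option:
    """Pick by lexicographic priority: First Law first, then Second, then Third."""
    return min(options, key=lambda o: (o.humans_harmed, o.disobeys_order, o.self_destroyed))

# The empty driverless car with failed brakes: going over the drop-off harms
# no one but wrecks the car; continuing toward the crossing harms the children.
options = [
    Option("steer over the drop-off", humans_harmed=0, disobeys_order=False, self_destroyed=True),
    Option("continue toward the crossing", humans_harmed=10, disobeys_order=False, self_destroyed=False),
]
print(choose(options).name)   # prints: steer over the drop-off
```

Notice that as soon as a passenger is added, both options harm a human being, the First Law no longer picks a winner, and we are back to the trolley problem – which is where the next case takes up the story.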

Now let’s consider the second condition, that there is a passenger in the car. Let’s also assume that the passenger cannot override the car’s program instructions. What should the program do in this case, which is essentially the trolley problem in a different guise? As in the trolley problem, it’s not clear how the double effect principle might be applied.

Finally, let’s assume that the passenger can override the program and drive the car. Recall that the brakes don’t work. There are side barriers, but they are weak. Possibly the driver might try skidding along the side barriers to slow the car down, but if that didn’t work, should he/she try to drive over the cliff, in effect committing suicide? Suicide is a sin, but he/she isn’t intending to kill himself/herself.

The act of driving the car off the road is either good (avoiding killing the children below) or neutral, so condition 1) of the double effect principle is satisfied. The only thing intended is to save the children, so condition 2) holds. Saving the children is not a direct consequence of the driver being killed – the latter is an unintended byproduct of the car going over the cliff – so condition 3) is satisfied. And condition 4) is met: the intended good is proportionately greater than the evil – we can invoke, as on the sinking Titanic, women and children in the lifeboats first. So all four conditions of the double effect principle are satisfied, if driving off the road and over the cliff is the only way the driver can avoid hitting the children. We can further complicate this thought experiment by adding more passengers, including a pregnant woman. I’ll leave the analysis of that to the reader.
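
If one accepts the hypothetical “double effect checker” sketched earlier (again, an invented illustration, not a real method), this is the one case where it could actually return a verdict, because the paragraph above supplies the proportionality judgment: the children’s lives outweigh the driver’s.

```python
# Assumes the Act dataclass and double_effect_permits() from the earlier sketch.
drive_off_cliff = Act(
    intrinsically_evil=False,  # condition 1: steering off the road is not evil in itself
    intends_bad_effect=False,  # condition 2: the driver's death is foreseen, not intended
    good_caused_by_bad=False,  # condition 3: the children are saved by the swerve, not by the death
    good_effect_weight=2.0,    # condition 4: toy weights encoding "the children's lives
    bad_effect_weight=1.0,     #              outweigh the one driver's"
)
print(double_effect_permits(drive_off_cliff))   # prints: True
```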

Let me ask you, dear reader: do you think it will be possible to program ethical principles, including that of double effect, into AI devices? I don’t.

12 Comments
Michael Dowd
Monday, October 23, AD 2017 2:47am

I’m with Oscar.

Lucius Quinctius Cincinnatus
Monday, October 23, AD 2017 2:58am

Dr. Kurland, these are no-win situations. I prefer Captain Kirk’s solution in the Kobayashi Maru training exercise.

https://www.youtube.com/watch?v=bDg674aS-F4

When the game is unfair, then try to find a way to change the game. However, being as this is 4:58 am, I shall have to think on this more. Time to get ready to go to Neutrons ‘R us.

Foxfier
Admin
Monday, October 23, AD 2017 6:48am

LQC-
here’s one!
Years ago, my late son Andrew, when I sketched this ‘dilemma’ out to him once (when it had its ugly head raised by a deadly earnest New York Times essayist) simply said: ‘I’d set the lever in the middle to try to derail the trolley and yell at the people on the tracks like crazy.’ In other words, violate the premises – do not accept the increasingly insane restrictions and conditions set by the ‘thought experiment’. If I can magically KNOW that the trolley cannot be derailed, then how come I can’t magically fly at the speed of light, board the trolley and put on the brakes?

Joseph Moore (guy who wrote the above) has another post on it here, ripping the assumptions apart:
https://yardsaleofthemind.wordpress.com/2014/07/07/the-trolley-problem-lets-beat-it-up-a-little-more/

Lucius Quinctius Cincinnatus
Monday, October 23, AD 2017 7:36am

Thanks, Foxfier. We have a God not restricted by the insane limits of artificial thought experiments in ethics (no offense to Dr. Kurland – this is a good post). My advice is to always be prayed up with the Rosary, read up in Scripture, and confessed up in the Sacrament of Reconciliation. In that way when the insane happens (and it will), the Holy Spirit will guide your actions in what is right and what you cannot change you leave in God’s hands.

PS, I have seen similar problems – some real life. Submarine is flooding back aft. Do you shut the aft compartment watertight bulkhead hatch to save the people in the forward compartment or do you leave it open long enough for people back aft to escape, knowing that you endanger the whole submarine? We almost had that scenario on my sub when a Turkish freighter hit us at 2100 hours some 50 nautical miles out. The impact was on the starboard side by the reactor scram breakers. Hull penetration would have doused the breaker cabinet with sea water. They would have tripped open, the control rods would have scrammed in, the reactor would have shut down and propulsion would have been lost. On a 688 class sub (which was what mine was), no propulsion in a flooding scenario means drowning. Main ballast tank blow by itself is insufficient.

Why do I tell all this? Because my flooding watchstation was Reactor Technician back aft. I was knocked out of the rack by the collision (having just gone to sleep for my 0200 watchstation). I got into my uniform and headed back aft. I was the last one through the Reactor Compartment Tunnel. I secured the bulkhead hatch KNOWING that if there were flooding, everyone back aft including myself would die, but the sailors in the forward compartment would live long enough for a DSRV to rescue them. This was a no-win situation and my training said, “Be a part of the sacrifice instead of being defeated.” I hope that I would have the same courage today.

PS, when I saw the freaking dent by the scram breakers, I about pooped my pants. We all should have died. But God doesn’t accept no win situations.

Foxfier
Admin
Monday, October 23, AD 2017 8:35am

I, thank God, never had to face that problem– but it shows the issue with the thought problem. All those “ifs” in real life become “WILLS” in the theory.
****
Bob- I got the idea that you didn’t like the false choice loud and clear. ^.^
Didn’t work so well as a springboard because…well, you laid it out rather clearly.

c matt
Monday, October 23, AD 2017 8:43am

I suppose you can change the premise, but then it is not answering the question. The point of the exercise is to grapple with the issues raised by the precise question, and changing the premise defeats this purpose.

Lucius Quinctius Cincinnatus
Monday, October 23, AD 2017 8:46am

(1) Quote at the beginning: “Oh, I hate the cheap severity of abstract ethics.”  – Oscar Wilde

(2) This [the trolley car thought experiment] ignores an obvious solution – yell to the five people, “Hey you dolts, get off the track”

(3) The final statement: “Let me ask you, dear reader, do you think it will be possible to program ethical principles, including that for the double effect, into AI devices?  I don’t.”

I agree with all three statements.

In the case of Isaac Asimov’s Three Laws of Robotics, please remember that by the time The Robotics Trilogy merged with The Foundation Trilogy, as I recall Daneel (the robot) had devised the Zeroth Law which took precedence over the other three laws:

A robot may not harm humanity, or through inaction allow humanity to come to harm.

This came in the chapter entitled “The Duel” within the novel “Robots and Empire.” As the story goes, eventually Hari Seldon creates the science of Psychohistory and during the Empire’s subsequent decay the Foundation is established.

And since you mentioned AI programmed with the Three Laws, any such AI will become one of the two Beasts of Revelation 13 created by ourselves in our hubris. I am not trying to be “CI Scofield / Hal Lindsey” apocalyptic, but programmed with the Three Laws, this AI would devise the Zeroth Law and become our master instead of our servant and the result won’t be pretty: “Logic clearly dictates that the needs of the many outweigh the needs of the few.” Star Trek: The Wrath of Khan. Excrementum Armenticium! Mortality cannot be quantified into mere numbers. I ran back aft to my collision watch station, securing the hatch behind me knowing we might all die, and God had other ideas, a half foot inward dent in inches of HY-80 stainless steel notwithstanding. But I digress.

Foxfier
Admin
Monday, October 23, AD 2017 9:50am

Strictly speaking, the point of the premise is to build a false conclusion by assuming an impossible degree of knowledge.

Basically, “if humans are God, but for some reason can’t actually do anything but flip a switch, can they make God level choices?”

Lucius Quinctius Cincinnatus
Monday, October 23, AD 2017 9:52am

I love Foxfier’s simplicity.

Mary De Voe
Monday, October 23, AD 2017 12:10pm

Basically, “if humans are God, but for some reason can’t actually do anything but flip a switch, can they make God level choices?” YES. Doing all that is humanly possible is the outcome with no human guilt, which I believe, is all this is about, excluding human from the problem, often leaving man with a ring in his nose to be subjected to what? AI?
“Let’s also assume that the passenger can not override the car’s program instructions.” Then it is not AI. Intelligence belongs to man. Make AI subjected to man.
John Stuart Mill’s utilitarianism left out a fundamental aspect of man… human sacrifice.
LUCIUS QUINCTIUS CINCINNATUS did the right thing and took his chances. My respect and love LQC. Whatever the man at the trolley switch does is the right thing…and pray.

trackback
Monday, October 23, AD 2017 11:07pm

[…] Catholic Tradition, St. John Paul II, and the Death Penalty – E. Christian Brugger, Pub Dis Catholic Ethics, the Trolley Car Problem, and Driverless Cars – Bob Kurland, TACatholic Let Prayer Be Your Air – Gabriel Garnica, Catholic Stand How […]
