Everything’s A Trolley Problem … Or is It?

A while back, Richard Carrier wrote another of his posts on philosophy, strongly interpreting it in light of his own personal views, as he has done before for morality and consciousness.  As usual, his main argument ends up being that everything reduces to his specific worldview and interests, even though most of philosophy wouldn’t agree with that.  Here, he starts from the Trolley Problem invented by Philippa Foot — he loves to cite female philosophers so that he can chide people for ignoring female philosophers — and ends up reducing it to his own position, ironically in a way that doesn’t align with what she used it for and that ends up contradicting his own characterization of it.

Most people have probably heard of the Trolley Problem by now, but in essence it’s a philosophical thought experiment where someone is standing by a trolley track, next to a switch that will divert an oncoming trolley onto another track.  If the switch is not thrown, the trolley will barrel down the track and kill five people who cannot get out of the way in time (the most common version has them tied to the track by someone who is trying to kill them).  But if the switch is thrown, the trolley will barrel down the new track and kill one person who cannot get out of the way.  The question is:  do you think that it’s morally permissible, or even demanded, to activate the switch and kill that one person in order to save the five people on the other track?

While looking it up, it doesn’t seem like this was the original intent, but the way it is most commonly used in philosophical circles — and how I was introduced to it — is as a way to test our intuitions about consequentialism vs deontology.  If you would argue that someone ought to activate the switch because it would save five people at the cost of one, then you’re using consequentialist reasoning:  the consequences of activating the switch are better, which justifies doing it even though you’d kill someone.  If you say that it isn’t morally permissible, then it looks like you’re following some kind of absolute rule that says you are not allowed to take an action that kills someone even if doing so would save the lives of more people.  In terms of specific views, Utilitarians should always argue that you ought to activate the switch, on the basis that outside of special circumstances doing so will increase utility, while Kantians would probably have to argue that it isn’t morally permissible, because you would be using the person on the other track merely as a means to the end of saving the other five.  You could also argue that it isn’t permissible on the basis of moral responsibility, as I actually did when deciding what I myself should do, following my interpretation of Stoicism:  I am not responsible for the actions of someone else, so the person who set up that situation in order to kill those people would be morally responsible for their deaths if I left things alone, whereas if I activated the switch that would be something I chose to do and so would be my responsibility, making me morally responsible for killing that person, and so I would be committing murder.  You can ask how that reasoning would apply to an accident, but since I’m also not responsible for the whims of fate, the same reasoning applies.
Now, this might seem a bit harsh, because it can sound like I’m saying that I wouldn’t try to save those five people by having one other person die simply because I didn’t want to be considered morally responsible for killing someone.  But the idea is that, given that everyone involved is innocent, if I act then I definitely am morally responsible for killing someone, and it’s only consequentialist intuitions that make refusing look harsh, based entirely on the idea that it’s better to kill one person than to allow five other people to die.  When that one person would be the direct cause of the five deaths, the reasoning works, but it doesn’t work as obviously when that person is innocent.  Although, of course, there’s a lot of philosophical ink that can be spilt, and has been spilt, arguing over these points.

For most people, though, in the base Trolley Problem the consequentialist intuitions win out.  When this experiment was run as an empirical/psychological experiment, most people chose to activate the switch, which pleased consequentialists because it looked like most people’s moral intuitions were clearly consequentialist.  Unfortunately for them, further experiments were run that cast that conclusion into doubt.  The big one was the “Fat Man” variant (usually set on a footbridge), where instead of activating a switch you have to push someone with more mass than you into the trolley’s path to get it to stop.  When this experiment was run — even if it was run directly after the original Trolley Problem — most people said that it wasn’t morally permissible to do that, a contradiction that ensured that, philosophically speaking, the Trolley Problem was going to be talked about for a long, long time.

Okay, so those are the basics.  How does Carrier characterize it?

Trolley Problems have two particular attributes: one is that they force the experimenter to compare the outcomes of positive action and inaction; the other is that they force the experimenter to face the fact that either choice bears costs. As such, Foot’s Trolley only puts into stark relief a fundamental truth of all moral reasoning: every choice has a cost (There Ain’t No Such Thing as a Free Lunch), and doing nothing is a choice. Both of those principles are so counter-intuitive that quite a lot of people don’t want them to be true, and will twist themselves into all sorts of knots trying to deny them.

The first thing to point out is that Carrier is arguing that the basic principles of the experiment are so counter-intuitive that quite a lot of people don’t want them to be true and spend lots of effort denying them, which is rather puzzling given that, as noted above, most people answer the thought experiment in the way that aligns with Carrier’s reasoning here.  Most people say that they should activate the switch, which means that they would have to accept that doing nothing is a choice and that doing nothing has a cost, if Carrier is right about the experiment.  If anyone is arguing against it, from what I’ve seen it’s generally on the basis that the experiment is too abstract to really get at what our intuitions about morality would say.  It’s too artificial to prove anything, which might be why we get those contradictory results.  But again, Carrier’s claim that these principles are so counter-intuitive hardly makes sense when, relying on our intuitions as opposed to our intellectual moral reasoning, most people would agree with what his analysis says we should do.  The idea that doing nothing is a choice is probably what makes my response seem harsh, and it’s also a pretty common idea in media and elsewhere.  So that claim doesn’t seem to work at all.

Second, it doesn’t seem like that was Foot’s purpose for the experiment either:

In “The Problem of Abortion and the Doctrine of Double Effect”, (1967) Foot raises a related case that has been the subject of much subsequent discussion: a runaway trolley is headed toward five people who will be killed by the collision, but it could be steered onto a track on which there is only one person (1967 [VV 23]). Intuitively, it seems permissible to turn the trolley to hit and kill one person, but the problem is that it does not seem permissible to kill one to save five in cases like Rescue II. Why, Foot asks, can we not argue for the permissibility of killing one to save five in those cases by appealing to the Trolley case? As we have seen, Foot argues that negative rights are generally stronger than positive rights. In Rescue II, we must violate someone’s negative rights to meet the positive rights of others, and this is impermissible because the negative rights have priority over the positive rights that is not outweighed by five people’s need for assistance. In Trolley, by contrast, we are not violating negative rights to meet positive rights; the situation pits the negative rights of the five against the negative rights of one, and both choices involve violating someone’s negative rights. In such a case, it seems clearly preferable to minimize the violation of negative rights by turning the trolley (1967 [VV 27]).

On Foot’s view, we are generally not permitted to do something to someone that would interfere with someone’s negative rights, for example, we may not steal someone’s property; yet we may not be required to actively secure their possession of it, that is, we may allow them to lose their property. Foot thereby defends a principle that draws a moral distinction between doing and allowing; she also defends a version of the doctrine of double effect, which states that it is sometimes permissible to bring about a result that one foresees as a consequence of one’s action, but does not intend, that it would be impermissible to aim at either as a means or an end (1985 [MD 91]).

This analysis does not reference not choosing being really a choice, nor does it claim that every choice has a cost.  Surprisingly, it doesn’t even reference consequentialism, which is how philosophy commonly uses the experiment.  It focuses instead on whether we can violate negative rights to satisfy positive rights, and notes that the Trolley Problem’s answer doesn’t actually generalize to all cases (her “Rescue II” is a case where you run someone over to rescue five people).  So it doesn’t seem like either philosophers in general or the originator herself saw its purpose as being what Carrier thinks it is, which obviously makes his interpretation suspect.

So let’s return to why people might argue against the experiment.  As I noted, it’s usually because they see it as overly artificial … a criticism that can also be made of Carrier’s comparison to Game Theory, which likewise doesn’t work because it’s too artificial:

 I once showed a class of Christian high school students the scene in Beautiful Mind where John Nash explains his revelation of (what would become) Game Theory to his bar buddies, using the “dude” example of how to score with women at the bar: if they all go for the most attractive one, they all block each other and lose, but if they all cooperate to divvy up approaching her friends, they all get dates (yes, not that enlightened an example, but neither is a trolley rolling over people). The students couldn’t get off the notion that the scene meant Game Theory was about getting laid. But getting a date was entirely incidental, just a silly (and intentionally comic) barroom example; they missed the point.

So, let’s look at this Game Theory thought experiment:

You and three male friends are at a bar trying to pick up women. Suddenly one blonde and four brunettes enter in a group. What’s the individual strategy?

Here are the rules. Each of you wants to talk to the blonde. If more than one of you tries to talk to her, however, she will be put off and talk to no one. At that point it will also be too late to talk to a brunette, as no one likes being second choice. Assume anyone who starts out talking to a brunette will succeed.

The Movie

Nash suggests the group should cooperate. If everyone goes for the blonde, they block each other and no one wins. The brunettes will feel hurt as a second choice and categorically reject advances. Everyone loses.

But what if everyone goes for a brunette? Then each person will succeed, and everyone ends up with a good option.

It’s a good thought, except for one question: what about the blonde?

As an illustration of what I believe is called a “Nash Equilibrium”, this isn’t a bad example.  However, if it’s used as an example of how people should act in the real world, I think it would be fair to argue that it’s too artificial to work.  For one thing, it ignores that if the blonde really is so desirable that everyone wants her above all of the others, then she can afford to be selective, and so some of the men wouldn’t be able to succeed even if they approached her on their own.  So in terms of reality the men who are pretty certain that they won’t get selected should bow out on their own, which might solve the entire problem.  Also, as noted at the end, the strategy means that the blonde — the most desirable by definition — doesn’t get anyone.  If we can ignore what this means for her — and in a Game Theoretical context everyone is trying to make the best choice for themselves, so we can — we still run into the problem that someone in the group could have succeeded with her, and by definition, for all of them, that was a better outcome than what they had.  Now, Game Theoretical analyses are all about the argument that everyone pursuing their highest goal means that they end up worse off than if they sacrificed that goal for a lesser one, but if we are going to think about co-operation then we definitely might want to think about ways where at least someone gets the blonde.  Which opens up new options, such as waiting for signals to see who might be attracted to whom.  Even if I thought I might be able to get the blonde — which is highly unlikely at the best of times, but let’s go with that — if a brunette was expressing subtle interest in me and the blonde was apathetic, I might indeed be more likely to go for the “sure thing” than try for only a chance at something arguably better.  And if she was showing signs of interest in someone else in my group instead of me, I’d definitely let him go for it and take a run at a brunette.
And once the group gets down to figuring out who has the best chance at what, then we run into the real-life fact that some members of the group might actually prefer brunettes to blondes anyway, making the division easier to manage and leaving the group more happy overall than they would be with this supposed recommendation.  Once we eliminate the men who don’t have a chance with the blonde, the men who have a brunette interested in them, and the men who are more interested in brunettes anyway, there might well be only one person left who’s in the running for the blonde, meaning that he would succeed and the others would succeed with the brunettes.  And if there’s more than one?  They can flip for it or something, because the overall contentment is still better than it would be taking the advice:  one man gets the blonde that everyone wants and the others get brunettes who had to be at least acceptable to them for the original recommendation to work.  This works out at least as well, and usually better, for everyone except the leftover brunette, who is balanced by the left-out blonde in the original recommendation.
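Incidentally, the equilibrium point above can be checked by brute force.  Here’s a minimal sketch, with hypothetical payoffs of my own choosing (the blonde worth 2, a brunette worth 1, striking out worth 0), that finds the Nash equilibria of the four-man bar game under the quoted rules:

```python
from itertools import product

# Hypothetical payoffs (my assumption, not from the film or Nash):
# winning the blonde is worth 2, any brunette 1, going home alone 0.
N = 4
BLONDE, BRUNETTE = "blonde", "brunette"

def payoffs(profile):
    """Payoff to each player under the rules quoted above."""
    suitors = profile.count(BLONDE)
    result = []
    for choice in profile:
        if choice == BRUNETTE:
            result.append(1)   # anyone who starts with a brunette succeeds
        elif suitors == 1:
            result.append(2)   # a lone suitor wins the blonde
        else:
            result.append(0)   # rivals block each other; too late for a brunette
    return result

def is_nash(profile):
    """Nash equilibrium: no player gains by unilaterally switching."""
    base = payoffs(profile)
    for i in range(N):
        deviation = list(profile)
        deviation[i] = BRUNETTE if profile[i] == BLONDE else BLONDE
        if payoffs(tuple(deviation))[i] > base[i]:
            return False
    return True

equilibria = [p for p in product([BLONDE, BRUNETTE], repeat=N) if is_nash(p)]
for eq in equilibria:
    print(eq)
```

Under these toy payoffs, every equilibrium has exactly one man approaching the blonde; the movie’s “everyone takes a brunette” plan isn’t an equilibrium at all, since any one man could profitably defect to the blonde, which is roughly the objection raised above.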

This, then, is one of the main criticisms I have of applying Game Theory to real life situations:  there are always far more factors involved that tend to make their recommendations not really apply.  We would almost always have more information than they provide and more wishes and desires to appeal to.  Even in the Prisoner’s Dilemma we likely would have reasons to trust or distrust our partner that would guide us more than that simple analysis.  The video game “Knights of the Old Republic” highlights this, with an AI questioning the main character to assess that character’s personality, and using an example of this where the MC is “playing” this against Zaalbar, who has sworn a life debt to them.  The AI doesn’t accept the argument that Zaalbar can be trusted, but anyone who knows what a life debt is and knows how honourable Zaalbar is — and understands what his “treachery” was actually about — knows that Zaalbar won’t take the deal and so can act accordingly.  Game Theory is often far too abstract to be used, at least, in everyday decisions.
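To make the Zaalbar point concrete, here’s a toy sketch (the payoff matrix is the standard textbook Prisoner’s Dilemma; the “betrayal cost” term is my own hypothetical addition) showing how the bare analysis leaves out exactly the information we’d actually have:

```python
# Standard one-shot Prisoner's Dilemma payoffs (higher is better for "me");
# betrayal_cost is my own hypothetical addition, standing in for the things
# the abstraction ignores: a life debt, honour, a future together.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def expected(my_move, p_partner_cooperates, betrayal_cost=0.0):
    """Expected payoff of my_move, given how much I trust my partner."""
    ev = (p_partner_cooperates * PAYOFF[(my_move, "cooperate")]
          + (1 - p_partner_cooperates) * PAYOFF[(my_move, "defect")])
    if my_move == "defect":
        ev -= betrayal_cost  # the factor the bare matrix leaves out
    return ev

# With the bare matrix, defecting dominates even with total trust:
print(expected("defect", 1.0), expected("cooperate", 1.0))  # 5.0 3.0
# Add what the abstraction ignores, and trusting a trustworthy partner
# makes cooperating the better bet:
print(expected("defect", 1.0, betrayal_cost=4),
      expected("cooperate", 1.0))  # 1.0 3.0
```

Note that in the bare matrix, defecting is better no matter what the partner does; it’s only once we add the real-world factors the simple analysis abstracts away that knowing Zaalbar won’t take the deal changes what we should do.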

The same is true of the common objection against the Trolley Problem, although Carrier seems to miss that:

So, too, are people missing the point who act like Foot’s Trolley is a philosophical question about killing people; or who think that even when it is about killing people, that it’s about how to find a way in which all the deaths could be avoided somehow, and so people “respond” to Trolley Problems by inventing a bunch of “what ifs” that allow them to “win the game” as it were, which is again missing the point—because the Trolley Problem is designed to model specifically those scenarios where they can’t all be saved.

Carrier’s right that the Trolley Problem is designed to make it so that not everyone can be saved, to test our intuitions in such cases.  What I think most people get wrong about it is thinking of it as a judgement of one’s moral character or as a recommendation for what people should do in such real-life situations.  And so they note that in such circumstances there’d always be other options that the experiment isn’t accounting for, and so it isn’t “realistic”.  But as an empirical thought experiment — and it’s far more empirical than philosophical, since it doesn’t really highlight any underlying philosophical principle that we didn’t already know about — it’s meant to be unrealistic and artificial and abstract, because what it’s trying to do is test our intuitions without relying on ingrained or societal responses.  Given that, that it’s not realistic is a benefit, because people have to engage their intuitions to figure out a response rather than just giving what they’ve been taught is the right response.  So criticisms that it isn’t realistic and can’t be applied to real life don’t work, because it isn’t supposed to be.

Except for Carrier, seemingly, as he thinks that all of our moral questions are really Trolley Problems, which is … dubious, to say the least.  Especially when he gets to these examples:

Consider three examples of failed Trolley Problems:

  • “Doing nothing” to fix the levies whose failure devastated Louisiana in the face of Hurricane Katrina ended up costing Louisiana and the Federal government (and thus every taxpayer in the nation) vastly more than fixing the levies in the first place would have. Hence doing “something” instead would have been far cheaper. Inaction ended up outrageously more expensive—and outrageously deadlier, for those who want a lot of “killing” in their thought experiments. This was a Trolley Problem. In money or bodies. “Flipping the switch” would have killed fewer people—and cost us vastly less in resources. We chose to stand there and do nothing, and then claim it wasn’t our fault.
  • “Doing nothing” to fund the cold-weathering of equipment caused the 2021 Texas Powergrid Disaster, which killed hundreds of people and cost tens of billions of dollars, and immeasurable headache and ruination. While Republicans disingenuously complained about “wind power” not being up to snuff, to push their gas lobby, such that the story soon became how in fact most of Texas’s failed power came from natural gas plants not having been adequately fitted for cold weather, the same truth actually still underlies both: New England and Canada and Alaska and Colorado, for example, have tons of wind and gas plants that don’t get knocked out by cold snaps—because they kitted them out to handle it. Texas was warned repeatedly that a Trolley was coming to kill “twenty billion dollars”; they chose to do nothing and let it. They could instead have done something—in fact, what nearly every other state’s energy sector did—and saved billions and billions of dollars. There would still be a cost. Like, say, the few billion cost to weather-prep gas plants and wind farms; but it would amount to maybe ten times less what doing nothing ended up costing them. Likewise, far fewer deaths. While hundreds died from the disaster they did nothing to avert, we can expect one or two would have died in, for example, workplace accidents in kitting out the equipment (windfarms in particular have a steady death rate associated with their maintenance; but so does the fossil fuel industry, or in fact any relevant industry). So even counting deaths and not money, this was a straightforward Trolley Problem. That Texas lost.
  • “Doing nothing” in the face of a global coronavirus pandemic similarly led to many more hospitalizations and deaths, and far more harm to the economy and national security, than the “mask mandates” and “vaccinations” that millions of lunatics ran about like crazed zombies denouncing and avoiding. Even counting the minuscule threats created by those mitigations (the odd person who might have died from a vaccine reaction or breathing problem), the differential in deaths was vast (hundreds, even thousands to one). Anti-vaxxers suck at Trolley Problems. Even by their own internal logic — never mind in factual reality.

Carrier, however, contradicts his own interpretation by using these examples, because these are not cases where you’re forced to choose between two outcomes that you can’t avoid, nor are they cases where the key factor is inaction.  These are all budgeting cases, and budgeting is not a matter of inaction vs action.  Budgets get spent in their entirety, so the decision-makers judged that the money that could have been used for those measures was better spent elsewhere; and unlike the Trolley Problem, these decisions weigh immediate considerations against future ones, which is commonly how things get decided when it comes to budgets.  This leads into another difference:  these cases are about risks rather than certainties.  In all of them, the future problems might never have happened, at least not for those individuals, so the question is what risks someone is willing to take.  In the Trolley Problem, as Carrier himself notes, you choose between two options whose outcomes you already know, and have to decide which one you’d rather take and which one is better.  And finally, these aren’t actually moral questions at all, but practical ones, as we can see when we look at some more of his examples:
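The distinction can be put in simple expected-value terms.  In this sketch (all numbers are illustrative, not the actual Katrina or Texas figures), the trolley choice compares two certain outcomes, while the budgeting choice weighs a certain cost now against a merely probable loss later:

```python
# All numbers here are illustrative, not the actual Katrina or Texas figures.

def trolley(pull_switch):
    """In the Trolley Problem both outcomes are certain and known in advance."""
    return 1 if pull_switch else 5  # deaths

def budget_choice(fix_cost, disaster_loss, p_disaster):
    """Budgeting weighs a certain cost now against a probable loss later."""
    options = {"fix now": fix_cost, "wait": p_disaster * disaster_loss}
    return min(options, key=options.get)  # pick the lower expected cost

print(trolley(True), trolley(False))    # 1 5: known in advance either way
print(budget_choice(2.0, 100.0, 0.10))  # fix now  (expected loss 10 > cost 2)
print(budget_choice(2.0, 100.0, 0.01))  # wait     (expected loss 1 < cost 2)
```

Under these toy numbers a budgeter who judges the risk low can “rationally” wait and may never pay any cost at all, which is why these cases are bets rather than Trolley Problems.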

Even in your own personal life, who to date, what job to take, what school to go to, what hobby to allocate time and money to: it’s all Trolley Problems, all the way down. Do nothing, and date no one, get no job, go to no school, enjoy no hobby. “Nothing” has costs. Nothing is a decision. Often, again, the worst one.

These are all practical considerations about what might make you more happy, and of course there are many more options than the two and you have no idea how they will work out.  You can try to date people and fail, or avoid dating and get a relationship anyway.  These decisions have no actual moral content.  They are as pragmatic as deciding what to eat for supper, and involve just as many costs.  And just like deciding what to eat for supper there is no reasonable choice that is merely doing “Nothing”.  If someone decides not to date, they aren’t deciding to just “do nothing” but are deciding to do something else instead of dating.  Maybe they think that they’re going to fail and don’t try.  Maybe they get tired of it.  Maybe they try and succeed.  Maybe they try to date the wrong people.  And so on and so forth.  There is no simple binary choice here between “Do something” and “Do nothing”, and as already noted we don’t know what the outcome will be when we choose it and are risking costs when we don’t know the outcome.  We are not running Trolley Problems here, but are placing bets and gambling.  And the Trolley Problem is deliberately not a gamble.

So while Carrier wants to make the Trolley Problem the foundation of all of our moral decisions, his examples are not examples of moral decisions and, even if they were, they wouldn’t be Trolley Problems anyway.  At the same time, Carrier uses the thought experiment in the precise way that generates opposition to it by insisting that it must apply to real life despite people noting that it’s too artificial to be used that way.  He doesn’t use it the way philosophy does nor does he use it the way Foot herself used it.  Ultimately, it really looks like he’s contorting it to fit his view rather than analyzing his views to see how they fit with it.

2 Responses to “Everything’s A Trolley Problem … Or is It?”

  1. malcolmthecynic Says:

    I’d ask why people take Carrier seriously but I am happy to report almost nobody does.
