So, a while ago someone, somewhere posted a link to a FAQ about consequentialism by Scott Siskind. Siskind is in favour of consequentialism, and a number of the entries in the FAQ are aimed at demonstrating that consequentialism is superior to other moral theories.
One thought experiment that he tosses out is this:
3.3: What do you mean by a desire to avoid guilt?
Suppose an evil king decides to do a twisted moral experiment on you. He tells you to kick a small child really hard, right in the face. If you do, he will end the experiment with no further damage. If you refuse, he will kick the child himself, and then execute that child plus a hundred innocent people.
There are certain moral philosophers who would tell you to refuse. Sure, the child would get hurt and lots of innocent people would die, but it wouldn’t, technically, be your fault. But if you kicked the child, well, that would be your fault, and then you’d have to feel bad about it.
But this excessive concern about whether something is your fault or not is a form of selfishness. If you sided with those philosophers, it wouldn’t be out of a concern for the child’s welfare – the child’s getting kicked anyway, not to mention executed – it would be out of concern with whether you might feel bad about it later. The desire involved is the desire to avoid guilt, not the desire to help others.
The problem with his analysis of this thought experiment is that while there are a number of philosophers who would indeed say not to kick the child, most of them won’t say that it’s because you might feel bad afterwards, or because it might make you feel guilty. No, what they’d say instead is that if you kick the child, you yourself have taken an immoral action, while if you don’t then you have, in fact, refused to take an immoral action. So, if someone feels guilty for choosing to kick the child, those philosophers will argue that they ought to: the guilt in this case reflects their actually having done something immoral, and is not just some misfiring emotional reaction.
Let me highlight this by referring to one of the things I find most compelling about Stoicism: the idea that you are responsible for your own actions and reactions, and not for the actions or reactions of others, or for any consequences of your actions that are not under your direct control. In this case, the evil king is saying that he will commit a horribly evil act unless you commit an evil act. But he is responsible for his own evil acts, and his attempt to make you responsible for them by claiming that he’ll only do those evil things if you don’t do an evil thing yourself is an invalid move. What he is trying to do is make you responsible for his evil and his evil intentions, but you are not responsible for his choice to take an evil action just because you refused to take one.
So, in this case, if you kick the child, then you have taken an immoral action or, to put it better, you will have caused harm to that child. Siskind’s counter is that if you don’t kick the child, the child will be kicked anyway, and will then be killed, and then more people will be killed. Under the consequentialist model, because the consequences of not kicking the child are worse than those that come from your kicking the child, you are morally obligated to kick that child. But these consequences are not natural consequences of your actions, i.e. ones that follow without the intervention of another moral agent; they follow only from the moral choice of the king. Even if you refuse to kick the child, the king is still free to choose not to follow through, and so not to kick the child, or not to kill the child and those people. The consequentialist approach, therefore, makes you responsible for the immoral actions of someone else. Under the Stoic model, that’s simply wrong; you are responsible for your own morality, not the morality of others.
What’s interesting is that this comes in a section dedicated to assigning value to other people, but the thought experiment reveals how consequentialism doesn’t, in fact, do that … at least, not as it is presented here. Let’s compare Siskind’s answer to the thought experiment to that of a hypothetical Kantian who insists on treating others not merely as means, but as ends in themselves. To make this clearer, let’s change the thought experiment slightly, so that either you kick the child or the king will kill 100 innocent people, but the child itself will be neither kicked nor killed. Surely the consequentialist will argue that you should indeed kick the child; it’s hard to see a consequentialism where the case described in the original thought experiment demands that you kick the child but this one doesn’t. So, what you should do is decide to cause the child some small harm to prevent greater harm to more people.
The Kantian, however, will object that what you are doing is using the child as a means to an end: the end of less harm. It is certainly true that less harm is a desirable end, but you are treating the child as a means to that end nonetheless. And how, then, can you claim to value that child as an individual person if you are willing to use them merely as a means to achieving more utility? All consequentialist theories that calculate consequences in terms of overall harm or utility treat individual persons as nothing more than a number, a utility value to be aggregated in pursuit of overall greater utility. Which is why they all allow their adherents to sacrifice one individual for the greater good, and potentially to do so even against the will of that individual if there’s enough utility in it. On the other hand, the moral views that refuse to allow you to harm the child simply to prevent worse consequences actually treat the child as someone with value in their own right, as having value as an individual person, and so are not willing to harm them as an individual just to make the total happiness numbers come out favourable. So, it seems to me, if you want to respect and assign value to other people, you have to consider them as having value not just as one figure in an overall spreadsheet of utility, but as having value in and of themselves. As a trump card, not as a point in the game.
So, then, if you really assign value to people, can you really accept consequentialism, which lets you harm them so long as doing so produces less harm overall? Or should you instead choose a morality that says that you can’t harm them in order to achieve any end, even one as good as less harm overall?