Adam Lee’s Universal Utilitarianism (Part 5)

So, after getting through the preliminaries, we’re finally going to get to see what Universal Utilitarianism actually is:

I believe I have come up with a new variant of utilitarian moral theory that avoids the flaws of both act and rule utilitarianism, which I call universal utilitarianism. It can be summed up in a single sentence, and without further ado, here it is:

Always minimize both actual and potential suffering; always maximize both actual and potential happiness.

There’s a lot to talk about here, and as with Richard Carrier’s stuff the points are interrelated, so it’s hard to decide how to organize the discussion so that each point builds on the last. So I’m going to start with some small clarifications, then a whole host of objections to its main and most unique principle, and then finally pick up anything left over from Lee’s discussion of it directly.

So, first, what we need to look at is what makes Lee’s view different from other forms of Utilitarianism. Despite his claims to the contrary — I’ll discuss this point in more detail later — the idea that we have to care about both actual and potential suffering isn’t at all new. All Act Utilitarianisms — as they are still Consequentialist theories — insist that you need to consider the reasonable consequences of your actions, and so the harm and benefit that might happen. Generally, you can’t cheat the system by claiming that the bad consequences of your actions might not happen. You could argue that they are sufficiently improbable, but Lee’s view is going to have to consider that as well or else be unworkable, having to consider all possible harms no matter how improbable. So on that point there are only going to be minor differences between Universal Utilitarianism and other Utilitarianisms.

No, the meat is the idea that we first minimize actual and potential suffering, and then maximize actual and potential happiness. Which is a bit of a misstatement, since as we’ve seen Lee is defining everything on the basis of happiness. So we should probably translate this as meaning that we first minimize harm to ourselves and others, and then look to maximize the benefits to ourselves and others. The combination of this, in line with Utilitarianism as a whole, will lead to the action that maximizes happiness.

Lee makes this move to avoid the common issue in Utilitarianism that if we get enough benefit from an action, we can then justify harming others by taking that action. Lee is quite explicit on this:

One final point of importance is that the two halves of universal utilitarianism, as given above, should be considered to be in logical order. That is, actual and potential suffering should be minimized before maximizing actual and potential happiness.

Another value of phrasing the imperatives in this order is that it eliminates the problem, endemic to other forms of utilitarianism, of whether it is morally good to obtain pleasure through acts that cause suffering to others. Universal utilitarianism answers that question without hesitation: such behavior is categorically wrong. It should also be noted, as a consequence, that the ways of obtaining personal happiness most valued by universal utilitarianism are the ways that also increase the happiness of others.

So this order has to be very strong, and is very important to Lee’s view. We can assume, again by appealing to the pragmatic principle that Lee stands on, that this principle cannot mean that we must only select options that do no harm to people. Most of the choices we have to make in a day involve someone being harmed in some way, so the principle really must be to minimize harm as opposed to avoiding it altogether. So, again, the summary of this principle is to minimize harm, and then after that is done to maximize benefit. This is indeed different from most other forms of Utilitarianism and, if it works, could solve an issue that they tend to have.

The first problem with this, though, is that the strong insistence on minimizing harm before looking to maximize benefit seems to make the latter pointless. How we generally use morality in our daily lives — and how Utilitarianism usually presents it — is that we are considering which of a number of actions to take in the world, and use the determination of utility to decide which of those actions to actually take. So we would look at each option and eliminate the ones that would cause the most harm. Once we’ve done that, we’re probably going to be left with only one option: the one that would cause the least harm. It is going to be rare that we end up with multiple options close enough in the harm that they would cause that we have to break the tie by appealing to the benefits of one or the other. If our moral lives were generally built around building policies, then one could argue that this is what we would do: remove the harmful elements and then add back in beneficial elements as long as we don’t add in any harm while doing so. But this is not how we function in the world. We are generally deciding from a small and specific set of options. In order to make his view immune to causing harm in order to gain benefits, it has to be very strict in preferring the options that cause the least harm. Therefore, we aren’t going to be doing much, if any, maximizing of benefits, making that stipulation mostly pointless. We can therefore summarize Lee’s view as minimizing harms and not lose much in that description.

Focusing on that leaves it vulnerable to an objection that I raised against regular Utilitarianism: that it requires martyrdom, in effect if not in theory. Imagine that you go to your local corner store to get a chocolate bar. You know that it’s a very popular chocolate bar. When you get there, there’s only one left. If you buy it, you’ll enjoy it and so get a benefit that will increase your happiness. But you know that others will come to the store for the chocolate bar, see that there are none left, and be disappointed, which will harm them in a way that decreases their happiness. My argument was that their unhappiness would trump your happiness, and so by Utilitarianism you shouldn’t buy the chocolate bar, but this seems to be asking someone to give up far too much. The professor I mentioned it to ended up retreating to passing the responsibility on to the store owner, but it is easy to find cases where they really couldn’t avoid that situation and so the responsibility reverts right back to you. This is a trivial case, but that’s rather the point: a moral system should be able to easily cover all cases, great and small, and this one seems unacceptably pedantic in this small case.

Lee’s view is actually even more vulnerable to this argument than regular Utilitarianism. Those views, at least, can eliminate some cases where I really want the chocolate bar and the disappointment of others is small. But Lee’s view is, as noted above, all about eliminating harms, and so I cannot trade their disappointment for my satisfaction. You could argue that my own disappointment needs to be considered as well, but my disappointment is probably going to be roughly the same as theirs — we are both being deprived of this great chocolate bar — and they will outnumber me, so when we add them all up my disappointment will lose.

Lee can argue that disappointment of that sort doesn’t really count as a harm, and so doesn’t need to be considered. This is in fact consistent with one of his clarifying points:

At this point, some other clarifications are called for. The term “suffering”, in the context of this system, should be construed to mean actual pain or harm resulting as a direct consequence of an action. Mere distaste or annoyance do not count as suffering for these purposes …

But why not? If someone, say, gets a feeling of distaste from smelling a certain type of food, shouldn’t the person who is going to eat lunch with them consider the distaste it will cause while deciding what to bring to lunch? Surely they shouldn’t be able to argue that their desire to eat that food outweighs the harm that the other person will get from their distaste for its smell. And if you know that something annoys someone and you know that you are going to be in their area, surely your desire to do that thing would not, all things being equal, justify you doing it anyway. Surely someone should avoid doing things that they know will annoy someone, and surely that should be justifiable with Lee’s moral system. It’s only if he wants to consider such things too trivial to bother with that he could escape it … but then he’d be declaring by fiat what matters and what doesn’t, and so would need to justify it.

And this would break the biggest benefit of Utilitarian views: the idea that we don’t have to judge or justify which things count as harm or as benefit. Utilitarians have tried to avoid problematic issues by bringing in ideas of what counts as reasonable suffering or happiness before, most famously Mill’s attempt to avoid the very sort of issue Lee focuses on by introducing a notion of “quality”, so that the problematic pleasures are of lower “quality” and wouldn’t count as much as avoiding them would. But the issue that Mill and all moves of this type have is justifying the decision about what counts as the “better” or “worse” or “unreasonable” harms and benefits. Who decides what goes into which category? What is the underlying justification for that move? Do we decide that based on what most people want?

What is nice about basic Utilitarianism is that it doesn’t try to judge what people find upsetting or harmful and what makes them happy. So we don’t have to worry about harming others or depriving them of what makes them happy because it isn’t what we would find harmful or what would make us happy. We simply add up what actually does make each person under consideration happy and go from there. Once we get into ideas of what is reasonable or unreasonable, then we run the risk of judging these things on the basis of what we like instead of what they like, making them miserable while making ourselves happier. For example, I’m about to go on vacation for two weeks, the first week of which I’m going to stay home and watch curling. Dan Ariely, however, has argued that the best reward/vacation for people is to travel. So using that psychologically scientific proof, someone might instead try to force me to travel somewhere warm because it would be better for me. But I look forward to watching the curling and enjoy it far more than traveling. Ariely advocated for people being forced to do so because otherwise they would irrationally choose money as a reward instead of the trip, which would make them less happy, and so someone might do the same for me. If we consider things from my personal perspective, it would be obvious that this is a bad idea. If we consider it from the “objective” perspective, my unhappiness would be unreasonable and so shouldn’t be considered.

So trying to dismiss this challenge by dismissing the feelings is risky. Lee could try to make a universal principle out of “You can take the last chocolate bar because the converse is ridiculous and so doesn’t increase happiness”, but he wouldn’t be able to do that simply because he considers the example trivial. It will always be easy to find some case where someone would gain from something but can’t take it because of relatively minor harms to everyone else, and attempts to rule that away with universal principles run the risk of overturning the idea of utility calculations in favour of universal rules that we cannot violate … which would then be the worst form of Rule Utilitarianism and, dare I say it, of deontological views.

Which leads into another problem, and a specifically problematic one for Lee: this sort of system is not a natural one for us, and this, as noted last time, would risk violating Lee’s pragmatic principle. We naturally think that we can cause harms in order to achieve greater gains. This sort of calculation is captured neatly by regular Utilitarianisms, but is precisely what causes the issue of harming someone in order to get greater gains for yourself. Lee is attempting to avoid this by strongly prioritizing not causing harm, but then cannot ever have it be the case where an option will cause harm — or, at least, more harm than other options — in the service of greater benefits or gains. Lee can argue that no one should cause harm to someone outside of the group that benefits in order to gain benefits for that group, but we would also argue that inside a group some members might have to sacrifice for the benefit of the group as a whole, for which the only justification would be that they will gain more once it pays off. So not being able to do this doesn’t seem to align with how we think, and that risks violating Lee’s pragmatic principle.

And this only gets more problematic for Lee, because it would seem to eliminate even some of the things that he most agrees with. Take Affirmative Action. Someone not getting a job that they would have gotten without the policy seems to have been harmed. Lee also wants to use his view to justify ideas of fairness and justice, and it would seem that someone who didn’t cause, say, a racist or sexist system would be being treated unfairly by not getting a job that they would have gotten otherwise, in an attempt to reverse the effects of such systems. Lee could appeal to the harm caused by racism and sexism to justify a more minor harm to those other people — although he’d still have an issue with breaking his universal principle — but that only works if AA is in fact the least harmful way to address racism and sexism. And it seems like the least harmful way is to simply ensure that all companies hire completely fairly and don’t discriminate at all. After all, AA requires enforcement as well, and enforcing fair hiring instead leaves us with a system where no one is discriminated against and everyone gets the job they deserve. If there are issues around women and minorities not going into the field, having issues getting an education, or not seeing the benefits of it, then other policies should be built to address that without forcing companies to hire people who are not qualified. And if one wants to argue that diversity is a benefit for companies, an easy counter is that the companies should decide that for themselves and be provided with all of the evidence showing it, so that they can add it to the qualifications. Under Lee’s view, Affirmative Action seems to be an immoral policy. And let’s not get into the issues around “Punch a Nazi” …

Still, Lee may be able to justify these things with his system, but it does seem like he should have made the attempt and examined his principles to see if they worked. At this point, though, let’s move on to another ridiculous consequence of this view. All Utilitarianisms hold that everyone’s concerns are to be considered equally. So a person doesn’t think of their own concerns as being more important than anyone else’s, but by the same token their concerns are no less important than anyone else’s. Lee’s view is no exception:

Another important corollary is that, although some people will undoubtedly suffer more than others at certain times and places, no one person is more or less capable of experiencing suffering than any other. All pleasure and pain are considered to be of equal value in this system, including that of the person making an ethical decision. Universal utilitarianism does not command that the decision-maker must value his own happiness less than that of other people; rather, everyone should consider all people’s happiness equal.

What this means, then, is that Universal Utilitarianism has to work when the only person impacted is the decision-maker themselves, or at least when they’re the only person significantly impacted. This, then, would suggest that the decision-maker cannot choose an option that harms them in order to achieve a greater benefit later. This … seems nonsensical. We do this all the time. Going to university causes short-term harms in terms of time and money spent, and yet the only justification for it is the greater benefit someone will get after completing it. In my case, taking my Philosophy degree, again, cost time and money, and yet there wasn’t even really a greater-benefit argument to justify it. While I enjoyed it, even enjoyment doesn’t seem to be able to outweigh harms under Lee’s strict Universal Utilitarianism. So applying the strong principle to my own life and to actions that impact only me leads to some rather ridiculous results.

Lee is probably after the idea that we shouldn’t sacrifice others for our own benefit, and so shouldn’t cause harm to others in order to get gains for ourselves. He could probably try to universalize that rule, leading to us not being able to cause losses to others for our own gain, but being able to sacrifice our own interests for later benefit, or even sacrifice our own interests to benefit others. However, one of the main benefits of Utilitarianism is indeed that the decision-maker is treated exactly the same under it as everyone else, and there’s no differentiation between the interests of the decision-maker and everyone else. If Lee wants to say that the decision-maker can think of themselves differently from others when making moral determinations, then it opens the door for them to do that in more cases, such as cases where they want to preserve their own family members over other people even if there is more harm done in the process. While not having such a mechanism is indeed an objection to Utilitarianism, Lee strongly rejects the idea of treating the decision-maker differently. And while all Utilitarian views would have trouble using that as an out for this specific problem, pretty much all other Utilitarianisms don’t have this specific problem in the first place, because they allow for greater benefit to outweigh small harms and sacrifices. Lee doesn’t, and his only way out is to give up something fundamental to Utilitarianism, and something that he himself wants to rely on.

And after all of these problems that come from Lee’s attempt to avoid utility causing us to cause great harm to others in the name of utility … it doesn’t work. The traditional argument here is a thought experiment that asks if you could torture a small child for eternity if it would bring eternal happiness to everyone else. Lee’s view does block this, but all that means is that we need to recast the example in terms of comparing harms. So, could you torture a small child for eternity if it would avoid eternal torture for everyone else? The harms to the child are clearly outweighed by the harms to everyone else, but we still think that there’s something off about making such a choice. We can also ask how Lee feels about Scott Siskind’s consequentialist thought experiment, if we’re looking for something more realistic:

Suppose an evil king decides to do a twisted moral experiment on you. He tells you to kick a small child really hard, right in the face. If you do, he will end the experiment with no further damage. If you refuse, he will kick the child himself, and then execute that child plus a hundred innocent people.

Lee might have more success avoiding the idea of sacrificing someone else’s interests for the sake of a benefit to oneself, but we still find something off about such decisions, and Lee’s attempt to minimize harm is clearly aimed at avoiding cases like this. So he still has to deal with the fact that his view will allow us to do horrible things to people as long as the overall harm to everyone else works out to be more than that. Sure, he can argue that these would be sadistic choices with no good answer, but he still doesn’t escape from the trap, as we can add up more minor harms to more people to outweigh strong harms to one person, leading to the exact problem that he wants to avoid.

As promised, let me talk a bit about how actual and potential suffering isn’t new. Lee says this about it:

The “potential” part of the formulation is one of the most important parts of universal utilitarianism, and so I believe it bears further explanation. First, it asks us to consider the moral value of our actions as if all relevant parties were fully aware of them. Second, it asks us to gauge the morality of our actions based on the broad and immediate implications if the principle guiding that action, in its broadest possible form, were consistently and universally followed. What would be the effect on human happiness if everyone in this situation did what I am about to do? That is the question universal utilitarianism asks us to consider. We need only consider consequences that would be a likely and direct result of universalizing an action, not consequences that might hypothetically result from a long chain of intermediate contingent causes which no human being can reliably predict.

The problem with both parts here is that they always allow for exceptions (and remember that Lee has objected to other views on the grounds that they don’t allow for exceptions). In the first case, if it worked out to cause less harm if some parties were kept in the dark about our actions, how could Lee argue that we nevertheless should tell them? Think of the case where a spouse is keeping financial problems from their partner and children because it will only worry them and they think they have a way out that won’t impact them. Should they tell them anyway? So the first principle doesn’t actually follow from the potential part of the formulation; it only follows when there is less harm in them knowing than in them not knowing. Lee can argue that that sort of consideration is exactly what he means, but then his view is no different from any other form of Utilitarianism — which would also have to consider whether or not to tell people and what would happen if people found out — and it leads to the issue shared by Utilitarian views that it becomes acceptable to lie to people if that will increase utility. In Lee’s case, if the only harm to them is in them finding out and they are unlikely to find out, then that would be sufficient to justify not telling them about the action and doing it anyway, even if they wouldn’t approve.

For the second part, again not everyone will be equally situated, and so it’s quite possible that someone is in a position where they are the only person who can take a certain action. Thus, they could justify the action on the basis that since everyone can’t do it, they don’t need to consider the impact of a universalization that can’t happen. This would include cases where someone can get away with doing things without impacting society, such as Russell from Angel, who can use his wealth, influence and powers to do what he wants because it won’t impact society. While Lee can escape that trap, it’s not by appealing to potential harms, but instead just by appealing to known harms — the fact that Russell is, well, harming people. So the second principle again doesn’t follow from that view, and thus this isn’t really anything new either.

Finally, Lee tries to justify rights:

Finally, this section will consider the question of the origin of human rights. The idea that human beings have certain inalienable rights has become the keystone of many moral systems around the world, and while this alone does not establish that human rights are a good idea, universal utilitarianism can indeed confirm this intuition. Under this system, human rights emerge as those universally applicable principles which most reduce actual and potential suffering and most increase actual and potential happiness. In short, universal utilitarianism derives human rights by taking into account the concept of precedent. While these principles may seem restrictive at times, a society in which they are strictly adhered to embodies far less potential for unnecessary suffering than one in which they are violated regularly, and thus is better overall.

A main problem for Rule Utilitarianism is that it can end up like a deontological system: set rules that we can’t violate even when following them makes us miserable. Lee’s view doesn’t provide any way to escape that. Rights themselves allow for exceptions when things don’t work out well without them, but Lee’s view here requires them to be more absolute and to not allow for exceptions. Or, at least, he hasn’t justified that they can allow for them. But as noted earlier he has a rather shallow view of these things anyway:

There are many different human rights. The right to just treatment, discussed above, is one. Two other important rights are freedom of conscience and freedom of expression – essentially, the freedom to think and believe as one sees fit, and the freedom to state one’s convictions publicly in peaceful, non-disruptive ways. These rights increase the actual happiness of all those granted them without causing harm to anyone else; as well, they play a major role in reducing potential suffering by discovering and publicizing future wrongs so that they can be corrected. This process would ideally occur through another right that is fundamental to universal utilitarianism – democracy, the right of people to have a voice in selecting the government they are to live under. When a government is not accountable to those it governs, the potential for harm is far too great. While these do not constitute all those that could be derived from universal utilitarianism, they will hopefully at least represent a good start and show the potential this system possesses.

Freedom of conscience can cause harm, as it is always assumed to allow people to act on their convictions, and so someone who, say, refuses to act violently out of conscience can unintentionally cause harm to others in cases where violence would be the way to stop harm to them. Freedom of expression can indeed cause harm, even when peaceful, by convincing people of bad or harmful ideas. So Lee is too blithe in claiming that those rights increase happiness without harming anyone else. And Lee’s adoption of democracy as fundamental is way too quick, as governments can be held accountable in other ways, and it can be argued that democracies don’t really hold them accountable in any strong way. Thus, Lee looks like he’s rationalizing what he already believes rather than seeing what follows from his moral system, which is not a good approach when developing a moral system.

Next time, I’ll look at why Lee thinks this is universal and some objections that he himself raises against it and tries to address.

