Richard Carrier is launching/repeating a course on the science and philosophy of morality. He’s responding to some comments on that post, and in doing so has linked to a very old post discussing his “Goal Theory” in the context of a debate with someone advocating Desire Utilitarianism. Since he’s posting it as if it’s still relevant, it’s worthwhile going through his clarifications and answers and showing how his moral theory and his answers somewhat break down. But as is typical for Carrier, the post is rather long, so I’m going to break it up into parts to respond to. Today, I’ll respond to some things in his opening statement that he posted on Facebook, as well as the first section, covering seven threads that Carrier thought were not closed during the debate.
First we must define morality: I’ll be using the broadest definition, that morality is that which we ought most to do (i.e. that which we ought to do above all else). I affirm that this is a universally agreed definition, even by people who don’t realize they are using it. It is, in actual practice, what everyone really means by morality.
If you’ve followed my other comments on his morality — including when I directly challenged his stated premises — you’ll know the issue with this: while he’s right that we all think moral values are the values we value more than all other values, we don’t think that that is simply what it means for something to be a moral value. Moral values are not moral because they are whatever we happen to value above all else; rather, by their nature — which, again, we’re having trouble figuring out — they are the things we would value more than anything else if we were properly moral. And note that even in that case we reject the notion that moral values are moral because they work out best for us or give us the best life. Any attempt to phrase it that way ends in Egoism, and pretty much all non-Egoistic moral systems try very hard to avoid that implication. Morality, as best we understand it, seems best described as a system of values that places the needs and desires of others ahead of our own.
This definition also has a more technical problem, as revealed in a comment on the recruitment post:
The only oughts that can supersede all other oughts are oughts that serve a goal that supersedes all other goals. This is a logically necessary truth. Therefore once you locate the goal everyone places above all other goals, all moral oughts follow as a matter of empirical fact.
But the goal that we ought to place above all other goals is not necessarily the goal that we currently do place above all else. This is the heart of the complaint that you cannot get an ought from an is, properly understood: you cannot look at the way things currently are and conclude, from that fact alone, that that is the way things ought to be. You need another premise to do that. By starting from the claim that the moral is what we ought to value more than anything else but then immediately concluding that we can figure that out using nothing more than what we currently value more than anything else, Carrier makes exactly that mistake. And, even worse, Carrier himself needs to insist that much of what we do happen to value more than anything else is wrong and that we ought to value other things more, which directly refutes the idea that we can determine what we ought to value most simply by looking at what we currently do value most. More on this later.
He also tries to define desire. I’ll ignore the scientific definition, since he himself seems to ignore it much of the time, and go to the analytic one, which is more relevant for the philosophical arguments anyway:
It can be defined analytically as: To desire a thing, is simply to prefer that thing to something else instead, i.e. to prefer having it to not having it. This can be manifested by any phenomenology, any mechanism. Even desktop computers have desires in this sense, in the same way ants do or lizards or mice. They just aren’t conscious of it. And their computational abilities are vastly less than ours. But the basic idea of desiring as preferring, and of preferring as choosing, is the same.
Um, no, desktop computers do not have preferences in that sense. They do not prefer a state and aim to enter it. They get an input and respond to it. That’s it. This is even true of neural networks, as they literally take an input and spit out an output at the end. To have a preference, you have to be able to think about future states and have a mechanism for deciding that a future state is better, by some set of criteria, than the one you currently have. So inference-engine-type systems — like those chess engines that rank possible moves and take the one at the top of the list — can look like they do that, but even then they really don’t: they don’t decide better or worse — or even care about that — but merely take the option with the largest number. You could really screw up such a system simply by altering it so that the highest scores marked the worst moves, and the computer would still happily choose those and never re-examine its choices, even upon losing the game. And the animals Carrier cites may not really have preferences either, not because they don’t prefer certain states to other states, but because their ability to look to the future might be too limited to count as establishing a preference. Otherwise, it’s not an unreasonable definition of desire, but his broad application of it suggests he’s thinking of it in a different way than we do.
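To make the point concrete, here’s a minimal sketch (in Python, with made-up moves and scores rather than any real chess engine) of why this kind of top-of-the-list selection isn’t a preference:

```python
# A toy "inference engine": it doesn't prefer anything; it just
# returns whichever option carries the largest number.
def pick_move(scored_moves):
    """scored_moves: dict mapping move names to numeric scores."""
    return max(scored_moves, key=scored_moves.get)

# Hypothetical scores for three candidate moves (for illustration only).
scores = {"Qh5": 0.91, "Nf3": 0.55, "a3": 0.02}
print(pick_move(scores))  # Qh5 -- the highest-scored move

# Sabotage the scoring so that high numbers now mark the WORST moves.
# The system "happily" keeps choosing the top number; nothing in it
# can notice that its outcomes have gotten worse, let alone care.
sabotaged = {move: 1.0 - s for move, s in scores.items()}
print(pick_move(sabotaged))  # a3 -- now the worst move, chosen anyway
```

Nothing in pick_move models a future state or notices that its outcomes have gotten worse; it just returns whichever label carries the biggest number, whatever that number now means.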
Carrier then tries to talk about motivation, arguing that we always do that which we most want to do:
Even when you say “I don’t want to do this, but I have to, so I’m doing it anyway,” that’s not exactly what’s really going on. Because if you really didn’t want to do it, you wouldn’t do it at all. The only reason you are doing it is that your desire to do it was greater than your desire not to. In the sense of the term “desire” I am using, it would be logically impossible for any other result to occur (other than a mad scientist moving your body like a puppet, but we’re talking about your own choices here, so that scenario isn’t relevant). So when you say “I don’t want to do this” you really just mean that you have a desire not to do it, not that that desire is your strongest desire. Your strongest desire is obviously to nevertheless do it anyway.
For example, exercising to stay fit. You might not feel like exercising, but you desire to get fit, and you know you can’t get fit unless you stick to your exercise schedule, and when you make this desire present in your mind, it becomes the stronger desire and thus overrides the other.
So, an issue that will arise here is that this makes sense when we talk about a case where we “don’t want” to do something but decide to do it anyway, but it has a difficult time explaining the cases where we have a failure of will and then don’t do it. Carrier can argue that even then we still do what we most want to do, and that in those cases the contrary desire simply wins out, but then what we ultimately decide to do isn’t going to reflect what we most ought to do, making this sort of mechanism mostly irrelevant to the debate here. Or he can argue that it’s just an error in our biological system — as he does at times — but then he needs a separate criterion for identifying these errors. He tends to argue that it is what a rational and informed person would do, but then at a minimum we’ll argue that that is the real criterion and that what we do want is, again, pretty much irrelevant to what morality is, costing him his empirical data and his “is”. And he’d still need to justify that criterion without appealing to the psychological facts of what we happen to prefer, as they can clearly be wrong for some people and, therefore, for everyone.
Carrier seems to end up conflating morality with pragmatics:
For example, when you are visiting a naval ship, you will want to follow all the safety directions posted everywhere, even though you won’t necessarily know why. But you do know the people who developed and posted those instructions know why, and almost certainly have very good reasons, and so you trust them. And even when you don’t trust them, you can inquire, and find out the reasons, and confirm their validity yourself.
Morality is like that: a system of instructions for how to live, as determined by people in the know whom you trust, because you have analyzed their methods and know them to be rational and sound …
The problem here is that, in general, following safety instructions simply so that you don’t get yourself hurt doesn’t seem like any kind of moral consideration at all. It seems purely pragmatic. It seems like even people who are clearly incapable of understanding morality, or who are even actively evil in the moral sense, could, in fact, decide to follow that. How can Carrier justify that decision as being a moral one, in the sense that it’s related to morality at all? If the answer is nothing more than that it follows from his definition, then turning decisions that we don’t think have anything to do with morality into moral decisions is a huge strike against his view.
I believe that conclusion was established already by Aristotle. Aristotle defined happiness in a particular way, he called it eudaimonia, a feeling that all is right with yourself and the world, a state of contentment or higher bliss, which was more desirable than mere pleasure or joy or anything else you might define happiness as. But he was hung up on that peculiar phenomenology and mechanics of the mammalian brain. I’m going beyond that to the more fundamental, and more universal truth of the matter, which is that the happiness-state Aristotle was trying to describe is what I shall more generically call a state of satisfaction.
But, like pretty much all of those who argue for objective morality, Aristotle did not say that we should be moral because it provides eudaimonia, but that we should be moral, and that one of the main things being moral does is result in eudaimonia. The moral does satisfy us more than the immoral or non-moral, but that’s not what makes the moral the moral, and Aristotle allows that some things he doesn’t consider virtues are nevertheless required — non-virtues, not vices — because they are needed for us to be properly moral and so achieve proper eudaimonia. They’d have to be virtues if the moral were simply defined as that which produces, or is required to produce, eudaimonia. So, again, Carrier elevates the effect to the status of definition or end.
That ends the Facebook post. So let’s look at the seven threads:
(1a) The Problem of Infinite Regress. The competing views were GT (Goal Theory) and DU (Desire Utilitarianism), and I argued GT is a subset of DU, and thus they are not at odds, but rather GT is what you end up with when you perfect DU, whereas DU is in effect “unfinished.” For example, one key reason to prefer GT over DU is that our desire to be satisfied is the end of infinite regress. Without GT, DU is stuck in infinite regress, incapable of deciding which values should be held supreme, i.e. which should override others. The question we must ask is what criterion identifies one desire as “more important / more desirable” than another, in any given situation? And what is it that all such overriding desires have in common? The desire for the most satisfactory outcome. Hence GT.
Carrier’s view itself has that sort of regress, though, as it leads to this circularity: What am I most satisfied by? That which is moral. How do I determine what is moral? By what I am most satisfied by. If Carrier would accept that what we happen to be most satisfied by just is the moral, we’d be able to break the circle, but since he wants to dictate to a large degree what the things are that really satisfy us, he can’t do that. So then he needs to justify the claim that those things really will satisfy us even if we don’t think so, which requires a criterion beyond actual satisfaction.
(2a) There Is Always a Reason to Prefer One Desire to Another. Thus, for example, when McKay says near the end of our debate that he wants his family to be well for no reason, I believe that’s untrue. He has a reason: it makes him happy, contented, satisfied to know they are or will be happy, and it satisfies him to want those things for his family; if it didn’t, he wouldn’t want it. In other words, having this desire satisfies him more than not having that desire, or even having the opposite desire. There can be no other explanation for why he would prefer that desire to another.
But the answer to that is that he feels that way only because he thinks he ought to feel that way, based on, at a minimum, social conditioning. And since what does satisfy him and what he does desire could be wrong even by Carrier’s own arguments, we can still ask whether McKay actually ought to value that.
To be fair, Carrier does indeed try to justify these sorts of things. But that runs up against the issues that any system has in justifying them, and Carrier’s attempts to mix our actual desires and goals with the goals he thinks we should have often clash.
(3a) Satisfaction Maximization Is the Only Thing We Desire for Itself Alone. So when McKay says my criterion of satisfaction is just synonymous with “the sum total value,” he is making my point for me. Satisfaction maximization is the one thing that all supreme values share. It’s how we decide what values to keep, and what desires to let override other desires; what desires, in other words, to put in that set of supreme desires. “It satisfies us more to put that desire in that box of supreme values than not to.” Or more exactly, “it would satisfy us more, were we drawing logically valid conclusions from what we know, and if we knew all we’d need to know to align our conclusions with the true facts of the world.”
But the sum of all of the things that would satisfy us is not itself a thing that we desire, and so is not something with intrinsic value. The state follows from the things we value and can’t be valued in and of itself. If we achieve what we value — or as much of it as we can — we will maximize our satisfaction, but again, that’s an effect of what we value and not something that we can value for itself at all.
(4a) Deception Doesn’t Work. As to the idea floated near the end of the debate, of deception-as-means: pretending to be good is of no value to ultimate satisfaction because (a) you can’t “pretend” to enjoy the satisfactions that come from experiencing and feeling compassion, for example (only by actually being compassionate can one achieve that state), likewise other virtues (see Sense and Goodness without God, pp. 300-02, 316-23); and (b) the statistical reliability of the behavior (and thus all its benefit) is reduced when that behavior is a constant labor and a fragile commitment, rather than habitual and automatic (i.e. pretenders have a lower probability of maintaining the behavior required to enjoy sustained benefits than genuine good folk have, hence even in respect to external returns on investment it’s safer and less of a risk to actually be good than to pretend to be).
If someone didn’t get satisfaction from being compassionate, or was incapable of being compassionate, then they would be incapable of being moral by Carrier’s reasoning here. Carrier would likely argue that such a person would be “insane” and so not relevant to morality, but it does seem like someone could still act compassionate and so achieve many of the purported advantages of being compassionate. It certainly seems like they could value their own maximal satisfaction and work to achieve it. So they could indeed act “good” without being automatically attached to it. In fact, it seems obvious that the best approach is to carefully consider every action to determine whether acting on it really will advance your satisfaction, acting “good” only when it does and acting badly when it doesn’t. One can always condition in “compassionate” actions if they reliably produce better outcomes, while leaving open the ones that aren’t so obvious. Restricting yourself to always being good seems to be a good way to fail to maximize your own satisfaction.
(5a) Why GT Perfects DU. So what McKay never answered in the debate is this: How do we decide what values go into the box of supreme, overriding desires? There must be some standard S that all those values hold in common. Once you decide on what S is, then you must prove that values conforming to S really are supreme overriding desires, i.e. that we really ought to treat them so; if you cannot connect those two, then S is arbitrary and produces a false value system no rational person will have any actual reason to obey (and thus no “true” moral system can result). But if you connect them up, then you end up appealing to some S that is a supreme desire state, that which we desire above all other things, which one will find is satisfaction. (And I already explained in my opening statement why identifying a supreme desire state entails true moral propositions.)
Because one must ask this: why ought we pursue the values in your box of supreme values? If you cannot give an answer, and one that genuinely compels agreement, then your value system is false, or in no meaningful sense “true” (it is not what we “ought” to value); but if you can give such an answer, it will be my answer. In other words: why do we put S values into that box? Because it satisfies us more to do so than not to do so. There must be some true sense in which we ought to put S values into that supreme values box. And our greater satisfaction in doing so is the only available reason.
Carrier again runs into the problem that we can, to a large extent, choose what satisfies us. So we can decide on a set of supreme values by another criterion and then decide to be satisfied by achieving those. Thus, I don’t have to place values in that box because they, at least at the moment, most satisfy me. This is, in fact, the Stoic approach: we define the virtues according to reason, and then condition ourselves to be satisfied with what we get from the virtues. Moreover, Carrier’s own view doesn’t genuinely compel agreement from anyone who doesn’t share his idealized set of desires, yet that’s what he needs to justify so that we’ll agree with the values in the box. So we need an overarching principle that we can use to decide what to put in that box of supreme values, and it can’t be simple satisfaction, because even Carrier would have to admit that if what we place in his box doesn’t satisfy us, we should change our desires and values so that it does.
(6a) Why There Are Always Reasons for Desires. McKay claimed near the end that “some desires do not have reasons” for them, but that isn’t true. We always have conflicting desires (actual or potential), so how do we decide which desires will and ought to prevail? The answer is a greater state of satisfaction: you will be more satisfied choosing one way rather than the other. Otherwise you are just choosing at random, from which no “true” moral system can derive (because then no choice can ever be wrong, even in principle).
Thus, “it will satisfy us most to pursue that desire over any other” is the ultimate reason for every desire we ultimately pursue, and likewise for every desire we ought to ultimately pursue: the former are the desires we happen to have, while the latter are the desires we would have if we knew better; the former desires can be based on errors of reasoning and false beliefs, while the latter are by definition the desires that are not. Thus the ultimate reason for all desires, that “it will satisfy us most to pursue that desire over any other,” can be false: you might believe pursuing that desire will satisfy you most, when in fact it will not, and some other desire will. Thus empirical facts trump private subjective belief. And that’s why we need science and scientific methods involved in ascertaining true moral facts. (See my earlier blog on the ontology of this moral theory.)
I think what McKay is after here is the idea that we have desires that we just happen to have, whether because of a biological imperative, cultural conditioning, or some other reason. But once we have those desires, our life satisfaction will depend on achieving them too, and so we won’t be fully satisfied unless we satisfy them. There may be no contradiction between them and our other desires, and nevertheless we will never be fully satisfied until we satisfy them. So these aren’t supreme values, perhaps, but they are desires that we need to satisfy, yet they neither have intrinsic value in themselves nor are instrumental in the sense of being justified as desires we hold just to give us a satisfying life. What is the status of such desires in Carrier’s view? Should they exist? Should we eliminate them if they don’t follow from the set of supreme values? If we do that, and if the supreme values are the same for everyone, then won’t we all have to have the same values (this will come up again in the next section)?
(7a) We Mustn’t Confuse Criteria of Truth with Criteria of Effective Persuasion. Finally, a confusion I think McKay never escaped from despite my trying to disentangle it in Q&A, is that we must distinguish how we determine which values are true, from how to motivate or persuade people to adopt those values. Those are not the same thing (and I do think I made this point well enough in Q&A). Because we are not perfectly rational, the latter is not the same process as the former.
The problem here is that since Carrier’s view is built entirely around our being motivated by our maximal life satisfaction, with everything else following from that, if he simply can’t convince people that his values are the ones that will most satisfy them, that’s a big strike against his view. He tries to dismiss people who can’t be convinced as being in error or insane, but this is a weak rebuttal. In essence, since the only benefit of his view is that it is what could or should actually motivate people, failing to motivate people removes its only strength.
So, for example, when someone asks the question “But what would you say to a ruthless dictator?” the answer is, if he is irrational or insane, that that question is moot, since there is no statement you can make that will cause him to become a moral person (which is why generally we kill them). Whereas a rational and sane person would not be a “ruthless dictator” in the first place. They would already recognize there are much better lives to be had.
So let me restate my Tarquin example. It’s not obvious that Tarquin’s reasoning is wrong. He seems to get a reasonably satisfying life, and it isn’t clear that, given who he is and his personality, he’d have a better one. His reasoning is sound and his desires are not in-and-of-themselves insane. So the only reason Carrier can give is that Tarquin is flawed because he doesn’t share the same desires as Carrier. And if Carrier makes that move, then he opens himself up to the objection that anyone who does not want the same things as him must be irrational, insane, and immoral. So, again, we need a criterion beyond “satisfies” that settles this, and that criterion cannot itself be settled by an appeal to satisfaction.
It’s the same thing as asking “But what would you say to a paranoid schizophrenic?” in matters of ordinary nonmoral belief (such as whether it was a fact that the CIA was out to get her): the answer is, there simply isn’t anything you can say that will convince them of the truth, because they’re crazy–i.e., not functioning rationally. That you can’t convince a crazy person of the truth doesn’t make that truth all of a sudden not true.
The thing is, we can see a path by which that person could convince us that she is sane and that the CIA really is out to get her. It’s unclear what anyone who disagrees with Carrier’s set of supreme values could do to prove themselves sane.
That’s the first section. Next time, ten other issues raised by someone else.
November 16, 2019 at 6:50 pm
“But, like pretty much all of those who argue for objective morality, Aristotle did not say that we should be moral because it provides eudaimonia, but that we should be moral, and that one of the main things being moral does is result in eudaimonia. The moral does satisfy us more than the immoral or non-moral, but that’s not what makes the moral the moral, and Aristotle allows that some things he doesn’t consider virtues are nevertheless required — non-virtues, not vices — because they are needed for us to be properly moral and so achieve proper eudaimonia. They’d have to be virtues if the moral were simply defined as that which produces, or is required to produce, eudaimonia. So, again, Carrier elevates the effect to the status of definition or end.”
That seems right to me.
Carrier thinks of moral virtues as a set of maxims that lead to his ‘state of satisfaction’.
These virtues are also the virtues that are beneficial to other people, in one way or another. And that strikes me, if true, as a very unlikely coincidence, given that it depends on the kind of creatures we are.
If we are the kind of beings that achieve the highest state of satisfaction by behaving in ways that are conducive to others of their kind finding their own happiness, then that reeks of, dare I say, intelligent design.
For I can easily conceive of creatures that are not like that.
I think your Tarquin example drives the point home quite elegantly.
But even amongst actual people, you can quite easily find examples of immoral people (according to Carrier’s Goal Theory) who are quite content.
I think Carrier might claim that they could have been way happier or more self-satisfied if they were ‘moral’, but that’s an empirical question that Carrier can hardly just answer from the armchair.
November 18, 2019 at 6:12 am
Carrier takes the same line here as Hobbes and Rand do: if we are going to interact with and/or rely on other people, it’s in our best interests to not only not screw them over, but to put a system in place where they are discouraged from screwing us over, even if that also means that we are equally discouraged from screwing them over. So it’s rational and not necessarily natural.
Yeah, Carrier flips quite a bit between the empirical data of what we like and the philosophical position of what we really ought to like. While the obvious cases work out, when he gets into the less common cases he starts to struggle to keep his defenses empirical while still insisting that what someone happens to like isn’t determinative.
November 19, 2019 at 2:50 pm
“Carrier takes the same line here as Hobbes and Rand do: if we are going to interact with and/or rely on other people, it’s in our best interests to not only not screw them over, but to put a system in place where they are discouraged from screwing us over, even if that also means that we are equally discouraged from screwing them over. So it’s rational and not necessarily natural.”
Does this not also collapse into pragmatism?
As you state:
“It seems purely pragmatic. It seems like even people who are clearly incapable of understanding morality, or who are even actively evil in the moral sense, could, in fact, decide to follow that.”
November 19, 2019 at 3:04 pm
Yeah, that’s the risk. Hobbes works around it by not being an Ethical Egoist but instead insisting on a moral contract to limit our inherent pragmatic thinking, but Rand doesn’t really escape that challenge. Pretty much any Ethical Egoist runs into that problem, and it’s really hard for them to escape it.