Answering Carrier’s Premises

So, over the past few posts I’ve been using Richard Carrier’s views on objective morality as a framework for discussing objective morality and whether or not it can be linked to science in the way Carrier — and others, like Sam Harris — want to link it. However, I recall a comment on another blog a while ago that called out another philosopher for criticizing the view without dealing with the fleshed-out example of the premises that Carrier had given — and had, in fact, given here, in a defense of Sam Harris. So let me attempt to address that.

Carrier, at the end of the post, lists his premises and then asks us which of them we disagree with. The problem is that Carrier commits one of the two major mistakes that philosophical amateurs tend to make: starting from premises that at least seem reasonable, if not obviously true, and then trying to carry those premises far further than they can reasonably go. Having true premises is a good start, but you can’t stretch them to conclusions that you think obvious but that the premises don’t actually make obvious. Carrier’s big issue is that pretty much all of his premises are true in a sense, but not in a sense that really works to support his contentions.

So let’s start with the first premise:

1. Moral truth must be based on the truth.

Well, anyone who thinks that moral propositions don’t actually have truth values will immediately disagree with this, but I suppose Carrier can reply that such people don’t believe in moral truth at all, and so the premise isn’t addressed to them. We only need to be concerned with those who think that moral propositions have objective truth values. But even limited to that, we have some issues. Sure, it’s obvious that true statements must be based on true premises, but Carrier wants to push it further, as he says later:

In both cases (irrational and mal-informed decisions) a decision was made in violation of our first premise (“Moral truth must be based on the truth”) generalized to all domains (“Prudential truth must be based on the truth”).

So it looks like Carrier wants to base morality not just on moral facts, but also on non-moral facts, which is how he makes the link to science. The uncontroversial interpretation of this is that once we know what it means to be moral, non-moral facts may come into play in determining which actions are moral. So, for Utilitarianism, the overall happiness — which may depend on things like biological facts — will matter, which it won’t for, say, Kantianism or Stoicism. But for all moral systems, what it is biologically possible for us to do will matter, because of the “Ought implies can” principle: we cannot make a normative statement demanding that someone do something it is impossible for them to do. So, that non-moral facts may be relevant in determining what action to take is uncontroversial. But that doesn’t get Carrier very far at all. And to take it any further would make it a potentially very controversial statement, as we will see.

There’s also an issue where we can ask if moral truth does, in fact, depend on what is objectively and independently true, or if instead it depends on what the individual moral agent can reasonably know. We can see this best if we move from asking questions like “Is slavery moral?” and instead ask questions like “Did that person act immorally in that instance?”. If someone is trying to act morally and reasonably believes that the action they are taking is moral, then can we say that their action is really immoral? Especially if that action is based on the best information that they can reasonably be expected to have?

We can make a reasonable — if still somewhat controversial — comment about mistaken moral facts, by insisting that we can’t call their action moral — because it is based on incorrect moral facts — but can’t call it immoral either — because the intent is moral — and so can call it amoral: right moral intention, wrong moral facts, overall amoral. But of course we’d still consider that sort of amorality better than an amorality based on a complete lack of concern for morality. That doesn’t seem to work as well, though, when the facts are non-moral. Imagine a case where someone has agreed to turn the heat on in someone’s house to keep the pipes from freezing. Unbeknownst to them, a serial killer has arranged their latest victim so that when the heat is turned on they will be suffocated. The person goes in, turns the heat on … and kills that person. Did they do something morally wrong? Amoral? Or morally right?

The person was acting on what we would have to consider to be a moral obligation: to fulfill their promise. Based on the best information we could expect them to have, there were no other factors that they needed to consider. If they had known that the person would be killed by that and ignored it, then at best their action would be considered amoral — unconcerned with morality — and likely we would consider it immoral. And if they had known that a serial killer was committing murders like that, it might be reasonable to claim that they were morally obligated to check. But in this case none of that seems reasonable.

The reason for this is that we run, again, into “Ought implies can”. We can only act on the beliefs that we actually have, not on the beliefs that we ought to have. For moral facts, it is reasonable to say that we can’t be said to be acting morally if we are acting on moral falsehoods — even if it would be unreasonable to believe that we could know otherwise — but for non-moral facts that doesn’t seem to be the case. And it seems to me that Carrier, in order to make the link to science that he wants to make, needs to make non-moral facts more important than that, and more determinate, arguing that if we are wrong about the non-moral facts about what, say, satisfies us then we can’t be acting morally.

To be fair, Carrier does see the issue:

(There is the question at this point of impossible knowledge or knowledge one cannot reasonably have obtained, but when we accept that all imperatives, even moral imperatives, are situational, this problem dissolves–I explain what it dissolves into in my chapter on Moral Facts).

The problem is that accepting that moral imperatives are situational doesn’t seem to solve the problem. I accepted that they were situational above, and still noted that we have a bit of a controversy here over how to assess actions that are based on mal-informed non-moral facts. I don’t have the work that contains that chapter, but it seems to me that Carrier either has to accept that the actions in the case I described are still moral, or that they are not moral (either amoral or immoral). If he accepts the latter, then he risks violating “Ought implies can”. If he accepts the former, then I’m not sure the non-moral facts can ever rise to the level he needs them to in order to make the link to science.

So, in conclusion, this gets us as far as the rather trivial and obvious statement that morally true statements must, in fact, be true. Uncontroversial, but hardly something that we can use to do any hard work later.

The second premise:

2. The moral is that which you ought to do above all else.

At first blush, this does seem obviously true. However, Carrier makes this definitional, in short saying that the moral is defined by whatever it is that you ought to do most. But the sense in which I take this is that if one is to be a moral agent, what one ought to do most is that which is moral. Carrier wants to start from some kind of objective interpretation of what we ought to do above all else and then say that achieving that is what it means to be moral. I would argue, on the other hand, that the obviously true way to interpret this premise is that as a moral agent what I ought to do above all else is defined by what is moral. So, in terms of practice, what Carrier wants to do here is figure out what we ought to do above all else and then relate morality to that, while what I want to do is figure out what it means to be moral and then insist that, as moral agents, that is what we ought to do above all else. As such, Carrier’s definition isn’t, in fact, the tautology he claims it is:

It is a tautology (as all definitions are), but is valuable and meaningful precisely because of that. If you mean by “moral” something other than this, then you are wasting everyone’s time talking about nothing of any importance. Because if you mean something else by “moral,” I will have this other thing, this thing which you really ought to do above all else, which means above your thing, too, whatever it is. So I will have something even more imperative than yours, and if mine is factually true (it really is that which you ought to do above all else), yours cannot be (it cannot be that which we ought to do…because I can prove we ought to do something else instead).

The tautology is broken by my pointing out that my interpretation is not, in fact, conceptually false. If I insist that, as moral agents, what we ought to do above all else is be moral, then, as I have already pointed out, Carrier’s definition becomes viciously circular. Since my interpretation is a conceptually valid — if possibly incorrect — idea of what we ought to do above all else, Carrier’s definition here falters. Thus, he’ll need an argument to establish that we can move from whatever it is that he thinks we most ought to do above all else to that being what it means to be moral. It is, in fact, conceptually possible for it to turn out that what we most ought to do is not act morally. We couldn’t call ourselves moral for doing that, but it might in fact turn out to be the case.

In short, Carrier doesn’t seem to be using this premise in the way that most people would use it when they consider it self-evidently true. So, then, I clearly disagree with this premise as Carrier interprets it.

Third premise:

3. All imperatives (all ‘ought’ statements) are hypothetical imperatives.

I have read — and should comment on at some point — the view of Foot’s that Carrier harps on here, and admit that I haven’t dug into Kant to the level I probably should to answer this question. However, carrying on from the above premise, my argument here is that if Carrier and Foot mean a hypothetical in the sense of “If you want to be a surgeon, do X”, and more relevantly “If you want to be moral, do X”, then I agree. However, I would then claim that the work in determining what is moral — the work that most moral theories are trying to do — is filling in what that X is. For Utilitarians, it translates to “If you want to be moral, maximize utility”. For Virtue Theorists, it translates to “If you want to be moral, act virtuously”. For Kantians, it might be “If you want to be moral, act according to duty”. How each of these shakes out in the details depends on the moral theory, but the important thing to note is that if there is a hypothetical structure here, it’s only in the first part, the part that the moral theories aren’t attempting to address. And this is true even if we take Carrier’s formulation, because it translates to “If you want to be moral, do whatever it is that you most ought to do above all else”. I don’t see room to insert hypothetical imperatives into the second part of the conditional, and in clashes with other moral theories that’s where Foot’s “hypothetical imperatives” would have to be to matter. I even suspect that Kant’s categorical imperatives can fit into that second part. So, at best, Carrier and Foot might get to the point of saying that all imperatives can only be applied to a specific context, concept, or domain. This is fine, as far as it goes, but isn’t as strong as Carrier portrays it. It’s certainly not a “fourth way” in philosophy, as Carrier insists it is.

Fourth premise:

4. All human decisions are made in the hopes of being as satisfied with one’s life as one can be in the circumstances they find themselves in.

Let’s grant this, despite its being absolutely meaningless without an idea of what “satisfied” actually means (if it means pure physical pleasure, then it is clearly false, for example). Let’s take it the way Carrier seems to me to mean it, in a very general sense of having a sense of satisfaction with one’s life. The problem we run into here is that there are two ways to have that sense of satisfaction. The first is by actually satisfying our desires. The second is by conditioning our desires so that we only have desires that are satisfied. So let’s imagine that someone has an unfulfilled desire to play baseball in the Major Leagues. As the desire is unfulfilled, they wouldn’t be ideally satisfied with their life. They can thus increase their satisfaction by making it to the Major Leagues. However, they can also increase their satisfaction by giving up the desire to make it to the Major Leagues. Carrier ignores the second option for the most part. Sure, he will later argue loosely for something like it when he talks about rationality, irrationality, and being mal-informed, but my counter would be that he simply doesn’t consider how strong our ability to condition our satisfaction really is.

Because of this, we are in the same situation here as we were above with “ought to do above all else”: with completely reversed interpretations. Carrier will use this to argue that we should be satisfying our desires, and that doing so is what is moral. I counter that what we should be doing is figuring out what is moral, and then conditioning our desires so that we are satisfied when we act morally. The evidence for my position is that someone can indeed desire to do things that we would generally consider to be immoral, and so couldn’t be satisfied with their life unless they could do them, as we saw with the Belkar example last time. To make his case, Carrier would be forced to argue that Belkar’s desires are themselves self-defeating, or that they prevent Belkar from achieving something he really does want more. However, Belkar can make his entire system non-contradictory by relating everything to wanting to, say, stay alive longer to be able to kill more, and so on. Because Carrier’s argument defines morality only by what the agent does or rationally would want, and not as an independent concept, he has no way to say that Belkar’s desires or actions are immoral.

This, I think, highlights the main issue with Carrier’s presentation here: he defines morality — or, at least, acting morally — as having only instrumental value, while I consider morality to have intrinsic value. Starting from the second premise, he would claim that morality is only valuable if it leads you to achieve what you ought most to do, which here he defines as achieving a satisfying life. I, on the other hand, claim that morality has value in and of itself, and not merely as a means to some other, non-moral end. And it is here, then, that Carrier and I — along with a number of other moral theories — irreconcilably part ways.

Fifth premise:

5. What will maximize the satisfaction of any human being in any particular set of circumstances is an empirical fact that science can discover.

This is true if we take satisfaction as merely being what the person does want, in the first sense noted above. In the second sense, where we are in fact looking at what the person ought to be satisfied with and ought to condition themselves to want given what it means to be moral, this is clearly false. Or at least, it is false in the sense in which Carrier needs it to be true.

Sixth premise:

6. There are many fundamentals of our biology, neurology, psychology, and environment that are the same for all human beings.

Sure, and this can be relevant to determining the morality of a specific action, given a specific concept of morality. But once the fundamental divide outlined above is recognized, it is irrelevant to the discussion, because none of it will determine a) whether there is an independent concept of morality that we can appeal to in order to determine what we, as moral agents, ought to value most of all, and b) whether morality has intrinsic value or only instrumental value.

In conclusion:

Hence I do not believe anyone can make a valid argument against it.

Here’s the short form of my valid argument: morality cannot be something that has only instrumental value if we are to be proper moral agents. Morality, as a concept, includes having intrinsic value. We may choose not to value morality, but that does not mean that morality, in and of itself, lacks intrinsic value that we could, and as moral agents ought to, value. Thus, instead of using morality merely as a means to achieve life satisfaction, we ought to condition ourselves to be satisfied with whatever acting morally gives us. At a minimum, Carrier’s premises are not necessarily true and, even if they were, his conclusion doesn’t follow from them. Can science deal with this? Maybe — I’m skeptical — but it is certainly the case that his views of how to use science here don’t work if I’m right. And I definitely think I’m right.



3 Responses to “Answering Carrier’s Premises”

  1. Andrew Says:

    Based just on the above:

    One of the most universal human attributes is selfishness. In fact, one could over-simplify moral instruction as “teaching people how not to be selfish”.

    By your presentation of Carrier’s argument above, selfishness epitomises morality, perhaps excepting only the situations where it can be demonstrated that selfish behaviour actually produces negative survival outcomes.

    But in practice, a degree of selfishness generally produces positive survival outcomes for the selfish individual / group, usually at the expense of some other individual or group. One can argue that a more altruistic approach benefits everyone, but that assumes that it’s somehow a net win for me if I benefit less in order that some nebulous “everyone” benefits more than they otherwise would, which comes awfully close to begging the question. Generally, as long as the alpha group is large enough that they do not end up lonely, being on top is good for the members of the group, even at the expense of others.

    In practice, there is typically a wide gulf between “what I desire” and “what is moral”, and “rational, enlightened self interest” isn’t sufficient to close it.

    • verbosestoic Says:

      Those who push this line generally DEFINE “what is moral” as “rational, enlightened self-interest”. Carrier is closer to outright saying it than a lot of those who advocate for basing morality on achieving your desires, but ultimately it all collapses to that.

  2. Andrew Says:

    What I usually see is an attempt to reach something that looks similar to traditional morality without transcendence. They do not start with “rational, enlightened self-interest” and see where that leads them, because it can lead to all sorts of places that will not be accepted by most. Rather, they attempt to show that a reasonable facsimile of commonly accepted morality can be justified by “rational, enlightened self-interest”.

    It’s a bit like “going wherever the path leads us” while stopping every five minutes to check your GPS to make sure you’re choosing a particular path. I call it a form of begging the question because you only end up with a traditional morality because at each decision point you filter the options so as to aim for that particular destination, while excluding other options that on the face of it are equally rational and enlightened (but end up somewhere quite different).

    As an example of this form of biased thinking, consider the principle of making rules without knowing where you’ll end up in the structure. Sounds nice, right? You don’t know whether you’ll be a rich human or a cow, so structure the rules so you’ll be happy either way.

    But based on all evidence available to me, there’s exactly zero chance that I’ll ever be a cow. Nor will my children, nor my friends, nor anyone who I might consider a peer nor seek patronage from. Considering how I might interact with the world as a cow is a purely hypothetical exercise, and one which I should feel free to sacrifice to my self-interest in the case of a conflict.

    So let’s ignore the cows, and instead consider starving kids in Africa. But I’m no more likely to become a starving child in Africa than a cow. And, broadly speaking, it’s more to my enlightened self-interest to make sure my kids can’t either than it is to sacrifice benefits for my kids for the sake of a people group with whom I have negligible interaction.

    Enlightened self-interest only leads to altruism if we a priori assume that altruism is the best path to self-interest, and that seems to pull in a mass of assumptions that only work if we already assume the truth of the conclusion.
