Harris replies to Blackford via Coyne …



I’m going to go through his responses, but before I do, let me note that many of my replies will make the same point: Harris keeps missing the key underlying thread of all the criticisms, which is that he hasn’t defined his own view, that everything moral depends on well-being, well enough for anyone to understand how it is actually supposed to work.



  • How do we actually measure well being?; for that is what we must do to make moral judgments.  The metric for well being of a person, or an animal, must differ from that of groups or societies, yet they’re to be put on a single scale. In some cases, of course, it’s easy; in others, seemingly impossible.
“This is simply not a problem for my thesis (recall my “answers in practice vs. answers in principle” argument). There is a difference between how we verify the truth of a proposition and what makes a proposition true. How many breaths did I take last Tuesday? I don’t know, and there is no way to find out. But there is a correct, numerical answer to this question (and you can bet the farm that it falls between 5 and 5 million).”
The issue here is that I don’t think Coyne was really commenting on how hard it is to figure out how well-being shakes out in every case.  I think he was really getting at — and this is likely true of most of the points — how hard it is to imagine, even in theory, how to balance all of those states of well-being.  What promotes a person’s well-being, and what it even means to promote it, is likely to be totally different from what promotes the well-being of a society, or a group within a society, or an animal.  How do we balance all of these when the scientific terms and units will seemingly differ completely?  Harris may be able to eliminate some of these concerns from consideration, but he would have to say so explicitly … and he never does.
To put it another way, we all know that breaths are measured in number of breaths.  The practical issue in his analogy is one of information, but in the analogy we know exactly what information we need and exactly how to go about getting it.  Coyne’s complaint is that he has no idea how to reconcile all of these things that he thinks are important to Harris’ theory into one unified set of units, so that we can make proper comparisons between the different types of well-being; he is clearly asking what information we need and how we’ll get it.  It simply isn’t an issue of “principle vs. practice” as Harris asserts; the underlying concern is completely different.
Harris continues to not actually reply to criticisms in the next section:
  • Given that, how do we trade off different types of well-being? How do you determine, for example, whether torture is moral? In some case, as Harris pointed out in The End of Faith, torture may save innumerable lives, but there’s a societal effect in sanctioning it.  How do you weigh these?  How do you determine whether the well-being of animals outweighs the well-being we experience when eating meat?
“These are all interesting questions. Some might admit of clear answers, while others might be impossible to resolve. But this is not my problem. The case I make in the book is that morality entirely depends on the existence of conscious minds; minds are natural phenomena; and, therefore, moral truths exist (and can be determined by science in principle, if not always in practice). The fact that we can easily come up with questions that are hard or impossible to answer does not challenge my thesis.”
Look closely at what he claims his point is:  morality depends on the existence of conscious minds, minds are natural phenomena, and therefore moral truths exist.  While this is quite shaky (I actually disagree with the first premise, and Harris makes a poor argument for it in the book), note what you don’t see in it.  You don’t see any relation to “well-being”.  When I did my long discussion of Harris’ replies to Sean Carroll, I noted this problem:
“Harris seems to be trying to derive his entire view from a relation to consciousness.  That should mean that he takes a well-defined and fairly strong view of what it means for values to relate to consciousness.  But note that here he uses all sorts of vague words around this.  He starts from “consciousness is the only intelligible domain of value” which says nothing about what that means but seems nicely strong, and then asks what would happen if someone said “No, morality is not related to consciousness at all”.  And this just dismisses that as not being of interest to anyone, as if he doesn’t even need to argue for that.  What gives?  Does he simply mean that, say, any moral code has to be something that conscious beings can think and understand?  Well, duh, but so what?  That doesn’t mean that the properties of consciousness itself  have any relevance to what is or isn’t moral, or that we should start from consciousness as opposed to starting from, say, the definition of morality to figure out what is or isn’t moral.  In short, his last sentence isn’t supported by his claim, unless he can actually argue for and explain in what sense morals have to relate to consciousness.  Which he never does.  So I’m going to go with “Conscious beings have to be able to understand it” and then claim that that can relate to no actual consciousness at all of anything in the world, but only of rules and reason.  Which will, of course, demolish the idea that somehow “well-being” is the determining factor, but if he doesn’t like it he’s free to define his terms and argue for them.”


Which he doesn’t do here.  He tosses out “consciousness” and expects it to answer questions about well-being, but he doesn’t show how to get from consciousness to well-being, and there’s no necessary reason to think you can.  And so it is a complete non-answer to respond to issues people raise about the specifics of well-being by claiming that he defined it all by consciousness, when he’s still missing the step that gets him to well-being.


“There may be many equivalent peaks on the moral landscape: on some everyone might favor their friends and family to a degree that is compatible with universal well-being; perhaps on others everyone is truly impartial. No doubt there will be other regions lower down on the ML where people are highly biased towards their nearest and dearest, at a significant cost to everyone. Perhaps there are also regions where everyone is truly impartial, but their impartiality functions in concert with other factors so as to degrade the well-being of everyone. Every possible weighting of us-vs.-them can be represented in this space, along with all other relevant variables — and each will have consequences in terms of the well-being of everyone involved. Yes, there will be worlds in which some very selfish people make out rather well while causing great misery to others. And yes, it could be impossible to convince these people that life would be better if they behaved differently. But so what? These won’t be peaks on the landscape, and it will still be true to say that movement upwards toward a peak will be constrained by the laws of nature.”


Okay, quick question:  why won’t they be peaks on the landscape?  And another: what does it actually mean to say that movement towards a “peak” will be constrained by the laws of nature?

The objection he’s flailing at here is basically the standard question asked about Utilitarianism (which Harris is shamelessly stealing from): “Is it really immoral or wrong to put the concerns of those closest to me over the global welfare?”  Harris doesn’t answer that here.  Instead, he waxes eloquent about there being many equivalent peaks, and then about some arrangements that aren’t peaks.  But he’s still casting it all in light of his insistence that global welfare and well-being are the ultimate consideration.  Unfortunately, that’s exactly what’s being questioned here: would it really be bad to consider your own family ahead of someone else, even if doing so would objectively produce less well-being at a global level?  Harris’s reply is a long-winded version of “Yes”, but he doesn’t justify it.  At all.  And that justification is really what’s being demanded here.


“Blackford (along with everyone else) has gotten bogged down in the concepts of “should” and “ought.” We simply don’t have to think about morality in these terms. Yes, we feel certain moral imperatives — I can be overcome by remorse, for instance, and feel that I “should” apologize for something that I’ve done. But this is just a folk-psychological way of talking about my experience in relationship to others. What if my apologizing in this instance would create an immensity of suffering for everyone on earth? Well, then, I “shouldn’t” do it. And if I still felt a nagging sense that I still should apologize, I “should” ignore this very feeling. Whether we feel that we should do something, or can convince others that they should do it, is all but irrelevant to the question of whether we will be moving up or down on the ML (modulo the psychological cost of living with nagging feelings of “should”).”

Note that Harris sticks strictly to “should” after introducing both “should” and “ought” at the beginning.  This unfortunately allows an equivocation to sneak in.  “Ought”, by philosophical definition, is not “… a folk-psychological way of talking about my experience in relationship to others.”  It, in fact, denotes the actual moral responsibility one has, as Blackford absolutely knows (and Harris doesn’t seem to understand).  Thus, on Harris’ view, one ought not to apologize if doing so would cause more suffering, and anyone who thought they ought to apologize would be wrong.  That’s the statement he needs to defend and prove:  that these people are, in fact, wrong.  For his point to work, he must take “ought” in precisely the same sense that Blackford does, and he surely does.  The folk-psychological feeling of “should”, then, is not the actual problem he needs to address, and it boggles my mind that he has missed this so completely.
“Humans draw strong moral distinctions between different situations that have seemingly identical consequences (e.g., the trolley problem and the organ-donation problem).  But perhaps Harris would respond that our morality is simply misguided here.
I’ve discussed this a fair amount in my public talks. Yes, it is possible for our moral intuitions to be misguided — and we need to learn to ignore certain framing effects. In this case, however, it is also possible that we are responding to the fact that the situations are not actually the same. If pushing a person is just BOUND to have a much bigger effect on us than flipping a switch–well, then, we have to take this effect into account. Needless to say, we could concoct a trolley problem that made this nonequivalence undeniable: just imagine a version in which the man you were being asked to push had the opportunity to plead for his life and show you pictures of his wife and children…”
Here, though, is where Harris reveals the split in his work.  He is not using science at all to determine the underlying moral principle of well-being (he actually admits this in the book); that he’s doing philosophically.  It’s only at the level of determining what counts as well-being that he allows science a role.  So … that whole “morality is natural” thing doesn’t seem all that important now, does it?  After all, if it were, the trolley cases wouldn’t just be cases where our moral intuitions are wrong, because they’d relate to conscious beings and their actual conscious states with respect to morality.  So he’s not looking at morality “naturally”, or at least not as a natural product of human minds.  He’s not deriving an “is” for his underlying moral principle.  He’s starting straight from an ought, and putting science to the side.  And that’s not an interesting use of science in morality.
“Again, this totally misses the point of my argument. And the same annihilating claim could be made about any branch of science. There are no scientific values that command assent in the way that Blackford worries morality should. Why value human well-being? Well, why value logic, or evidence, or understanding the universe? Some people don’t, and there’s no talking to them. The fact that some people cannot be reached on the subject of physics — or use the term “physics” in ways that we cannot sanction — says absolutely nothing about the limitations of physics or about the nature of physical truth. Why should differences of opinion hold any more weight on the subject of good and evil?”
See, here’s why:  if I have a goal of understanding the universe or, in fact, of doing science, it can be objectively shown that the best way to achieve that goal is to use logic and evidence.  If I don’t have either of those goals, then I won’t care about the entire field, and so the question of my doing science or understanding the universe is moot.  In short, I won’t be doing physics, and I won’t care about that.  At all.
Now, in the case of morality all of us (Harris, myself, Coyne, Blackford and many, many others) care about doing morality.  We want to be moral, and we want to do morality the right way.  Harris is saying that the right way to do morality is to care solely about global well-being.  And the reply from all of us is a quite justified “Why?”.  Harris, then, owes us the same proof for that principle that he has for logic, reason and the scientific method when doing science: that it works better.  And to give that proof, he needs to answer “Better at doing what?”.  That means he needs to know the goal and purpose of morality.  And he doesn’t have any of those things.  Mere differences of opinion aren’t the problem; the problem is that when he faces a difference of opinion he has no way to settle it.
As a final aside/shot, many people have been saying that his book is worth reading.  I can’t see how anyone could possibly think so.  The book doesn’t give enough detail on his own theory to make it clear to anyone reading it, or to give them any real sense of how science fits into it and why we should buy it.  If this were submitted as an actual paper, I can’t imagine any outcome other than its being sent back with “Needs work”.  And he spends so much time on his own theory and ideas that he doesn’t give any good overall view of the state of morality, even as he sees it.
If you want a book on the subject that’s worth reading, read Jesse J. Prinz’s “The Emotional Construction of Morals”.  He still has problems, but he has a clear theory and spends a lot of time relating it to directly opposing viewpoints.
Skip “The Moral Landscape”.
