Moral confusion indeed (Part 1) …

Sam Harris has posted a decently long attempt to clarify his position as expressed at TED.  Since I’ve been criticized for not getting it right and for working from a simplified version of it, I thought it’d be worth checking out.  And there are a number of issues in it that Harris simply doesn’t get right.

Link to the full thing is here:

http://www.project-reason.org/newsfeed/item/moral_confusion_in_the_name_of_science3/

I had originally meant to do this in one part (and have it all posted by now) but it’s already long and the last part is likely to be about as long as what I have now.  So here’s part 1, and I’ll try to have the second and last part up sometime after the weekend.

So, let’s start off with a fairly minor point:

——————————-

Some of my critics got off the train before it even left the station, by defining “science” in exceedingly narrow terms. Many think that science is synonymous with mathematical modeling, or with immediate access to experimental data. However, this is to mistake science for a few of its tools. Science simply represents our best effort to understand what is going on in this universe, and the boundary between it and the rest of rational thought cannot always be drawn. There are many tools one must get in hand to think scientifically—ideas about cause and effect, respect for evidence and logical coherence, a dash of curiosity and intellectual honesty, the inclination to make falsifiable predictions, etc.—and many come long before one starts worrying about mathematical models or specific data.

—————————

This is, in fact, a common problem when people with a scientific background or leanings try to get involved in things like this: railing against overly narrow conceptions of science and countering with overly broad ones.  He says that science represents our best effort to understand what’s going on in our universe.  I claim that he’s wrong: it’s philosophy that does that, and there are distinct methods and methodologies separating science from philosophy.  I would also disagree that when I walk about in everyday life, or when I’m sitting here at work trying to figure out why my code isn’t working the way I thought it should, I’m doing science, even though I’m certainly trying to figure things out in all cases.  Science, like it or not, has a particular methodology, and a methodology that is formalized.  Peer review, and having experiments be available for peer review and repetition, are key parts of that.  And that’s what makes science good, makes it superior to everyday reasoning, and makes it superior at studying the world itself — as opposed to concepts — than philosophy is.  Whenever a scientist wants to infringe on some other field and make it science, they almost always do so by taking away from science that which gives it its greatest successes.

(Some have objected in the past that my view would seem to make things like astronomy not sciences.  I don’t really see that as an objection, but would note that they rely on repeatable empirical observations.  Everyday reasoning and philosophy don’t: everyday reasoning is empirical but doesn’t require repetition or validation by anyone else, and philosophy is not limited to the empirical.)

Harris also starts off with a comment on the objective/subjective distinction:

———————–

“There is also much confusion about what it means to speak with scientific “objectivity.” As the philosopher John Searle once pointed out, there are two very different senses of the terms “objective” and “subjective.” The first relates to how we know (i.e. epistemology), the second to what there is to know (i.e. ontology). When we say that we are reasoning or speaking “objectively,” we mean that we are free of obvious bias, open to counter-arguments, cognizant of the relevant facts, etc. There is no impediment to our doing this with regard to subjective (i.e. first-person) facts. It is, for instance, true to say that I am experiencing tinnitus (ringing in my ears) at this moment. This is a subjective fact about me. I am not lying about it. I have been to an otologist and had the associated hearing loss in the upper frequencies in my right ear confirmed. There is simply no question that I can speak about my tinnitus in the spirit of scientific objectivity. And, no doubt, this experience must have some objective (third-person) correlates, like damage to my cochlea.  Many people seem to think that because moral facts relate entirely to our experience (and are, therefore, ontologically “subjective”), all talk of morality must be “subjective” in the epistemological sense (i.e. biased, merely personal, etc.). This is simply untrue.”

————————-

At first, I wasn’t sure why this bothered me, and on re-reading it I was going to pass it by.  But then I read Carroll’s comments and figured it out: the claims here are, in fact, missing the point about why people are upset about his linking morality to something that seems subjective in the first place.

Carroll actually defines what he means when he worries about it not being objective:

—————————-

http://blogs.discovermagazine.com/cosmicvariance/2010/03/24/the-moral-equivalent-of-the-parallel-postulate/

“There are not objective moral truths (where “objective” means “existing independently of human invention”) …”

——————–

I disagree with Carroll about there not being objective moral truths, and disagree with his precise way of talking about “objective”.  However, since Harris spends most of the article talking about Carroll, it would have been a good idea for him to address this idea of “objective”, or at least the idea of “objective” that causes problems for morality.  And the idea is essentially this: we don’t want it to be the case that morality is determined by the individual.  If I do A and believe it moral, and someone else does ~A and believes it moral, it had better be the case — if we’re going to be objectivists — that there is a fact of the matter about that, and that that fact is that at most one of us is right.  Tying morality to well-being or consciousness — which are both subjective and personal — risks making moral judgements personal, in the sense that only I can judge whether or not the action is moral.  Harris clearly doesn’t want to go down that road, but his comments here don’t address the worry at all.  Even relativists allow that there may be an objective fact about morality: that it is relative.  That’s not what the debate is over.  The debate is over whether or not a rule like “Don’t lie” can ever be an objective fact.  Relativists say “No”.  Objectivists — and, I repeat, I am one — say “Yes”.  Both of us think that Harris might have — and probably does have — a problem.  The relativists think so because he’s taking an objectivist stance but seems to be relying on a mechanism that proves them right.  The objectivists think so because he’s taking an objectivist stance but seems to be relying on a mechanism that is subjective in a way that leads to relativism.

You don’t get around that by talking about epistemological objectivity while refusing to actually show how his “well-being” avoids the problems his opponents seem to be worried about.

——————–

I’ve now had these basic objections hurled at me a thousand different ways—from YouTube comments that end by calling me “a Mossad agent” to scarcely more serious efforts by scientists like Sean Carroll which attempt to debunk my reasoning as circular or otherwise based on unwarranted assumptions. Many of my critics piously cite Hume’s is/ought distinction as though it were well known to be the last word on the subject of morality until the end of time.  Indeed, Carroll appears to think that Hume’s lazy analysis of facts and values is so compelling that he elevates it to the status of mathematical truth: 

Attempts to derive ought from is [values from facts] are like attempts to reach an odd number by adding together even numbers. If someone claims that they’ve done it, you don’t have to check their math; you know that they’ve made a mistake.

This is an amazingly wrongheaded response coming from a very smart scientist. I wonder how Carroll would react if I breezily dismissed his physics with a reference to something Robert Oppenheimer once wrote, on the assumption that it was now an immovable object around which all future human thought must flow. Happily, that’s not how physics works. But neither is it how philosophy works. Frankly, it’s not how anything that works, works.

——————–

Wait.  So Harris thinks that the “is/ought” distinction is just a left-over from Hume, and so, what, that he doesn’t even have to address it?  That if someone says “You’re trying to use an is to get an ought” he can just say “And that’s perfectly acceptable, and if you don’t like it you’re just clinging to Hume”?  The is/ought distinction is a major piece of philosophy.  It’s brought up in multiple fields and is, in fact, generally considered reasonable.  Yes, some argue that it isn’t true, but for the most part if you walk into a discussion about morals and simply assume that is can imply ought, at the very least all of the philosophers will want you to have an explanation for how that can work.  If you don’t have one — and Harris doesn’t seem to have one, since he simply asserts that it’s bad philosophy — then be prepared to have them claim that you don’t know what you’re talking about.  It’d be like someone walking into a biology conference with a lovely theory of how things develop that completely ignores evolution, and then, when challenged on that, saying “Well, evolution could be wrong”.  Yeah, maybe it could, and maybe the is/ought distinction is wrong, too, but simply asserting that it is doesn’t in any way help your case.

See, here’s why the is/ought distinction is credible.  Imagine that I’ve built a deck.  And you go and look at it, after it’s done.  Can you tell, just by looking at it, if I meant to build it that way, or if there were places where I made mistakes?  Is the play in the seats deliberate or a reflection of my poor building skills?  You don’t know.  You’d have to ask me.  But that’s going beyond an is to an ought, since I can tell you what it should have been because how it ought to be is judged — in that case — precisely by what I intended it to be.  So, you are no longer appealing to is, but are appealing to ought.

Just as you can’t determine ought from is for my deck, you can’t determine what ought to be considered moral from what is considered moral.  We can be wrong.  Society thought that slavery was okay at some point, and now we all think it immoral.  Which “is” is the right one?  You can’t tell by looking at what we thought, but instead have to reference a real, objective “ought” standard.  So Harris owes us an explanation of where he gets his ought from, or of how he manages to get it from is.  He hasn’t done so, and simply flatly denying the problem is not the way to defend himself on that score.

For your education, here’s Carroll’s actual point in context, where he definitely says more than Harris is addressing:

———————

Harris is doing exactly what Hume warned against, in a move that is at least as old as Plato: he’s noticing that most people are, as a matter of empirical fact, more concerned about the fate of primates than the fate of insects, and taking that as evidence that we ought to be more concerned about them; that it is morally correct to have those feelings. But that’s a non sequitur. After all, not everyone is all that concerned about the happiness and suffering of primates, or even of other human beings; some people take pleasure in torturing them. And even if they didn’t, again, so what? We are simply stating facts about how human beings feel, from which we have no warrant whatsoever to conclude things about how they should feel.

Attempts to derive ought from is are like attempts to reach an odd number by adding together even numbers. If someone claims that they’ve done it, you don’t have to check their math; you know that they’ve made a mistake. Or, to choose a different mathematical analogy, any particular judgment about right and wrong is like Euclid’s parallel postulate in geometry; there is not a unique choice that is compatible with the other axioms, and different choices could in principle give different interesting moral philosophies.

———————–

That makes two times where Harris addresses his opponent’s supposed points rather than his actual ones.

————————-

Carroll appears to be confused about the foundations of human knowledge. For instance, he clearly misunderstands the relationship between scientific truth and scientific consensus. He imagines that scientific consensus signifies the existence of scientific truth (while scientific controversy just means that there is more work to be done). And yet, he takes moral controversy to mean that there is no such thing as moral truth (while moral consensus just means that people are deeply conditioned for certain preferences). This is a double standard that I pointed out in my talk, and it clearly rigs the game against moral truth. The deeper issue, however, is that truth has nothing, in principle, to do with consensus: It is, after all, quite possible for everyone to be wrong, or for one lone person to be right. Consensus is surely a guide to discovering what is going on in the world, but that is all that it is. Its presence or absence in no way constrains what may or may not be true.

———————–

Okay, so my question is: where did he say that?  The closest I can find on a skim is this:

———————

Morality and science operate in very different ways. In science, our judgments are ultimately grounded in data; when it comes to values we have no such recourse. If I believe in the Big Bang model and you believe in the Steady State cosmology, I can point to the successful predictions of the cosmic background radiation, light element nucleosynthesis, evolution of large-scale structure, and so on. Eventually you would either agree or be relegated to crackpot status. But what if I believe that the highest moral good is to be found in the autonomy of the individual, while you believe that the highest good is to maximize the utility of some societal group? What are the data we can point to in order to adjudicate this disagreement? We might use empirical means to measure whether one preference or the other leads to systems that give people more successful lives on some particular scale — but that’s presuming the answer, not deriving it. Who decides what is a successful life? It’s ultimately a personal choice, not an objective truth to be found simply by looking closely at the world. How are we to balance individual rights against the collective good? You can do all the experiments you like and never find an answer to that question.

———————-

This seems pretty reasonable.  Tying it back to the “is/ought” distinction that Harris pointedly ignores, Carroll’s claim can, I think, be summarized as “Science wants is, and thus once we get is we’re done.  Morality wants ought, and so getting is does not mean getting ought.”  Harris’ comments don’t address that.  At all.

This is totally unacceptable in any sort of rational reply.  I shouldn’t have to read Carroll’s post to get Carroll’s objections, or at least a good idea of them (there could still be some misunderstandings).  But it doesn’t seem to me that Carroll is actually saying what Harris thinks he’s saying.  So far, this response is utterly vacuous and misses all the critical, key points that Harris does need to address.  And that doesn’t seem to be the fault of the people he’s replying to.

I’m trying to be kind, but it is frustrating when he seems to be ignoring not only thousands of years of philosophy but even what his opponents are really saying.  And at least the first of those will not get any better as we go.

————————–

Strangely, Carroll also imagines that there is greater consensus about scientific truth than about moral truth.  Taking humanity as a whole, I am quite certain that he is mistaken about this. There is no question that there is a greater consensus that cruelty is generally wrong (a common moral intuition) than that the passage of time varies with velocity (special relativity) or that humans and lobsters share an ancestor (evolution). Needless to say, I’m not inclined to make too much of this consensus, but it is worth noting that scientists like Carroll imagine far more moral diversity than actually exists. While certain people believe some very weird things about morality, principles like the Golden Rule are very well subscribed. If we wanted to ground the epistemology of science on democratic principles, as Carroll suggests we might, the science of morality would have an impressive head start over the science of physics. [1]

——————————————-

The footnote is more interesting:

——————————–

Perhaps Carroll will want to say that scientists agree about science more than ordinary people agree about morality (I’m not even sure this is true). But this is an empty claim, for at least two reasons: 1) it is circular, because anyone who insufficiently agrees with the principles of science as Carroll knows them, won’t count as a scientist in his book (so the definition of “scientist” is question begging). 2) Scientists are an elite group, by definition. “Moral experts” would also constitute an elite group, and the existence of such experts is completely in line with my argument.

————————

So, let’s test Harris’ theory by comparing scientific experts (scientists) with moral experts.  I’ll make the totally reasonable claim that the closest thing we have to moral experts are moral philosophers.  So, let’s look at this reasonably: how much do moral philosophers actually agree about morality?

Well, anyone who has ever taken an introductory class in moral philosophy should be able to answer this: hardly at all.  Just from my limited study, here is a list of moral systems that are contradictory and incompatible with each other and yet are still considered potentially valid: Aristotelian, Stoic, Hedonist, Epicurean, Bentham’s Utilitarianism, Mill’s Utilitarianism, Kantian, Rawlsian, Hobbesian Social Contract, and Evolutionary.  And I know I’m missing some.  And I have an idea for one that isn’t on the list yet.  So how in the world could you ever conclude that moral experts agree more than scientific experts?  The Golden Rule is not a generally accepted principle in moral philosophy, as it is disagreed with and in some cases considered inadequate.  While moral intuitions do play a role and are testable — and thus are things people can agree on — no one says that just because we have an intuition that X is moral it means that X really is moral and that any moral system where X is immoral is wrong.  And vice versa.

I’m not sure why Harris is going on about consensus, since it doesn’t even seem to be Carroll’s point, and even if it were it’s a point better left to the side as something no one really need consider.  But if he’s going to take it seriously, it would be nice if he would seriously examine the actual work on the subject and realize that there really isn’t all that much consensus at the level he wants.  And remember that he himself wants to appeal to moral expertise over everyday moral reasoning, so he’s going to have to address how it is that moral experts can’t agree on all of this.

Okay, so now we get into sections where Harris might actually start to, you know, prove his claims.  I’m really looking forward to this ….

————————

There are many things wrong with this approach. The deepest problem is that it strikes me as patently mistaken about the nature of reality and about what we can reasonably mean by words like “good,” “bad,” “right,” and “wrong.” In fact, I believe that we can know, through reason alone, that consciousness is the only intelligible domain of value. What’s the alternative? Imagine some genius comes forward and says, “I have found a source of value/morality that has absolutely nothing to do with the (actual or potential) experience of conscious beings.” Take a moment to think about what this claim actually means. Here’s the problem: whatever this person has found cannot, by definition, be of interest to anyone (in this life or in any other). Put this thing in a box, and what you have in that box is—again, by definition—the least interesting thing in the universe.

So how much time should we spend worrying about such a transcendent source of value? I think the time I will spend typing this sentence is already far too much. All other notions of value will bear some relationship to the actual or potential experience of conscious beings. So my claim that consciousness is the basis of values does not appear to me to be an arbitrary starting point.”

————-

Annnnnnd … I’m disappointed.

Harris seems to be trying to derive his entire view from a relation to consciousness.  That should mean that he takes a well-defined and fairly strong view of what it means for values to relate to consciousness.  But note that here he uses all sorts of vague words around this.  He starts from “consciousness is the only intelligible domain of value”, which says nothing about what that means but sounds nicely strong, and then asks what would happen if someone said “No, morality is not related to consciousness at all”.  And he just dismisses that as not being of interest to anyone, as if he doesn’t even need to argue for it.  What gives?  Does he simply mean that, say, any moral code has to be something that conscious beings can think and understand?  Well, duh, but so what?  That doesn’t mean that the properties of consciousness itself have any relevance to what is or isn’t moral, or that we should start from consciousness as opposed to starting from, say, the definition of morality to figure out what is or isn’t moral.  In short, his last sentence isn’t supported by his claim, unless he can actually argue for and explain in what sense morals have to relate to consciousness.  Which he never does.  So I’m going to go with “Conscious beings have to be able to understand it”, and then claim that this need relate to no actual consciousness of anything in the world, but only to rules and reason.  Which will, of course, demolish the idea that somehow “well-being” is the determining factor, but if he doesn’t like it he’s free to define his terms and argue for them.

———————-

Now that we have consciousness on the table, my further claim is that wellbeing is what we can intelligibly value—and “morality” (whatever people’s associations with this term happen to be) really relates to the intentions and behaviors that affect the wellbeing of conscious creatures. And, as I pointed out at TED, all the people who claim to have alternative sources of morality (like the Word of God) are, in every case that I am aware of, only concerned about wellbeing anyway: They just happen to believe that the universe functions in such a way as to place the really important changes in conscious experience after death (i.e. in heaven or hell). And those philosophical efforts that seek to put morality in terms of duty, fairness, justice, or some other principle that is not explicitly tied to the wellbeing of conscious creatures—are, nevertheless, parasitic on some notion of wellbeing in the end (I argue this point at greater length in my book. And yes, I’ve read Rawls, Nozick, and Parfit). The doubts that immediately erupt on this point seem to invariably depend on extremely unimaginative ideas about what the term “wellbeing” could mean, altogether, or on mistaken beliefs about what science is.

———————

Well, we’ve still got the vagueness in play: what does he mean by “well-being”?  See, the Stoics could be said to have based their view on “well-being”, since they defined it as the proper goal for humans.  From there, they concluded that it had to be reason itself, and actually defined it so that emotion and actual benefit to the person — and even a happy life for that or any other person — were irrelevant to morality.  This is not a stance that, I think, Harris would buy … but it fits his poorly defined idea of “well-being”.  Aristotle was similar.  Kant derived “duty” from reason.  And so on.  Before Harris can make any claims about how other notions are parasitic on “well-being”, he needs to tell us what “well-being” is.  This is what he should start with and beat us over the head with repeatedly; it is not something that he can leave for his book.

So, would a view like the Stoics be a candidate for being based on his view of “well-being” or not?  If yes, how?  If not, why not?

Defining your terms is the first step, not the last one.

———————–

“Similarly, there are people who claim to be highly concerned about “morality” and “human values,” but when we see that they are more concerned about condom use than they are about child rape (e.g. the Catholic Church), we should feel free to say that they are misusing the term “morality,” or that their values are distorted. As I asked at TED, how have we convinced ourselves that on the subject of morality, all views must count equally?”

————————

Here’s where my being an objectivist — but one who holds that the is/ought distinction is valid — comes into play.  I do not, of course, assert that all views must count equally.  However, I will state that if all you have to offer are your views, and you have not proven that your views are correct, then your view is no better a priori than anyone else’s.  Your view needs to be justified by solid, rational argument, and when someone asks how you decide between cases of conflicting moral views your answer had better not be “Well, some aren’t right”.  Yes, some aren’t right.  Which ones?  And how do you know that the view you think is right is really right?  Why should anyone accept that their view is wrong just because you consider it wrong?

You prove this by defining your terms and working through logical arguments to show that, yes, you’re right and they’re wrong. Harris bobs and weaves, picking out one example, judging it by his own standards, and then saying “Well, see, we can say they’re wrong”.  No, you can’t say they’re wrong unless you can prove it, and to do that you have to have the same moral goals and principles, and if you don’t then, well, be prepared for a long, pointless discussion on it.  This sort of problem is what makes people think relativism is true, not because people don’t agree but because no one can find moral principles that everyone rationally agrees upon.  I disagree with them, but to dismiss it so casually is to disregard, again, thousands of years of moral philosophy.

———————

“Everyone has an intuitive “physics,” but much of our intuitive physics is wrong (with respect to the goal of describing the behavior of matter), and only physicists have a deep understanding of the laws that govern the behavior of matter in our universe. Everyone also has an intuitive “morality,” but much intuitive morality is wrong (with respect to the goal of maximizing personal and collective wellbeing) and only genuine moral experts would have a deep understanding of the causes and conditions of human and animal wellbeing. Yes, we must have a goal to define what counts as “right” or “wrong” in a given domain, but this criterion is equally true in both domains.”

——————–

1) He misrepresents the goal of science, since the goal is, in fact, just to figure out how things are.  In fact, Kant argued rather well that even if it turned out that there really wasn’t matter, we could still do science as is, since it would describe what we experience and could apply happily to that domain. 

2) Moral experts would question Harris’ goal, so he needs to prove the goal first, and then he can appeal to intuitions being wrong to explain certain ideas.

—————————-

So what about people who think that morality has nothing to do with anyone’s wellbeing? I am saying that we need not worry about them—just as we don’t worry about the people who think that their “physics” is synonymous with astrology, or sympathetic magic, or Vedanta. We are free to define “physics” any way we want. Some definitions will be useless, or worse. We are free to define “morality” any way we want. Some definitions will be useless, or worse—and many are so bad that we can know, far in advance of any breakthrough in the sciences of mind, that they have no place in a serious conversation about human values.

————————–

See, you have to prove that well-being is the right goal before you can just dismiss people who don’t agree with your goal.  Harris is going about this backwards: when asked to prove or support his goal, he dismisses his critics out of hand.  Scientists, at least, should do the opposite: dismiss opposing views only when they can prove their own stance.

Would this mean that I can just dismiss Harris, if I wanted to be scientific about morality?

————————-

“One of my critics put the concern this way: “Why should human wellbeing matter to us?” Well, why should logical coherence matter to us? Why should historical veracity matter to us? Why should experimental evidence matter to us? These are profound and profoundly stupid questions. No framework of knowledge can withstand such skepticism, for none is perfectly self-justifying.”

———————

Well, see, for logical coherence, what we have are definitions that describe certain behaviour: they produce true statements if all the premises are true.  You can ask why that matters, and the existence of fuzzy logic, inductive logic, and even abduction are really good examples of what we get when we do.  But, if I want a deductively valid statement, then the laws are designed to fulfill that goal.  Yes, the goal is an axiom, but it’s something that people can either accept or reject.  I’m not sure what he’s getting at with historical veracity; if you’re doing history, you care because you want to make true statements about history.  As for experimental evidence, we care because it is a method for getting at a goal that we, through science, agree is good.  But Harris is wading into an existing debate, that of morality, and simply asserting that “maximizing well-being is the defining goal”.  Many people have disagreed with him, and some have probably agreed with him (whether he knew it or not).  Why should we accept that his goal is the overarching, overwhelming, determining one?

Let me put it this way:  I accept that a good moral code will probably increase the well-being of everyone.  However, I consider that tangential; it is not the case that a good moral code is good because it increases overall well-being, but that a good moral code will have the increase of overall well-being as a side-effect.  What can Harris muster against this claim?  All his evidence will conform, and to attack my point is going to get him into a debate over axioms … unless he can prove his case.  (Or I can prove mine, I suppose.)  Harris, then, needs to prove his case.  But instead he merely dismisses all opposition.

Let’s look at transitivity.  It is, in fact, quite possible to define a mathematical system where = is not transitive, and A = B and B = C does not imply that A = C.  Someone who defined such a system would not be an imbecile; they would simply be using a different mathematical system.  The imbecile in this story would be the person who insists that their system isn’t the right one because it isn’t transitive.  Guess what side Harris is on?

(Note that the person with the intransitive system would be an imbecile if they tried to use their intransitivity in a system that defined = as transitive.  So it goes both ways.)
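For anyone who doubts that a non-transitive “equals” is a perfectly coherent thing to define, a familiar concrete instance is comparing numbers only up to some fixed tolerance.  Here’s a minimal sketch in Python (the particular function name and tolerance are just my illustrative choices):

```python
# "Approximately equal within a tolerance" is reflexive and symmetric,
# but NOT transitive: each neighbouring pair is close enough, yet the
# endpoints are not.
def approx_eq(a, b, tol=0.5):
    return abs(a - b) <= tol

a, b, c = 0.0, 0.4, 0.8

print(approx_eq(a, b))  # True:  |0.0 - 0.4| <= 0.5
print(approx_eq(b, c))  # True:  |0.4 - 0.8| <= 0.5
print(approx_eq(a, c))  # False: |0.0 - 0.8| >  0.5
```

Nobody using such a comparison is an imbecile; they’re just working in a system where the transitivity axiom doesn’t hold, which is exactly the point about choosing different axioms.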
