Would you commit genocide if you thought it was the right thing to do?

Almost a year ago, Adam Lee raised a challenge to theists based on the interaction of God and Abraham over the sacrifice of Isaac:

Going further with this, I have a question for every religious believer, based on the Abraham episode: Do you believe that violence in God’s name is wrong, or do you merely believe he hasn’t personally told you to do violence? If God appeared to you and spoke to you, commanding you to commit a violent act – to murder a child, say – how would you respond?

My reply was to return this challenge:

Before I answer your question, I need you to answer this one for me:

If you truly believed with the certainty you expressed in an earlier post about your principles that it would be moral to kill an innocent being EDIT (in a particular case; it’s not likely to be a general rule) /EDIT, would you do it?

As far as I can recall, I have never seen a credible answer to my question, or even a credible attempt at one.

And this is important, because it seems to me that the whole “empathy-based morality” philosophy is leading at least the Gnu Atheists to answering that question with a resounding “No!”, which I find very, very frightening.

We can, for example, point to Sam Harris’ comments on Divine Command Theory in his debate with William Lane Craig:

Ok, well here we’re being offered—I’m glad he raised the issue of psychopathy—we are being offered a psychopathic and psychotic moral attitude. It’s psychotic because this is completely delusional. There’s no reason to believe that we live in a universe ruled by an invisible monster Yahweh. But it is, it is psychopathic because this is a total detachment from the, from the well-being of human beings. It, this so easily rationalizes the slaughter of children. Ok, just think about the Muslims at this moment who are blowing themselves up, convinced that they are agents of God’s will. There is absolutely nothing that Dr. Craig can s—can say against their behavior, in moral terms, apart from his own faith-based claim that they’re praying to the wrong God. If they had the right God, what they were doing would be good, on Divine Command theory.

Which Chris Hallquist seems to agree with, and refers to in a more recent post:

I almost have a hard time believing Randal is serious here. When he talks about “adherence to a divine command theory of meta-ethics,” what he means is believing that blowing up a bus full of children is right if that’s what God told you to do. That may not be explicitly listed in the Psychopathy Checklist, but neither are things like actually blowing up a bus full of children. And being willing to approve of such an act just because you think God approves certainly sounds like something that would require a shocking degree of callousness and lack of empathy.

Yet as Harris says in the debate, “this to me is the true horror of religion. It allows perfectly decent and sane people to believe by the billions, what only lunatics could believe on their own.” The horror here is in the fact that there may be people with a perfectly normal helping of empathy, who would normally never think of hurting a child, but who would approve of blowing up a bus full of children if they thought God wanted it.

This quote isn’t a smoking gun, but it seems to very strongly imply that he thinks that hurting children is wrong in and of itself and that empathy should stop you from doing it.

And then there’s Jen McCreight from a few months back:

The reason this statement is so repugnant to liberals is that we base our system of morals on minimizing harm. Oddly I saw no blogs explaining this, probably because Warren’s source of morality isn’t exactly a secret. But I think it’s important to emphasize how repugnant it is to base your system of ethics on some random old book instead of the well being of others. Punching someone in the face causes harm; gay sex does not.

Putting aside that his morality is based on some random old book just like my morality is based on the random old books of Kant and Seneca, she sets up the idea that her morality is based on minimizing harm, and finds his view, relating gay sex to punching someone or committing adultery, repugnant because it isn’t based on the “minimize harm” argument but on a completely different moral base. And the problem here is one that I’ve talked about before: minimizing harm can lead to some very repugnant decisions. It is McCreight’s comments that, to me, highlight just how important the questions “Would you commit genocide if you thought it was the morally right thing to do?” and “Would you kill children if you thought it was the morally right thing to do?” are, because if there are cases where the “minimize harm” morality would demand things that most people would find at least very, very difficult, then I really want to know whether they will go with their morality or their empathy.

So, that’s my challenge to, well, everyone who has any kind of defined moral system: Would you commit genocide if your moral code said that was the right thing to do? I don’t want to hear answers like “But my moral code wouldn’t say that!” because those are the equivalent of saying “But God would never ask me to do that!”. I’m interested in what you would do if your empathy and morality clashed, not whether you think that’s actually possible.

And the world of popular culture has given us a plethora of potential examples:

In the Marvel Comics miniseries Secret Wars II, the Beyonder has come to Earth. In his first contact with Earth, he kidnapped groups of heroes and villains and pitted them against each other in a war to examine Good and Evil. On coming to Earth, he clashes with various groups, kills off the New Mutants, and gives Phoenix incredible power to see if she’d use it to destroy the universe (and was disappointed when she didn’t). At the end of the series, he decides that he really, really wants to try being human and creates a machine to do it. Using it to become truly human, he discovers that enemies like Mephisto won’t let him be, so he recreates the machine so that he would be human but would still keep his power. It is during this transformation — and while he is still a baby — that the heroes come across him. This is their only chance to stop the Beyonder and keep him from doing what he has done in the past, but it would involve killing a baby. The heroes have problems with this … but the Molecule Man does it, believing it the right thing to do because it would minimize harm, as it trades one life — even that of a baby — for the many that might suffer or die at the hands of the rather unstable Beyonder. Was he right to do so? (This is similar to the “Would you kill Adolf Hitler as a baby to prevent the Holocaust?” question, which Magneto alluded to when Phoenix was pondering killing everyone in the universe to stop the Beyonder.)

In the Wing Commander game “Heart of the Tiger”, the Terrans (read: humans) are fighting the Kilrathi, and are losing. The Kilrathi exterminate and enslave their conquered populations. Both Admiral Tolwyn and James Taggart invent devices that will destroy the Kilrathi homeworld, and Taggart’s device — the Temblor Bomb — is deployed against it and destroys the planet, the royal family, and a host of innocents. Was it right to drop that bomb? (This is similar to Hiroshima and Nagasaki, except that in this case the humans are clearly on the ropes and are facing slaughter themselves.)

In “Angel”, Jasmine brings peace to the world, at the cost of people losing a significant but not overwhelming part of their free will and her having to kill a small number of people in order to feed. Angel stops her. Was he right to do so?

So, there’s really no excuse for not being able to answer this really important question, summarized as: would you do something that you found personally heinous or disturbing if you thought it was the morally right thing to do?

17 Responses to “Would you commit genocide if you thought it was the right thing to do?”

  1. Laurence Says:

    I don’t think it would be possible for me to commit genocide unless there were far worse consequences if I did not commit genocide. And even then, I wouldn’t have done the morally right thing, but the thing that was not morally worse. I would also probably be mentally scarred by the event.

  2. Héctor Muñoz Says:

    They are basically claiming that their moral system is free of conflicts and that other systems are inferior and “bad” because they lead to conflict.

    • verbosestoic Says:

      As usual, I’m either more or less charitable than that; I believe that they tend to make their decisions based on personal feelings — particularly empathy — that they paper over with the “minimize harm” rationale, not realizing that the two can come apart in certain circumstances.

  3. hf Says:

    I’ll give a fuller reply, but first: why would you find it “very, very frightening” that Chris seems unlikely to kill a baby? Most people can’t. Many soldiers, even with their special training, fail to aim properly when they first try to kill another human being. You can argue all you want that in principle morality could demand killing babies. But if you claim your philosophy better prepares you for that situation, I’ll laugh at your armchair-toughness and perhaps ask if it prepares you for situations you’ll actually face.

    • verbosestoic Says:

      You DO realize that that phrase was used as a reply to Adam, not to Chris, right?

      Anyway, the issue is not over capability here, but over what someone OUGHT to do. I find it very, very frightening that these people would say that if they found an action — like killing a baby — personally repugnant due to empathy, they wouldn’t do it even if they thought it was the morally right thing to do. To me, morality trumps personal discomfort, distaste or, as I said in the post linked in my comment reply to Laurence, even your personal sanity. That someone might be actually unable to do what is morally right would be an understandable lapse, but a lapse nonetheless, and my claim is that I think they would argue that it is not a lapse. Either that, or they don’t get the moral view that people take in replying to things like “The Abraham Test”, and are thus calling people immoral not because they are willing to perform actions that are necessarily heinously immoral, but because those people’s morality is different from theirs.

      • hf Says:

        Adam Lee is also human, and the same statistical reasoning applies to him. But on to the deeper flaw in your reasoning:

        You seem to think we should know for certain that if our morality proves (eating babies is mandatory), then eating babies really is mandatory. That sounds like exactly the conclusion we should not be able to prove, according to a physicalist or functionalist view, if we turn out to have a consistent morality that does not really demand we eat babies. Löb’s Theorem says that a self-consistent logical system cannot look at itself in a symbolic, causal representation, and recognize itself therein – not to the extent of knowing for sure that an encoded proof of X implies X. (This means in particular that it can never prove there exists no proof for X, unless it’s lying.) If we have a consistent system that includes morality as a part, it can’t simultaneously:

        *recognize that some part of itself definitely controls what the whole system says about statements in a given class, and

        *recognize that whenever that part of itself proves such a statement X, that means X definitely holds, and

        *fail to prove some statement in the given class.

        This seems like an odd result. It means that if a formal system of arithmetic (like the second-order Peano axioms) controls what we accept as proven for the natural numbers, then we can’t both know that for sure and trust the system absolutely. But then, we are talking about a particular formalization of arithmetic by flawed humans. If it told us that 1=3, we’d doubt the axioms rather than accepting the conclusion. So it seems quite possible that we only trust arithmetic on the subject of statements we’ve already proven (whether inside arithmetic or by other means), and we don’t give it a blanket endorsement for any claim it might prove in the future. We don’t know what the latter might entail. And this fact is consistent with the claim that, in practice, the axioms in question determine which statements about numbers we’ll accept.
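
        For the record, the usual formal statement of the theorem I’m invoking is a standard one (with ⊢ meaning “provable in the system” and □P abbreviating the arithmetized claim “P is provable”):

        If ⊢ □P → P, then ⊢ P.

        Gödel’s second incompleteness theorem is the special case P = ⊥: a consistent system cannot prove ¬□⊥ (its own consistency), since Löb’s Theorem would then let it prove ⊥ itself.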

        So. You can’t reasonably ask us to be certain that if our moral axioms tell us we should kill babies, we should kill babies. But I think Craig has put himself in a different position. He claims to already believe it was moral to kill babies when (not ‘if’) his fictional God commanded it. He claims that he already knows the essence of morality, that this essence sometimes wears a beard and dies for our sins, and that these claims give Craig a magical Löb-dodging advantage over people with a common-sense physicalist or functionalist view. I don’t think he does believe that. Even Sam Harris in the full quote (which you should really include if you plan to argue against his view) says that he does not think Craig is psychopathic, despite the psychopathic view Craig purports to defend. And in fact (according to a defender!) Craig does not seem to consider it a real possibility that his God commanded the murder of infants in the present, i.e. the real world, despite the lack of any obvious line between this and the fictional genocide he claims to believe in – except that one happened and the other did not.

        That said, Harris and possibly Adam Lee (as opposed to Chris Hallquist, who works with the guy linked above) may have made similar errors in describing morality.

      • verbosestoic Says:

        Okay, so let me try to shorten your answer down to something that I think might be relevant so that you can correct me if I’m wrong: are you arguing here that if someone was looking at their moral system and it came up with one of those answers, they should doubt that that was the real answer?

        This is problematic if you look at the examples I listed, where you need to give an answer and the standards as applied to the system seem fairly clear. They’d have to abandon a “well-being” view, likely, to hold that, and so questioning the axioms would mean tossing away the fundamentals of the system, which is fair but hardly a real way out of the system. Especially since I can quite reasonably demand to know what standard they’re using to overturn their own standard, and how they know that one is correct. For example, if they are using empathy how do they know that empathy is not what is at fault here?

        And that’s what I’m trying to get at here. If their moral reasoning says “Do X” and their empathy says “Don’t do X”, are they going to go with their moral reasoning or with their empathy? And thus the question also applies to you: Would you commit genocide or kill the baby if your morality said “Yes” but your empathy/sensibilities said “No”, and would you find that a failing or a correct decision?

      • hf Says:

        You don’t need to give an answer. Oh, wait, am I supposed to take your specific examples seriously? In that case, yes, we should be able to give some type of answer. But when you start from false premises (which may include humanly impossible states of knowledge), reaching a false conclusion is not interesting.

        In real situations, I would in fact say that when empathy says “No” you should take that as a big warning sign. This doesn’t strictly map to the situation of the axioms telling us that 2+2=5, but our feelings nevertheless seem more trustworthy than formal systems of ethics in the following sense. A principle like the one about well-being that I thought Harris proposed – though my first Google results didn’t show it, and I don’t feel like looking further right now – this could serve as a good guide for humans, precisely because we seem likely to fill in all sorts of unstated details through the urgings of our emotions. Using the explicit principle to bind a hostile genie or an Artificial Intelligence would likely kill us all. And a similar claim holds for all formal ethics.

        Would you agree with that last part as written? If not, please see this post on Hidden Complexity of Wishes, or the sequel Value is Fragile before responding.

      • hf Says:

        (I’ll assume my main response went to moderation.)

        Perhaps we’re talking about different subjects. I assumed we were starting from the question of whether Craig’s very specific deity could order human beings to do its baby-and-mother-slaughtering for it, and if so whether we can reasonably reject out of hand the claim that it gave orders to a random baby-killer today.

        Now one could try to argue that since human brains seem computable, Craig would need to have a computable way of recognizing God’s commands. If so, it would seem unreasonable to ask him to spell out what this method entails (by the same theorem linked above). But this means Craig has no magical advantage.

        Perhaps one might argue that he still has a philosophical advantage in defining morality, whether or not the definition helps us? But we’ve established that his deity seems as evil in its actions as Azathoth and Its divine intermediary, from the works of H.P. Lovecraft. Why not use Azathoth to define Good? I get the impression Lovecraft at least started from Greek philosophy, rather than taking a religion and forcing it into an Aristotelian mold. I see no prospect of an answer except by requiring the deity to have certain features, pleasing to certain human emotions, that appear logically separable from the deity’s other traits. So why not use the actually moral traits to define morality?

        Craig argues that (an informal perception of) the unnecessary Azathoth-like traits trump any way to judge actions by human morality. That sounds psychopathic to me. Happily, I don’t think Craig believes it.

      • verbosestoic Says:

        Well, the whole point of the question was that you are in a case where you need to take an action and your moral code says “Do it” and your feelings say “Don’t”. So not only do you need to have an answer, you actually had one from your moral system. It didn’t imply anything beyond the capacities of any person — unless the moral system is itself totally unworkable — and, to be honest, the answer itself isn’t that important, as what matters is your reaction to it. So the specific examples were given as a way to get you to think about such clashing cases, and move beyond “It can never be right to X”.

        So, onto the reply, and here is where we are going to disagree, and reading the name of the blog should make that obvious [grin]:

        …but our feelings nevertheless seem more trustworthy than formal systems of ethics in the following sense. A principle like the one about well-being that I thought Harris proposed – though my first Google results didn’t show it, and I don’t feel like looking further right now – this could serve as a good guide for humans, precisely because we seem likely to fill in all sorts of unstated details through the urgings of our emotions.

        My contention is that while it’s perfectly true that our emotions fill in unstated details, that doesn’t make them more trustworthy, but rather less so, because they get them WRONG an awful lot of the time. Empathy is particularly bad — I’ve talked about that a bit on this blog — because it fails for someone that you don’t really understand all that well. Thus, I find it rather odd that you could accept that you have a rational moral system that gives answers, but that your emotions ought to be considered more reliable than that. We’re on the same page as far as thinking that if my emotions are against it, I should consider the action carefully, but I insist on using reason to check emotion, and not the other way around.

        So, what method do you have or propose for correcting your emotions based on what you are convinced is the right moral code, more or less? I want to condition my emotions to that moral system, but what do you do?

        Oh, and then as a not-quite-aside, what IS your moral code?

        As for the second comment, the problem with it is that it is focusing on the Craig discussion … which I’m not talking about. I used it as an example of saying “This is just morally wrong because my emotions kick up against it” and not as a position that I was going after.

      • hf Says:

        In reverse order: I don’t have one, and ‘Try to make my current claims/demands self-consistent.’ (I do tentatively subscribe to some form of consequentialism.)

        Don’t confuse general rationality – esp. on factual questions, like how well I understand a person – with formal ethics.

  4. Fredereick Says:

    Of course Christians have been committing genocide for forever and a day

    http://www.dartmouth.edu/~spanmod/mural/panel13.html

    http://www.jesusneverexisted.com/cruelty.html

    Google:
    1. Columbus & Other Cannibals by Jack Forbes
    2. American Holocaust by David Stannard

  5. Steve Bowen Says:

    The problem with counterfactual examples like “would you kill Hitler” is that if you knew what Hitler would do there would be other ways acceptable to a utilitarian to change the future that would not involve killing him.
    I’m having trouble coming up with an example that would be a true zero-sum dilemma to determine under what circumstances my empathy for innocents would allow me to kill them for a greater good. Although I accept in principle that such a scenario could exist and that it would cause me extreme anguish, it’s difficult without a concrete example.
    One observation is that as a utilitarian my own well being also figures into the calculation of harms and benefits. It is possible that killing an innocent would be so harmful to my own wellbeing that I could not make the calculation fall in that direction.

    • verbosestoic Says:

      The problem with counterfactual examples like “would you kill Hitler” is that if you knew what Hitler would do there would be other ways acceptable to a utilitarian to change the future that would not involve killing him.

      Part of the issue here is that utilitarian views aren’t supposed to merely be about “acceptable”, but are supposed to be about “maximal”, where you choose the option that provides the greatest utility, not merely one of a collection of options that, say, increase it. Take the “Wing Commander” example. In that one, in-universe, there might have been other options, but they all would have cost far more lives and anguish overall than the one they took. Or even the “Angel” example, where there is no choice: in order to preserve the peace that Jasmine has brought them, they have to allow her to eat people to maintain her power. In that one, Jasmine argues that taking away freedom and sacrificing a small percentage of people is the reasonable price to pay for the benefits they gain … in short, that allowing her to survive is the option with the best utility. If you were convinced of that, how could it not be seen as a failing to refuse to do what your morality says is the moral thing to do?

      One observation is that as a utilitarian my own well being also figures into the calculation of harms and benefits. It is possible that killing an innocent would be so harmful to my own wellbeing that I could not make the calculation fall in that direction.

      Well, this ties into a couple of other problems with utilitarianism. First, imagine someone who doesn’t particularly have a problem killing innocents, but for whom morality is incredibly important and their main guide to proper behaviour. They’d have no problem killing the innocent, and so their calculation wouldn’t include that, and so it should come up solidly in favour of doing it. If they follow it, would that then be wrong, or right? We don’t want the rightness or wrongness of an action — especially in big cases like these — to vary TOO much by individual, or else we’re back to being relativistic. The accusation could clearly be made here that it only comes out the way it does because there is something wrong with at least one of the two of you in the example, and utilitarianism doesn’t want to have to calculate that too much if it can help it.

      Essentially, this problem is that if you count mental anguish, you have to consider how much of that anguish is justified because it picks out a true injustice, and how much is unjustified because the person is selfish, deluded, or ignorant.

      The second problem is related to this, and is the classic “Well, if someone really wants to do X/would feel incredibly deprived if they couldn’t do X, then they can do it even if X is something that’s absolutely horrible”. If someone feels that not being able to murder the person who will destroy their life by telling the truth is an anguish they can’t bear, the out you take for yourself in these cases risks allowing them that out in those cases as well … and defending against that immediately opens your own out up to the same charge.

      I talk a little bit about being destroyed to do the morally right thing here:

      http://verbosestoic.wordpress.com/2012/06/07/why-doesnt-batman-kill-the-joker/
