Professor Plum, Richard Carrier, and Compatibilism

So I have a post from Jonathan MS Pearce to talk about in the “free will” theme, but this week let me focus on this post by Richard Carrier, which takes on Derk Pereboom’s “Four Cases” argument against compatibilism.  Unfortunately, as usual Carrier combines his typically aggressive approach to criticizing philosophy with a lack of understanding of what he’s criticizing, so much so that after reading the post I’m not even sure that he’s really a compatibilist, despite his saying that he absolutely is one, claiming to have proven it, and attempting to defend it here.

So before I get into the specific case from Pereboom — which I myself have addressed here — I will have to address Carrier’s preamble defining and justifying compatibilism.  The problem is that he seems awfully mechanistic for a compatibilist.  The best quick definition of compatibilism I can think of is in line with Sean Carroll’s, which is that our choices are free — and so reflect free will — if they were produced by our normal decision-making processes, and thus our consciousness, under conditions where they are working properly.  This preserves the idea that we make choices and are responsible for them, while cases where someone has a tumor or a lesion or some other abnormal state are ones where we don’t make choices that we are responsible for.  This follows from the other quick definition of compatibilism, which is that compatibilists want to show that we can have a definition of free will that preserves everything important about free will while still working in a deterministic universe.  So any case where a choice is not produced by our normal decision-making processes is not a free choice, and things like direct manipulation of the brain will violate that, as we are supposed to go through the normal processes and convince someone through speech.  This will be important for Pereboom’s cases, because at least the first — and foundational, as Carrier treats it — is a case where a choice is changed not by convincing someone to change their choice but by directly manipulating the brain to do so.  Carrier will go further down this path later in the post.

So let’s look at the preamble.  First, Carrier takes a shot at Libertarian Free Will, in a way that sounds like it’s talking about the differences in causal path but ends up not actually doing that:

… but I also think free will could only exist as the output of a continuous chain of causes, such that any account of “responsibility-bearing” free will (the only kind anyone cares about) that involves any pertinent break in that chain of causes would actually eliminate free will. For example, if you insist free will is supposed to mean making decisions without being causally determined by one’s character, desires, and reasoning, then you have declared a self-contradiction. For that would mean your decisions are not only random, but causally disconnected from who you are.

The thing is, Libertarian Free Will insists that our decisions must be determined by our character, desires, and reasoning, meaning the reasoning process that we go through in order to produce the decision we make.  For Libertarians, the decision-making process is what actually determines our decisions and so is the crucial causal factor.  Our character and desires influence but do not determine that.  We can act against our desires and against our character; that doesn’t happen all that often, and probably isn’t a good thing when it does, but it can happen.  So Libertarians agree with compatibilists on that score.  What they dispute is whether that can actually happen, to the extent it needs to for a choice to be properly free, in a deterministic universe.  The issue is that if each event is fully determined by the events that preceded it, then every step in our decision-making process was determined by the previous step, and ultimately determined by the events that kicked off that decision-making process.  Thus, the outcome of that process was determined before the process started, and so the process itself doesn’t have any real way to impact it.  It seems like it’s just going through the motions.  It gets even worse if you take a materialist stance based on the brain, because we can trace the causation through the neural activations, and so the experiences of our decision-making process don’t even need to matter to the outcome.  They could reflect completely different reasoning than what is represented at the neural level, and yet the outcome would be the same.  In principle, those experiences could be missing entirely and the decision would turn out the same, and the same ultimate action would be taken.  So what both Libertarians and Compatibilists need is for the decision-making process, iterating over our beliefs, desires, and character, to ultimately determine what decision we make.  Libertarians deny that we can get that in a deterministic universe.

So while Carrier seems to be addressing this with the causal chain argument, he actually ends up side-stepping it entirely with what is at best a careless description of the problem.  He then goes on to talk about how it is defined in the law, and sounds even less compatibilist:

This is why free will as understood in Western law (all the way from Model Penal Codes to U.S. Supreme Court precedents) is thoroughly compatibilist in its construction.  Any attempt by a perp to argue they were fated to be a criminal will be met with the response, “Well, then you were also fated to be punished for your crimes.”

No Compatibilist would ever make that argument.  This argument denies that anyone has responsibility for their actions: the perp is fated to be a criminal and the court is fated to punish them.  Compatibilists hold that we are meaningfully responsible for our choices, and so would reply that he was not fated, at least not in a way that would absolve him of responsibility for his crimes, and so can be punished for them.  Again, Compatibilists want to preserve everything important about free will, and surely being held responsible in the right way to justify being punished for a crime is something important about free will that needs to be preserved.  This answer completely tosses that aside, in a manner that would appeal to a Hard Determinist, not a Compatibilist.

He only makes this discrepancy stronger by later saying this:

Background causes are of interest to other operators in reengineering the social system (such as to produce more heroes and fewer criminals); but they aren’t relevant to the separate case of what to do with the products of that social system—the people already produced. What to do with a specific malfunctioning machine is a different question from how to make better machines.

Compatibilists would never cast the discussion in this light.  They would not describe decision-making agents as mere machines, or talk about failures of decision-making as the mere malfunctioning of a machine, because there are important distinctions to make here between mechanistic causes that would be malfunctions — like tumors — and failures caused by our decision-making processes that we’d need to fix in a completely different way.  Again, Compatibilists need to preserve what is important about free will, and the distinction between malfunctions of our system and real choices is indeed one of those important things.  Carrier’s description here treats these things as a Hard Determinist would, making no really meaningful distinctions between these states, while purporting to defend and define compatibilism.

He then says something bizarre and, frankly, rather scary about praise and blame:

This does mean there is no such thing as “basic desert” in the peculiar sense of just deserving praise or blame for no functional reason. If praise and blame perform no function, then they cease to be warranted—beyond arbitrary emotivism, which produces no objective defense. Like whether you like chocolate or not, praise and blame would then cease to be anything you can argue with anyone, as if anyone “should” like chocolate or not, because it would cease to be the case that anyone “should” like anything in such a sense, and thus no sense in which anyone “should” praise or blame anyone for anything. “Well I just like that” is not a persuasive argument that anyone should like it too. The purpose of a behavior, like praise and blame, is therefore fundamental to defending it as anything anyone should emulate.

The problem here is that he conflates “a reason to actually praise or blame someone” with “that person deserving to be praised or blamed”, and seems to be arguing that we can’t say that someone deserves to be praised or blamed for something if there would be no benefit to praising or blaming them.  Why this is scary is that it would imply that whether someone can be said to be praise- or blameworthy for an action depends on whether there is a functional benefit to praising or blaming them for that action.  Thus, if they did something that they were not really responsible for — in any sense of the word — but there was a benefit to us in blaming them as if they were, then they would be blameworthy for that action simply because there is a functional benefit to blaming them for it.  We would then have no independent way to determine whether someone should be praised or blamed, and would praise or blame them based only on the functional benefit and not on whether they deserve it.  That should immediately raise alarm bells, as this approach can be used to guilt people into doing the wrong thing if that is seen to work out better.

Now, this could work if we could establish that there is no reasonable way to talk about being praise- or blameworthy without talking about the reasons why we’d want to explicitly praise or blame them.  But we know that isn’t true.  We can easily imagine a case where we can identify someone who is worthy of praise for an action but where we would get no functional benefit from praising them.  Take, for example, someone who is in general a good person and in general always does the right thing.  Praising them for doing the right thing in one specific case is not going to encourage them to do the right thing, as they just do that regardless of whether they are praised or not.  You could argue that there might be a benefit to other people seeing them being praised — as an encouragement for them to also do the right thing — but if that was missing in any specific case we would clearly say that the person is worthy of praise for doing the right thing but see no real functional benefit to praising them.  We might praise them anyway — it’s perfectly reasonable to have a rule that says that you always praise someone if they do something praiseworthy — but we would recognize that they are praiseworthy independently of whether there’s a functional reason for praising them in that case.  And the same thing applies to blame:  even if we think there is no point to blaming someone for something, we would still consider them blameworthy for what they did.  So we can easily and reasonably separate being praise- or blameworthy from there being a functional reason to praise or blame someone, and so Carrier’s move doesn’t seem to work.  He cannot simply define praise- or blameworthiness as following from having a functional reason to praise or blame someone, conflating the terms in the way he seems to want to here, and his argument for doing so isn’t convincing.
So there is no reason to do this other than to be able to ignore cases where we have a functional reason to praise someone for doing something wrong or to blame them for doing something right.  And to take that approach is, as noted, very, very scary.

He ends that section with this:

Remaining anchored to the function of assigning responsibility is therefore essential to any understanding of what it takes to produce it, and thus to any understanding of the kind of free will that does.

But through the previous sections he ends up defining responsibility in a very odd manner, one more akin to proximity than to its being the result of a proper decision-making process, which is what we need for real responsibility.  So at this point this is indeed very confusing.

So now we finally turn to Pereboom’s four cases.  This is my summary of the cases from the above post:

1) Neuroscientists can deliberately manipulate Professor Plum’s reasoning process to make him have desires, at least, that are more rationally egoistic than moral, even though sometimes — I guess either when they don’t manipulate him or when the desires that are there at the time happen to work out that way — he can act morally.

2) Instead of directly manipulating his reasoning/desire-formation, they instead build in a set of desires that strongly bias him towards rationally egoistic choices, although he can overcome them with his other decision-making processes.

3) Instead of those desires being implanted by the neuroscientists, he gets them from training from his culture and upbringing.

4) This is all determined by physicalist determinism.

Pereboom claims that we have to consider each of the first three cases to be ones where Professor Plum is not morally responsible for his actions, and from there have to conclude that the fourth case is also a case where Plum is not morally responsible.  But since the last case is the case the Compatibilist is defending, the Compatibilist must concede that in a deterministic world Plum is not morally responsible for his actions, defeating Compatibilism.  My attack on this starts at 2 and 3, where I argued that because in those cases what Plum has are influences that his decision-making processes can override to make the “proper” decision, Plum would still be morally responsible in those cases, so Pereboom can’t get to our not having moral responsibility in case 4 from the other cases.  I obviously think that we don’t have moral responsibility in 4, for reasons like the ones I outlined above, but his argument here doesn’t work.  Carrier, however, will go after the first — and, to him, foundational — case and argue that Plum is indeed responsible for his actions in that case, which I strongly deny.  Unfortunately, how he gets there is to read things into the thought experiment while chiding Pereboom specifically, and philosophers in general, for simply not understanding how things work in the world.

Here’s what he says:

It is silly to go to such lengths to construct this scenario, because we actually already have such scenarios in the real world that have been very thoroughly dealt with in our legal system—and they don’t turn out the way Pereboom thinks. If we just walked up to Mr. Plum and asked him to kill Mr. White in exchange for a hundred thousand dollars, we would have created exactly the scenario Pereboom is trying to imagine: Plum would not have killed White but for our intervention; our intervention succeeds by stimulating the requisite egoistic thinking in Mr. Plum otherwise in that moment absent (using the sound of our voice, maybe the sight of cash, and his neural machinery already present in his brain); and his acting on the offer remains in accord with his statistically frequent character (as otherwise he’d turn us down; and likewise Pereboom’s neural machine wouldn’t work).

No one releases such a Mr. Plum from responsibility. He will be adjudged fully responsible in this case in every court of law the world over. Pereboom’s argument thus can’t even get off the ground.

The thing is, this is not, in fact, equivalent to Pereboom’s first case.  I can prove that by expanding it to explicitly include Carrier’s case.  We have an organization that offers to pay people one hundred thousand dollars to kill certain people.  They have done this with Professor Plum in the past, and he has accepted the offer in the past.  However, they want certainty, so they have also implanted a mechanism in his brain so that if he would ever go against his egoistic reasoning to take the money and would instead reject it on moral grounds, they will flip the switch and the output of his decision-making process will be the one that a strictly egoistic decision-making process would have produced.  So what they have done is what Carrier insisted is equivalent, making it in Plum’s egoistic self-interest to kill the person, and then on top of that they have triggered their implant to override Plum’s decision when it would be based on morality rather than on egoistic concerns.  That’s clearly an additional step beyond what Carrier is proposing, and it would clearly work, as it would at the very least suppress the moral reasons that Plum would have used to make a different decision.

Now, a stronger argument from Carrier might have been that there is no difference between egoistic and moral reasoning, and so the argument cannot get off the ground that way.  So let me point out that even under Carrier’s view I can make that distinction.  Carrier allows that we can make immoral decisions on the basis of a shallow or mistaken view of what is in our egoistic self-interest, and we can say that Plum, in general, has been murdering people on the basis of the shallow self-interest of money rather than on the deeper self-interest of the effect that has on him and on society as a whole; but in this case that deeper self-interest was going to kick in and cause him to reject the money and refuse to kill the person … and then the device activates and switches him back onto the shallow self-interest path.  Again, it’s not them making the offer, nor is it merely them making the money path more salient.  By definition, they are changing what his decision-making process would have come up with without appealing to his decision-making processes.  They aren’t convincing him to do this, but are forcing him to do this.  Carrier’s entire “equivalent” case is simply a case where they convince him to do it, and so it doesn’t map to Pereboom’s case at all.

And this distinction is actually really important for compatibilism, because it explains why most compatibilists should lean towards accepting that the first case is a case where Plum is not morally responsible for his actions (though they could go along with me in saying that 2 and 3 are cases where he is).  In the first case, it is clearly the case that the direct external influence on the brain directly impacts the decision-making process, and so Plum’s decision was not produced by his normal decision-making process doing what it would or should do in that case.  We can see this by imagining that Plum would at some point use simulation — the psychological kind — to put himself back in that situation and run it forward to see what he would have decided.  For pretty much all of his other decisions, if he constructed the simulation properly he would come up with the same answer.  Here, he wouldn’t, because his imagination wouldn’t include the direct intervention, and so he would come to the conclusion that he shouldn’t kill the person for moral reasons, when in fact he ended up killing them for egoistic reasons.  Thus, this isn’t the result that his decision-making process would have come up with, and compatibilists should conclude that it therefore can’t be a free choice, and so he can’t be held morally responsible for it.  For the next two cases, his decision-making process could iterate over those values and desires and act morally, and so what his decision-making process actually does is what we can hold him morally responsible for.  But in the first case, what he did was not what his decision-making process would have actually concluded, and given that, we can’t hold him morally responsible for that action.

So Carrier misses the point completely, claiming that we have an example of Pereboom’s case that happens constantly in real life, when his example is one of convincing or influence, not a complete circumvention of the decision-making process.

Mr. Plum has not been tricked—he full well knows that he is choosing to kill someone, and for a reason that even Pereboom’s argument entails is both egoistic (and thus not righteous) and derives entirely from Mr. Plum’s own thinking—because all that the “mad scientists” have done is tip him back into his frequent “egoistic thinking”; they have not inserted a delusion into his brain that becomes his reason for killing White (had they done that, we’d be closer to the real-world case of schizophrenics committing crimes on a basis of uncontrollable false beliefs). And that’s all that courts of law require to establish guilt: a criminal act, performed with a criminal intent.

Presumably, when the decision is made the die is cast and the person will be killed based on that decision.  Plum will not have sufficient opportunity to self-reflect and so change that decision before it is carried out, because if he did, then either they’d have to push the button again, or else he would decide as in my example above, follow what his actual decision-making process would have come up with, and not kill the person.  Thus, when he originally made the decision he did not have criminal intent, despite having reasons to commit the crime, because his normal decision-making processes acting normally would not have made that decision; if he had the time to consider it later and decided not to change it, only then would he develop criminal intent, based on that later decision.  Thus, if the decision as Pereboom describes it is the only decision he makes, then he doesn’t have criminal intent and so can’t be held morally responsible for the crime.

“But they had a machine in his brain and pushed a button to activate it” would bear no relevance whatever at his trial. That would be no different, legally, from “pushing his buttons” metaphorically, by simply persuading him to do something, ginning him up into selfish violence. Guilt stems from his agreeing to go through with it “for egoistic reasons.” He is aware of what he is choosing to do, and chooses to do it anyway.

But that is precisely what he doesn’t do, because of the machine:  he didn’t make the choice at all, because if they had left him free to make the decision, by definition he would have chosen the other option.  Again, it would only be later, if he examined the decision and then agreed with it, that the last sentence would be true.  And again, it’s not persuasion; it’s direct intervention that circumvents his actual decision-making process.  That’s not something that a Compatibilist can accept as conferring moral responsibility on someone.

That he would not have thought to do it but for someone instigating it is irrelevant …

Under Pereboom’s example, he quite likely would have thought of it, and rejected it for moral reasons.  The instigation is, again, not reminding him of those reasons or making them seem salient, but directly forcing him to accept them and act on them despite the fact that his normal decision-making process, by definition, wouldn’t have done that.  In that specific case, those reasons would not have been strong enough to get Plum to kill that person, and it’s only the machine that makes that happen.  Carrier is just wrong about what the case is actually describing.

And then he takes on philosophy again:

Pereboom’s argument thus fails from its very first step, all from simply failing to realize he was describing a scenario that is already standard and dealt with routinely in the most experienced institution for ascertaining responsibility in human history: the modern world legal system, a product of thousands of years of advancement and analysis. Philosophers often do this: argue from the armchair in complete disconnect from the real world and all that it could have taught them had they only walked outside and looked around. Philosophers need to stop doing that. Indeed, philosophers who hose this simple procedure should have their Philosopher Card taken away and be disinvited from the whole field until they take some classes on How Not to Be a Doof and then persuade us they’ll repent and start acting competently for a change.

That’s a rather strong statement to make when it can be credibly argued that he didn’t understand the thought experiment at all, and that the legal system is not actually a good system to look to for determining when someone is really responsible for their actions, because a) it often appeals to philosophy to determine that, and b) that’s not really its job.  Laws need to be implementable and have to explicitly aim at preserving society, and so they may well make compromises based not on what really is the case, but on what they can reasonably determine is the case.  Take Carrier citing Spitzley’s migraine example:

It is by the same reasoning that Spitzley’s example of a certain “Bob” who “has a migraine that causes his reasoning to be slightly altered in such a way that he decides to kill David and he would have not made this decision if he had not had this migraine” does not describe a case any legal system would rule Bob innocent in. It does not matter what drove you to do something, as long as you knew it was wrong and did it anyway.

To me, the weakness that makes the example an odd one is that, applied to the real world, we cannot find a case where a migraine would actually alter someone’s reasoning enough to make him decide to kill someone.  The pain could make someone lash out, but that wouldn’t be a decision, and the legal system would likely treat it as something like manslaughter instead of fully motivated and intentional murder; but that’s a different consideration.  Even if someone did manage to find a case where the migraine could change his decision-making process sufficiently to make him not responsible for his actions, it’s actually quite likely, and reasonable, that the law might hold him responsible anyway, on the reasoning that even if he couldn’t be held responsible, the law cannot confirm that he actually had that migraine to that degree, and so can’t allow someone to shirk responsibility on the basis of a condition that it can’t confirm he had.  So the law wouldn’t be saying that he would still be responsible in that case, but that he cannot show sufficiently that he actually had it.

And this is why using the law to settle philosophical matters is a bad idea.  The law doesn’t really care about getting it philosophically correct, but only about getting it correct enough to preserve society.  If they can’t determine if someone is or isn’t in the conditions that would impact whether or not they were responsible for the actions the law may well simply choose the one that best preserves society.  Philosophy, on the other hand, can assess whether that person would be responsible in that case even if there is no way of determining if they actually are in that case, which makes their analyses and thought experiments more reliable than legal decisions in these cases.

He then progresses to an even odder case that puts him even further from a compatibilist position:

Consider even Spitzley’s point regarding “Mele’s (1995) case of an agent who has someone else’s values implanted into them overnight.” If this were Bob’s fate, he would still be found guilty. Because it does not matter how he became who he is; all that matters is that “who he is” assented to and performed the crime.

No Compatibilist could accept this.  The reason is that just as we have normal decision-making processes, we also have value-forming processes.  If Bob is implanted with a completely different set of values, those values would not have been formed by his normal value-forming processes, and so they wouldn’t be his values at all.  Even legally, we’d have to consider someone in this situation to be at least temporarily insane, and so they wouldn’t be held legally responsible for their crimes.  For compatibilism, it does indeed matter quite a bit how he became “who he is”, at least at the extremes, because in this case the person with those values is not who he is.  Values formed through the normal processes, even if those processes are determined, make up who he is, but values forced upon him by rewriting his brain do not.  Carrier’s statement here is strongly Hard Determinist, not Compatibilist.

He talks about the Angel/Angelus case, but it doesn’t apply here:  Pereboom’s case is not one of those cases, the two situations are literally different, and the Angel/Angelus cases don’t involve an external direct manipulation of the brain in any way.  So let me finish off his examples with the example of Hal:

This is played out credibly well in a different bit of fiction: Hal 9000 in the film and novel 2010 is regarded as guilty of murder only by reason of a conflicting programming code, such that as soon as that is corrected, he is fully restored as a reliable colleague. Murderous Hal simply is no longer the same person as “fixed” Hal. So we don’t ascribe the guilt of one to the other. Nor should we. The only thing that seems counter-intuitive about this is that it is possible to instantly fix someone, converting them from a malevolent to a benevolent person with just some keystrokes. But the reason that feels counter-intuitive is that it doesn’t exist—that kind of thing simply isn’t a part of our real-world experience, and isn’t an available option (yet, at least) in dealing with malevolent persons among us. But we’ve already established in courts of law the analytical logic needed to cope with it if we had to. And that predictable result simply isn’t what Pereboom imagines.

Hal is a machine, and so we can repair the hardware and change the software, and no one cares about whether that’s the same Hal or a different one.  He’s been “fixed” to no longer be murderous, and that’s that.  In fact, Carrier is probably wrong to say that it’s not the “same” Hal anymore.  If someone goes to therapy and corrects some mental problems, are they suddenly not meaningfully the same person anymore?  Carrier’s own logic ends up muddling things beyond comprehension, and it compares people to machines again, which implies that if we could just change their brain, without giving them therapy or convincing them they were wrong, that would be just as acceptable a way to “fix” them, and that is something that no Compatibilist should accept.  Again, Carrier seems to argue from a Hard Determinist position to defend compatibilism, which is obviously not going to work.

Ultimately, I can’t see how a Compatibilist could claim that Professor Plum is morally responsible in Pereboom’s first case, but can see how they could argue that he is in the second and third ones, breaking that chain.  Carrier, however, in attacking the first case misunderstands it, argues from a Hard Determinist perspective, and ultimately doesn’t even seem to be able to make his own argument consistent.

