Jonathan MS Pearce on Free Will and Moral Responsibility

March 18, 2022

Continuing my look at things people have been saying about free will recently, let me take up this post by Jonathan MS Pearce on the topic that continues the theme of hard determinists and compatibilists trying to minimize the gap between them and expand the gap between those two positions and libertarianism.  But first let me look at his definitions.  He starts by defining libertarian free will:

Definition o’clock: I define libertarian free will as follows:

The real ability to consciously and rationally do otherwise in a given situation.

This is a position that I’ve likened to the “Humans evolved from apes” definition.  It’s true as far as it goes and works as a rough folk definition, but stating it this way lends itself to generating rather poor arguments that don’t really get at the issue, such as arguments that for this to work it must be the case that if we “rewound time” and replayed that decision with the same starting conditions the decision could or would change, when most libertarians would argue that most of the time it wouldn’t, or arguments like Locke’s that if someone was locked in a room then, even if they weren’t aware of it, their decision to stay in the room couldn’t be free.  Here, Pearce will go on to use this definition to sharply differentiate this view of free will from compatibilist free will, despite the fact that the two are closer than he admits.  This is his definition of compatibilist free will:

Most philosophers are compatibilists, which is to say that they believe free will and determinism are compatible. Or, if cause and effect works in any of the ways mentioned above, we can still have free will. But, and this is a big but, they would define free will slightly differently. It would be something like this:

The ability for an agent to do what they want.

This definition, though, is a pretty poor one.  First, this is not the definition of compatibilist free will, but is instead one specific position compatibilists may hold that would allow them to argue that we do have some freedom in our choices while accepting that the decisions are still deterministic.  Second, this option is itself one that libertarians can adopt — and is one that I, personally, have considered — by arguing that we have some ability to determine what our desires are but our decisions are always determined by processes simply iterating over what we want and what we believe.  So not only does it not properly capture compatibilism, it also is a position that isn’t unique to it.

The best definition of “free” that I’ve come up with for compatibilists is this one:  a free choice is one that is produced by our normal decision-making processes under the conditions where they are acting normally.  An analogy for this is with the digestive system, where we can distinguish whether something is the result of that process not merely by looking at whether we have nutrients in the blood, but by how those nutrients ended up there.  So if someone is receiving nutrients through an IV, those nutrients weren’t put there by the process of digestion and so are, perhaps, non-digested nutrients.  By the same token, if a decision is not made by the parts of the brain that typically produce decisions, or if those parts of the brain were interfered with — by, say, a tumor — then we can say that those decisions were not “free”, don’t reflect “free will”, and so the person isn’t morally responsible for them.

Now, you may note that aside from talking about the brain specifically doing that this definition is, again, pretty much one that libertarians would also accept.  The reason for that is that the rough idea of free will is indeed the thing that compatibilists and libertarians agree upon, and that hard determinists disagree with compatibilists about.  Where libertarians disagree with compatibilists is over whether we can have this sort of freedom in a deterministic world, and so they disagree that the world is deterministic, which is what compatibilists and hard determinists agree on.  So libertarians agree with compatibilists about what it would roughly mean for a choice to be free, but not that deterministic processes can do that, and hard determinists agree with compatibilists that the only option for these decisions is that they are determined but not that their choices are or can be meaningfully free in that way.  Which is why most hard determinists end up trying to collapse the distinction between normal and abnormal decisions (as Pearce does here).

Thus, Pearce’s statement here isn’t true:

The thing is, I am both a hard determinist and a compatibilist, it just depends on which definition you are using. I argued in my first book, Free Will? An investigation into whether we have free will or whether I was always going to write this book, that the compatibilist position is largely a semantic one. As Arthur Schopenhauer said, “Man can do what he wills, but he cannot will what he wills.”

He can’t be both a hard determinist and a compatibilist depending on what definition is being used, because he himself sees no substantive difference between decisions produced by typical or atypical decision-making processes, and compatibilists do.  They would clearly draw a sharp distinction between a kleptomaniac stealing something because of their overwhelming urge and someone who steals something because they want it and can do it.  Hard determinists tend not to, and then argue (like Jerry Coyne usually does) that the distinctions and categories that follow from the common notion of free will are not at all useful, meaningful or beneficial, and that all we want are concepts that allow us to determine how to shape behaviour to produce behaviours that we think better or more reasonable or, well, more in line with what we want.

But in order for the positions to be merely a semantic difference, it must be the case that the hard determinist can either capture or eliminate all of these distinctions, or else the difference would be functional, not semantic.  If they have to include most or even all of these distinctions, then they might be able to claim that it’s a semantic difference … but not in a way that helps them, because at that point they have captured pretty much everything that everyone involved — compatibilist, folk, and even libertarian — thinks is important about free will but are just balking at calling it “free will” for some reason, even though, again, most people understand those things as following from free will and as not being there if there is no free will.  And since there are differences in how we need to treat kleptomaniacs vs people who steal to survive vs people who steal to gain monetary advantage vs people who steal for the heck of it, and since those differences are neatly captured by our existing language and concepts, that sort of hard determinist position seems pointless.  They are just going to have to invent the same concepts and language but will have to call them something different, so why bother?  To avoid any hint of anything that might be called a “soul”?  To avoid having to worry that the libertarian claim that deterministic processes can’t actually allow for those distinctions might actually be correct?

Pearce himself, of course, does seem to be a bit overly concerned about religion:

I argue with a lot of Christians about all sorts of things. But if they are Christians who believe in heaven and hell, who believe in divine judgment based on what we do in our lives, and having moral responsibility for doing so, then they have a problem.

Indeed (and I have frequently said this about debating people like William Lane Craig), if they cannot refute this argument—if they can’t establish how libertarian free will exists—then the rest of their God-belief falls down. Their theism rests inexorably on the foundations of a coherent notion of libertarian free will.

Pearce gave a quote from Strawson about that, but it doesn’t show that the concept is incoherent.  As my latest way of framing it states, what we really, really want is not some kind of completely “causeless” free will or free choice, but instead one that follows from intentions, and where decisions are made from understanding the meaning of things instead of mere “symbolic” processing like a simple stimulus-response model.  And it’s hard to get that in a deterministic model and a brain model, because it looks like the outcome of the decision-making process is determined before it even starts, and the decision-making process is the only one that can actually consider meaning and intent and change its outcome based on that.  Neurons don’t process things on the basis of meaning, and given that, it looks like the meanings and intents considered wouldn’t have to be the ones we are actually using, or wouldn’t even have to be there at all, to get the results we do.  So contrary to Pearce and perhaps Strawson, free will for libertarians isn’t about denying our nature, but instead about being able to make decisions considering our nature, and the ability to indeed change our nature, act according to it, or reject it.  Based on our experiences, it really does seem like we can do that, and so if Pearce is going to argue that we really can’t, that’s a flaw in his view, not in the libertarian view.  The libertarian is indeed noting that we need some kind of process that can actually consider meaning and, as I am arguing, a form of causation that can pull that off.  If deterministic causation and quantum causation can’t do it, then it really looks like we need some kind of causation that can, and assertions that there is no such kind of causation cannot overturn our impression that, nevertheless, it happens.

And turning to religion is a bad move.  If God is creating us and creating all of our mechanisms, and needs a mechanism that can provide that kind of causation, then He could indeed create such a mechanism.  It just wouldn’t be deterministic, and that’s pretty much all Pearce has to rely on here.

Pearce then turns to trying to argue against how we could identify moral responsibility at all:

Apportioning moral responsibility to one agent because their action was the closest caused to the effect is very simplistic. It is like saying the green snooker ball was responsible for knocking the red one in as it was the most proximal cause. This says nothing about the fact that it had rebounded off the yellow, after the blue, after being hit by the white cue ball, itself shot by the snooker player, involving all their causal circumstances, including all their training, the support from their parents, the evolution of man, and the big bang. Without each and every necessary condition and event, the red would never have been pocketed.

So was the green ball responsible? In some small, arbitrary sense, along with all other aspects of the causal circumstance. Was it ultimately causally responsible? No.

What does this say about moral responsibility?

Now such philosophical exactitude doesn’t easily help people organize society and suchlike, which is why we have shortcut rules of thumb: you pulled the trigger, you’re responsible. But that’s not technically correct.

Except, we can indeed easily make these distinctions by appealing to intentionality.  We in fact do not assign moral responsibility on the basis of proximity.  Taking his own example, we would distinguish the green ball as the proximate cause but would also argue that the person with the cue stick who started off that specific chain is ultimately responsible from the perspective of intention.  It was that person’s intention that caused that to happen, and we can analyze the result on the basis of that intention to explain why it happened as opposed to something else … even if they were trying to do something else, like hitting the yellow ball, and accidentally hit that one.  And it’s from the latter responsibility that we derive moral responsibility.  But this isn’t just a presumption or an artifact of our language, as we can differentiate — and seem to need to differentiate — between the rock that strikes and breaks a window and the person who picked it up and threw it to break the window, and it’s the latter who is morally responsible for breaking the window, not the former.  Not convinced?  Okay, note the difference in experience and internal decisions between the case where you throw a rock at a window and break it and the case where I throw you at a window and break it.  You are morally responsible in the first case but not in the second.  That difference is a difference in intent, and we can track the specific differences in internal experiences that make you responsible in the first case and me responsible in the second.  That’s a real experiential difference that cannot be simply illusory if we are going to grant that we have any kind of consciousness that has any impact on the world, and yet just from that we can come up with perfectly reasonable ideas of moral responsibility.  So why, then, is it at all reasonable to claim that moral responsibility doesn’t exist?

Back to God:

The problem here is saying that (1) God is the only originator of a causal chain, but also, (2) human agents are the originators of causal chains every time they make a freely willed decision!

Alas, I digress. The takeaway point here is that theists can’t have it both ways, believing in both libertarian free will and the KCA. Neither work, and both are mutually exclusive.

That’s only a problem if you claim that humans are indeed the originators of a new causal chain, as some libertarians do claim.  But this objection fails because arguments like the KCA link to and can benefit from the Thomistic arguments, which don’t have one simple notion of causation, and so would allow for God to create a mechanism, in the strong sense needed for creation, that then causes decisions in the right way to produce freely willed decisions.  While I think Ed Feser is at times a bit harsh on those who reject the Aristotelian idea of causation, here is indeed one case where a) you need to understand it to make a real criticism and b) it starts to look like that sort of model avoids all sorts of crazy problems that denying it would cause.  So, no, the two aren’t obviously mutually exclusive, and they only appear so if you adopt a very specific idea of causation that is itself somewhat dubious.

I will finish with Pearce’s quote from Pereboom on moral responsibility:

Living without a conception of our choices and actions as freely willed in the sense required for moral responsibility does not come naturally to us. Our psychologies and our patterns of behavior presuppose that our choices and actions are free in this sense. Nevertheless, not only are there good arguments against this belief, but also, despite our initially apprehensive reactions to hard incompatibilism, believing it would not have disastrous consequences, and indeed it promises significant benefits for human life. Hard incompatibilism would not undermine the purpose in life that our projects can provide. Neither would it hinder the possibility of the good interpersonal relationships fundamental to our happiness. Acceptance of hard incompatibilism rather holds out the promise of greater equanimity by reducing the anger that hinders fulfillment. Far from threatening meaning in life, hard incompatibilism can help us achieve the conditions required for flourishing, for it can assist in releasing us from the harmful passions that contribute so much to human distress. If we did in fact relinquish our presumption of free will and moral responsibility, then, perhaps surprisingly, our lives might well be better for it.

This perfectly captures the problem hard determinists get themselves into when they deny things like moral responsibility.  Pereboom asks what we would lose if we give up moral responsibility, but then says that we can still have things like freedom from harmful passions and the relinquishing of flawed ideas to make our lives better … all things that we don’t have control over, and so aren’t responsible for, if we don’t have free will.  A lot of the things that hard determinists say we can still have even without moral responsibility are things that follow either from moral responsibility itself or from the very mechanisms that would give us moral responsibility.  It’s the exact same argument as people who insist that morality is subjective but then insist that we can still meaningfully criticize people for being immoral even if they don’t agree, and can even impose our moral views on them, ideas that follow from objective morality but don’t follow from subjective morality.  You can indeed not lose anything important if you smuggle all of the important things into your view, but then it is perfectly reasonable to ask what the point of denying the concepts was in the first place.

Professor Plum, Richard Carrier, and Compatibilism

March 11, 2022

So I have a post from Jonathan MS Pearce to talk about in the “free will” theme, but this week let me focus on this post by Richard Carrier, which takes on Derk Pereboom’s “Four Cases” argument against compatibilism.  Unfortunately, as usual, Carrier combines his typically aggressive approach to criticizing philosophy with a lack of understanding of what he’s criticizing, so much so that after reading the post I’m not even sure that he’s really a compatibilist, despite his saying that he absolutely is one, claiming to have proven it, and attempting to defend it here.

So before I get into the specific case from Pereboom — which I have myself addressed here — I will have to address Carrier’s preamble to defining and justifying compatibilism.  The problem is that he seems awfully mechanistic for a compatibilist.  The best quick definition of compatibilism I can think of is in line with Sean Carroll’s, which is that our choices are free — and so reflect free will — if they were produced by our normal decision-making processes, and thus our consciousness, under conditions where they are working properly.  This preserves the idea that we make choices and are responsible for them, while cases where someone has a tumor or a lesion or some other abnormal state are ones where we don’t make choices that we are responsible for.  This follows from the other quick definition of compatibilism, which is that compatibilists want to show that we can have a definition of free will that preserves everything important about free will while still working in a deterministic universe.  So any case where a choice is not produced by our normal decision-making processes is not a free choice, and things like direct manipulation of the brain will violate that, as we are supposed to go through the normal processes and convince someone through speech.  This will be important for Pereboom’s argument because at least the first case — the foundational one, as Carrier defines it — is one where a choice is changed not by convincing someone to change their choice but by directly manipulating the brain to do that.  Carrier will go further down this path later in the post.

So let’s look at the preamble.  First, Carrier takes a shot at Libertarian Free Will, in a way that sounds like it’s talking about the differences in causal path but ends up not actually doing that:

… but I also think free will could only exist as the output of a continuous chain of causes, such that any account of “responsibility-bearing” free will (the only kind anyone cares about) that involves any pertinent break in that chain of causes would actually eliminate free will. For example, if you insist free will is supposed to mean making decisions without being causally determined by one’s character, desires, and reasoning, then you have declared a self-contradiction. For that would mean your decisions are not only random, but causally disconnected from who you are.

The thing is, Libertarian Free Will insists that our decisions must be determined by our character, desires and reasoning, meaning the reasoning process that we go through in order to produce the decisions we make.  For Libertarians, the decision-making process is what actually determines our decisions and so is the crucial causal factor.  Our character and desires influence but do not determine that.  We can act against our desires and against our character, although that doesn’t happen all that often and probably isn’t a good thing if it does happen, but it can.  So Libertarians agree with compatibilists on that score.  What they dispute is whether that can actually happen, to the extent it needs to for a choice to be properly free, in a deterministic universe.  The issue is that if each event is fully determined by the events that preceded it, then every step in our decision-making process was determined by the previous step, and ultimately determined by the events that kicked off that decision-making process.  Thus, the outcome of that process was determined before the process started, and so the process itself doesn’t have any real way to impact it.  It seems like it’s just going through the motions.  It gets even worse if you take a materialist stance based on the brain, because we can trace the causation through the neural activations and so don’t even need the experiences of our decision-making process to matter to the outcome.  They could reflect completely different reasoning than what is represented at the neural level and yet the outcome would be the same.  In principle, those experiences could be missing entirely and the decision would turn out the same, and the same ultimate action would be taken.  So what both Libertarians and Compatibilists need is for the decision-making process iterating over our beliefs, desires and character to ultimately determine what decision we make.  Libertarians deny that we can get that in a deterministic universe.

So while Carrier seems to be addressing this with the causal chain argument, he actually ends up side-stepping it entirely with what is at best a careless description of the problem.  He then goes on to talk about how it is defined in the law, and sounds even less compatibilist:

This is why free will as understood in Western law (all the way from Model Penal Codes to U.S. Supreme Court precedents) is thoroughly compatibilist in its construction.  Any attempt by a perp to argue they were fated to be a criminal will be met with the response, “Well, then you were also fated to be punished for your crimes.”

No Compatibilist would ever make that argument.  This argument denies that anyone has responsibility for their actions: the perp was fated to be a criminal and the court is fated to punish them.  Compatibilists hold that we are meaningfully responsible for our choices, and so would reply that he was not fated, at least not in a way that would absolve him of responsibility for his crimes, and so he can be punished for them.  Again, Compatibilists want to preserve everything important about free will, and surely being held responsible in the right way to justify being punished for a crime is something important about free will that needs to be preserved.  This answer completely tosses that aside, in a manner that would appeal to a Hard Determinist, not a Compatibilist.

He only makes this discrepancy stronger by later saying this:

Background causes are of interest to other operators in reengineering the social system (such as to produce more heroes and fewer criminals); but they aren’t relevant to the separate case of what to do with the products of that social system—the people already produced. What to do with a specific malfunctioning machine is a different question from how to make better machines.

Compatibilists would never cast the discussion in this light.  They would not describe decision-making beings as mere machines or talk about failures of decision-making as the mere malfunctioning of a machine, because there are important distinctions to make here between mechanistic causes that would be malfunctions — like tumors — and failures caused by our decision-making processes, which we’d need to fix in a completely different way.  Again, Compatibilists need to preserve what is important about free will, and the distinction between malfunctions of our system and real choices is indeed one of those things.  Carrier’s description here treats these cases as a Hard Determinist would, making no really meaningful distinctions between these states, while purporting to define and defend compatibilism.

He then says something bizarre and, frankly, rather scary about praise and blame:

This does mean there is no such thing as “basic desert” in the peculiar sense of just deserving praise or blame for no functional reason. If praise and blame perform no function, then they cease to be warranted—beyond arbitrary emotivism, which produces no objective defense. Like whether you like chocolate or not, praise and blame would then cease to be anything you can argue with anyone, as if anyone “should” like chocolate or not, because it would cease to be the case that anyone “should” like anything in such a sense, and thus no sense in which anyone “should” praise or blame anyone for anything. “Well I just like that” is not a persuasive argument that anyone should like it too. The purpose of a behavior, like praise and blame, is therefore fundamental to defending it as anything anyone should emulate.

The problem here is that he conflates “Reason to actually praise or blame someone” with “That person deserves to be praised or blamed” and seems to be arguing that we can’t say that someone deserves to be praised or blamed for something if there would be no benefit to praising and blaming them.  Why this is scary is that it would imply that whether or not someone can be said to be praise- or blameworthy for an action depends on whether there is a functional benefit to praising or blaming them for that action.  Thus, if they did something that they were not really responsible for — in any sense of the word — but there was a benefit to us in blaming them as if they were, then they would be blameworthy for that action simply because there is a functional benefit to blaming them for it.  Thus, we have no independent way to determine if someone should be praised or blamed and so only praise or blame someone based on the functional benefit and not on whether they deserve to be praised or blamed, which immediately should raise alarm bells, as it can be used to guilt people into doing the wrong thing if that is seen to work out better.

Now, this could work if we could establish that there is no reasonable way to talk about being praise- or blameworthy without talking about the reasons why we’d want to explicitly praise or blame someone.  But we know that isn’t true.  We can easily imagine a case where we can identify someone who is worthy of praise for an action but where we would get no functional benefit from praising them.  Take, for example, someone who is in general a good person and in general always does the right thing.  Praising them for doing the right thing in one specific case is not going to encourage them to do the right thing, as they just do that regardless of whether they are praised or not.  You could argue that there might be a benefit to other people seeing them praised — as an encouragement for them to also do the right thing — but if that was missing in any specific case we would clearly say that the person is worthy of praise for doing the right thing even though there is no real functional benefit to praising them.  We might praise them anyway — it’s perfectly reasonable to have a rule that says that you always praise someone if they do something praiseworthy — but we would recognize that they are praiseworthy independently of whether there’s a functional reason for praising them in that case.  And the same thing applies to blame:  even if we think there is no point to blaming someone for something, we would still consider them blameworthy for what they did.  So we can easily and reasonably separate being praise- or blameworthy from there being a functional reason to praise or blame someone, and so Carrier’s move doesn’t seem to work.  He cannot simply define being praise- or blameworthy as following from having a functional reason to praise or blame someone, and so conflate the terms in the way he seems to want to here, and his argument for doing so isn’t convincing.

So there is no reason to make this move other than to be able to ignore cases where we have a functional reason to praise someone for doing something wrong or to blame them for doing something right.  And to take that approach is, as noted, very, very scary.

He ends that section with this:

Remaining anchored to the function of assigning responsibility is therefore essential to any understanding of what it takes to produce it, and thus to any understanding of the kind of free will that does.

But throughout the previous sections he ends up defining responsibility in a very odd manner, one more akin to proximity than to being the result of a proper decision-making process, which is what we need for real responsibility.  So at this point this is indeed very confusing.

So now we finally turn to Pereboom’s four cases.  This is my summary of the cases from the above post:

1) Neuroscientists can deliberately manipulate Professor Plum’s reasoning process to make him have desires, at least, that are more rationally egoistic than moral, even though sometimes — I guess either when they don’t manipulate him or when the desires that are there at the time happen to work out that way — he can act morally.

2) Instead of directly manipulating his reasoning/desire-formation, they instead build in a set of desires that strongly bias him towards rationally egoistic choices, although he can overcome them with his other decision-making processes.

3) Instead of those desires being implanted by the neuroscientists, he gets them from training from his culture and upbringing.

4) This is all determined by physicalist determinism.

Pereboom claims that we have to consider each of the first three cases to be examples where Professor Plum is not morally responsible for his actions, and so from there have to conclude that the fourth case is also a case where Plum is not morally responsible.  But since the last case is the one the Compatibilist is defending, the Compatibilist must concede that in a deterministic world Plum is not morally responsible for his actions, defeating Compatibilism.  My attack on this starts at 2 and 3, where I argued that because in those cases what Plum has are influences that his decision-making processes can override to make the “proper” decision, Plum would still be morally responsible in those cases, so Pereboom can’t get to our not having moral responsibility in case 4 from the other cases.  I obviously think that we don’t have moral responsibility in 4 for reasons like the ones I outlined above, but his argument here doesn’t work.  Carrier, however, will go after the first — and foundational, to him — case and argue that Plum is indeed responsible for his actions in that case, which I strongly deny.  Unfortunately, how he gets there is by reading things into the thought experiment while chiding Pereboom specifically and philosophers in general for simply not understanding how things work in the world.

Here’s what he says:

It is silly to go to such lengths to construct this scenario, because we actually already have such scenarios in the real world that have been very thoroughly dealt with in our legal system—and they don’t turn out the way Pereboom thinks. If we just walked up to Mr. Plum and asked him to kill Mr. White in exchange for a hundred thousand dollars, we would have created exactly the scenario Pereboom is trying to imagine: Plum would not have killed White but for our intervention; our intervention succeeds by stimulating the requisite egoistic thinking in Mr. Plum otherwise in that moment absent (using the sound of our voice, maybe the sight of cash, and his neural machinery already present in his brain); and his acting on the offer remains in accord with his statistically frequent character (as otherwise he’d turn us down; and likewise Pereboom’s neural machine wouldn’t work).

No one releases such a Mr. Plum from responsibility. He will be adjudged fully responsible in this case in every court of law the world over. Pereboom’s argument thus can’t even get off the ground.

The thing is, this is not, in fact, equivalent to Pereboom’s first case.  I can prove that by expanding it to explicitly include Carrier’s case.  We have an organization that offers to pay people one hundred thousand dollars to kill certain people.  They have done this with Professor Plum in the past and he has accepted the offer in the past.  However, they want certainty, so they have also implanted a mechanism in his brain so that if he would ever go against his egoistic reasoning to take the money and instead reject it on moral grounds, they can flip the switch and the output of his decision-making process will be the one that a strictly egoistic decision-making process would have produced.  So they have done what Carrier insisted is equivalent, making it in Plum’s egoistic self-interest to kill the person, and then on top of that triggered their implant to override Plum’s decision when it would be based on morality rather than on egoistic concerns.  That’s clearly a step beyond what Carrier is proposing, and it would clearly work, as it would at the very least suppress the moral reasons that Plum would have used to make a different decision.

Now, a stronger argument from Carrier might have been to argue that there is no difference between egoistic and moral reasoning and so the argument cannot get off the ground that way.  So let me point out that even under Carrier’s view I can make that distinction.  Carrier allows for us to make immoral decisions on the basis of a shallow or mistaken view of what is in our egoistic self-interest, and we can say that Plum, in general, has been murdering people on the basis of the shallow self-interest of money rather than on the deeper self-interest of the effect that has on him and on society as a whole, but in this case that deeper self-interest was going to kick in and cause him to reject the money and refuse to kill the person … and then the device activates and switches him back onto the shallow self-interest path.  Again, it’s not them making the offer, nor is it merely them making the money path more salient.  By definition, they are changing what his decision-making process would have come up with without appealing to his decision-making processes.  They aren’t convincing him to do this, but are forcing him to do this.  Carrier’s entire “equivalent” case is simply a case where they convince him to do it, and so it doesn’t map to Pereboom’s case at all.

And this distinction is actually really important for compatibilism, because it explains why most compatibilists should lean towards accepting that the first case is a case where Plum is not morally responsible for his actions (but they could go along with me in saying that 2 and 3 are cases where he is).  In the first case, it is clearly the case that the direct external influence on the brain directly impacts the decision-making process, and so Plum’s decision was not produced by his normal decision-making process doing what it would or should do in that case.  We can see this by imagining that Plum would at some point use simulation — the psychological kind — to put himself back in that situation and run it forward to see what he would have decided in that case.  For pretty much all of his other decisions, if he constructed the simulation properly he would come up with the same answer.  Here, he wouldn’t, because his imagination wouldn’t include the direct intervention and so he would come to the conclusion that he shouldn’t kill the person for moral reasons, when he ended up killing them for egoistic reasons.  Thus, this isn’t the result that his decision-making process would have come up with, and compatibilists should conclude that it therefore can’t be a free choice and so he can’t be held morally responsible for it.  For the next two cases, his decision-making process could iterate over those values and desires and act morally, and so what his decision-making process actually does is what we can hold him morally responsible for.  But in the first case, what he did was not what his decision-making process would have actually concluded, and given that, we can’t hold him morally responsible for that action.

So Carrier misses the point completely, claiming that we have an example of Pereboom’s case that happens constantly in real life, when his example is one of convincing or influencing the decision-making process, not completely circumventing it.

Mr. Plum has not been tricked—he full well knows that he is choosing to kill someone, and for a reason that even Pereboom’s argument entails is both egoistic (and thus not righteous) and derives entirely from Mr. Plum’s own thinking—because all that the “mad scientists” have done is tip him back into his frequent “egoistic thinking”; they have not inserted a delusion into his brain that becomes his reason for killing White (had they done that, we’d be closer to the real-world case of schizophrenics committing crimes on a basis of uncontrollable false beliefs). And that’s all that courts of law require to establish guilt: a criminal act, performed with a criminal intent.

Presumably, when the decision is made the die is cast and the person will be killed based on that decision.  Plum will not have sufficient opportunity to self-reflect and so change that decision before it is carried out, because if he would, then either they’d have to push the button again or else he would decide as he would in my example above, and follow what his actual decision-making process would have come up with and not kill the person.  Thus, when he originally made the decision he did not have criminal intent despite having reasons to commit the crime because his normal decision-making processes acting normally would not have made that decision, and if he has the time to consider it later and decides to not change it he would only then develop criminal intent based on that later decision.  Thus, if the decision as Pereboom describes it is the only decision he makes, then he doesn’t have criminal intent and so can’t be held morally responsible for the crime.

“But they had a machine in his brain and pushed a button to activate it” would bear no relevance whatever at his trial. That would be no different, legally, from “pushing his buttons” metaphorically, by simply persuading him to do something, ginning him up into selfish violence. Guilt stems from his agreeing to go through with it “for egoistic reasons.” He is aware of what he is choosing to do, and chooses to do it anyway.

But that is precisely what he doesn’t do because of the machine:  he didn’t make the choice at all, because if they had left him free to make the decision by definition he would have chosen the other option.  Again, it would only be later if he examined the decision and then agreed with it that the last sentence would be true.  Again, it’s not persuasion, it’s direct intervention that circumvents his actual decision-making process.  That’s not something that a Compatibilist can accept as conferring moral responsibility on someone.

That he would not have thought to do it but for someone instigating it is irrelevant …

Under Pereboom’s example, he quite likely would have thought of it, and rejected it for moral reasons.  The instigation is, again, not reminding him of those reasons or making them seem salient, but directly forcing him to accept them and act on them despite the fact that his normal decision-making process, by definition, wouldn’t have done that.  In that specific case, those reasons would not have been strong enough to get Plum to kill that person, and it’s only the machine that makes that happen.  Carrier is just wrong about what the case is actually describing.

And then he takes on philosophy again:

Pereboom’s argument thus fails from its very first step, all from simply failing to realize he was describing a scenario that is already standard and dealt with routinely in the most experienced institution for ascertaining responsibility in human history: the modern world legal system, a product of thousands of years of advancement and analysis. Philosophers often do this: argue from the armchair in complete disconnect from the real world and all that it could have taught them had they only walked outside and looked around. Philosophers need to stop doing that. Indeed, philosophers who hose this simple procedure should have their Philosopher Card taken away and be disinvited from the whole field until they take some classes on How Not to Be a Doof and then persuade us they’ll repent and start acting competently for a change.

That’s a rather strong statement to make when it can be credibly argued that he didn’t understand the experiment at all, and also that the legal system is not actually a good system to look at for determining when someone is really responsible for their actions because a) it often appeals to philosophy to determine that and b) that’s not really its job.  Laws need to be implementable and have to explicitly aim at preserving society, and so they may well make compromises based not on what really is the case, but on what they can reasonably determine is the case.  Take Carrier citing Spitzley’s migraine example:

It is by the same reasoning that Spitzley’s example of a certain “Bob” who “has a migraine that causes his reasoning to be slightly altered in such a way that he decides to kill David and he would have not made this decision if he had not had this migraine” does not describe a case any legal system would rule Bob innocent in. It does not matter what drove you to do something, as long as you knew it was wrong and did it anyway.

The weakness in the example, to me, that makes it an odd one is that, applied to the real world, we cannot find a case where a migraine would actually alter someone’s reasoning enough to make them decide to kill someone.  The pain could make someone lash out, but that wouldn’t be a decision, and the legal system would likely treat it as something like manslaughter instead of fully motivated and intentional murder; but that’s a different consideration.  But even if someone did manage to find a case where the migraine could change their decision-making process sufficiently to make them not responsible for their actions, it’s actually quite likely and reasonable that the law might hold them responsible anyway, on the reasoning that even if they couldn’t be held responsible the law cannot confirm that they actually had that migraine to that degree, and so can’t allow someone to shirk responsibility on the basis of a condition that can’t be confirmed.  So the law wouldn’t be saying that he would still be responsible in that case, but that he cannot show sufficiently that he actually had the migraine.

And this is why using the law to settle philosophical matters is a bad idea.  The law doesn’t really care about getting it philosophically correct, but only about getting it correct enough to preserve society.  If they can’t determine if someone is or isn’t in the conditions that would impact whether or not they were responsible for the actions the law may well simply choose the one that best preserves society.  Philosophy, on the other hand, can assess whether that person would be responsible in that case even if there is no way of determining if they actually are in that case, which makes their analyses and thought experiments more reliable than legal decisions in these cases.

He then progresses to an even odder case that puts him even further from a compatibilist position:

Consider even Spitzley’s point regarding “Mele’s (1995) case of an agent who has someone else’s values implanted into them overnight.” If this were Bob’s fate, he would still be found guilty. Because it does not matter how he became who he is; all that matters is that “who he is” assented to and performed the crime.

No Compatibilist could accept this.  The reason is that just as we have normal decision-making processes, we also have value-forming processes.  If Bob is implanted with a completely different set of values, those values would not have been formed by his normal value-forming processes, and so they wouldn’t be his values at all.  Even legally, we’d have to consider someone in this situation to be at least temporarily insane, and so they wouldn’t be held legally responsible for their crimes.  For compatibilism, it does indeed matter quite a bit how he became “who he is”, at least at the extremes, because in this case the person with those values is not who he is.  Values formed through the normal processes, even if those processes are determined, make up who he is, but values forced upon him by brain rewriting do not.  Carrier’s statement here is strongly Hard Determinist, not Compatibilist.

He talks about the Angel/Angelus case, but it doesn’t apply here:  Pereboom’s cases are not like it, since Angel and Angelus are literally different persons, and Pereboom’s cases don’t involve an external direct manipulation of the brain in that way.  So let me finish off his examples with the example of Hal:

This is played out credibly well in a different bit of fiction: Hal 9000 in the film and novel 2010 is regarded as guilty of murder only by reason of a conflicting programming code, such that as soon as that is corrected, he is fully restored as a reliable colleague. Murderous Hal simply is no longer the same person as “fixed” Hal. So we don’t ascribe the guilt of one to the other. Nor should we. The only thing that seems counter-intuitive about this is that it is possible to instantly fix someone, converting them from a malevolent to a benevolent person with just some keystrokes. But the reason that feels counter-intuitive is that it doesn’t exist—that kind of thing simply isn’t a part of our real-world experience, and isn’t an available option (yet, at least) in dealing with malevolent persons among us. But we’ve already established in courts of law the analytical logic needed to cope with it if we had to. And that predictable result simply isn’t what Pereboom imagines.

Hal is a machine, and so we can repair the hardware and change the software, and no one cares about whether that’s the same Hal or a different one.  He’s been “fixed” to no longer be murderous and that’s that.  In fact, Carrier is probably wrong to say that it’s not the “same” Hal anymore.  If someone goes to therapy and corrects some mental problems, are they suddenly not meaningfully the same person anymore?  Carrier’s own logic ends up muddling things beyond comprehension.  It also compares people to machines again, which implies that if we could just change their brain without giving them therapy or convincing them they were wrong, that would be just as acceptable a way to “fix” them, and that is something that no Compatibilist should accept.  Again, Carrier seems to argue from a Hard Determinist position to defend compatibilism, which is obviously not going to work.

Ultimately, I can’t see how a Compatibilist could claim that Professor Plum is morally responsible in Pereboom’s first case, but can see how they could argue that he is in the second and third ones, breaking that chain.  Carrier, however, in attacking the first case misunderstands it, argues from a Hard Determinist perspective, and ultimately doesn’t even seem to be able to make his own argument consistent.

Free Will, Reductionism, Materialism, Emergence, and the Transporter

March 4, 2022

In an attempt to avoid making all of my Philosophy posts about Richard Carrier — although his latest is going to demand a response at some point or another — I’m going to turn to this post by the blogger known as Coel.  He used to comment here frequently (or as frequently as anyone commented here, which is infrequently) but I guess he isn’t reading this blog anymore.  He’s also a frequent commenter on various atheistic blogs such as “Why Evolution is True”.  We disagreed most over consciousness and morality, and when it comes to free will we are a bit closer because he’s a compatibilist and I’m a libertarian, so the big difference is over whether he can come up with a compatibilist view that can encompass what’s important and necessary about decision-making even in a determined universe.  If he could pull that off, then he’d make me a compatibilist as well, but I don’t think he’s managed it.

Anyway, he hadn’t posted in a while and I’d stopped following his blog up until he made a couple of posts around Christmastime (or a bit before and a bit after) and I happened to check in to see them.  This is one of them, and is looking at someone criticizing people who seem to be Hard Determinists — which Coel is not — while Coel is trying to defend reductionism in general and, oddly, their Hard Determinism as well despite having major dust-ups with Jerry Coyne who seems to have similar views.  He also uses/introduces/modifies a thought experiment about Star Trek transporters.  So I’m going to go through the post and talk about the things that might be muddled, confused or problematic.

Let’s start with the views he ends up defending.  The post is responding to an article by Bobby Azarian, so anything I quote from Azarian will be taken from Coel’s post; I note that just for clarity.  And Coel says this about people who seemingly would be criticizing Coel’s compatibilism as Hard Determinists:

He names Sabine Hossenfelder and Brian Greene. The “free will” that such physicists deny is “dualistic soul” free will, the idea that a decision is made by something other than the computational playing out of the material processes in the brain. And they are right, there is no place for that sort of “free will” in a scientific worldview.

I haven’t examined Greene’s work, but as it turns out I have examined Hossenfelder’s, and she doesn’t seem to be simply saying that (and so opposing dualists).  She pretty explicitly rejects the compatibilist project:

But some philosophers insist they want to have something they can call free will, and have therefore tried to redefine it.

Others have tried to argue that free will means some of your decisions are dominated by processes internal to your brain and not by external influences. But of course your decision was still determined or random, regardless of whether it was dominated by internal or external influences. I find it silly to speak of “free will” in these cases.

The first part is pretty much a summary statement of Hard Determinist opposition to compatibilists, and the last part is pretty much Coel’s view of compatibilism, and she thinks that it’s ridiculous to speak of anything like “free will” given that.  So she, at least, clearly opposes compatibilism, not merely dualist free will (she actually explicitly says that free will is a useless term and that people who use it don’t understand science).  So her view is strong enough to strongly clash with Coel’s and to pretty much fit the quote that Coel is opposing here.

He names, among others, David Deutsch and the philosopher Dan Dennett. But the conception of “free will” that they espouse is indeed just the computational playing out of the material brain. Such brain activity generates a desire to do something (a “will”) and one can reasonably talk about a person’s “freedom” to act on their will. Philosophers call this a “compatibilist” account of free will.

Importantly, and contrary to Azarian’s statement, this position is not the opposite to Greene’s and Hossenfelder’s. They are not disagreeing on what the brain is doing nor about how the brain’s “choices” are arrived at. Rather, the main difference is in whether they describe such processes with the label “free will”, and that is largely an issue of semantics.

I find this odd, since again one of the objections that Jerry Coyne — and other Hard Determinists — make to compatibilists is that their position is a mere semantic difference with no really meaningful distinction, which means that all compatibilists are doing is trying to maintain the phrasing to appeal to the masses and potentially avoid people acting badly because they believe that they don’t have “free will”.  I don’t recall Coel simply accepting that there was no significant difference in the positions, even as he noted that the behavioural differences that supposedly followed from Coyne’s view could fit under his view as well (and under Libertarianism, I’d like to add).  The positions are indeed significantly different, in that Hard Determinists, in general, align with Libertarians in arguing that you can’t make any kind of meaningfully “free” decision if determinism is true, and thus tend to have to conclude that we don’t make any kind of meaningful decision.  For example, it’s a very common Hard Determinist position to say that our actual choices are no different in kind from cases of kleptomania or brain damage in terms of how “free” they are.  Compatibilists like Coel reject that idea and think that those cases can be meaningfully distinguished by appealing to the proper definition of “free”, which actually aligns closer with Libertarians, who think it just clear and obvious that those cases are meaningfully different and that no account of human behaviour can be correct if it doesn’t result in those cases being different.  So those Hard Determinist views aren’t a simple semantic difference away from compatibilism, and it puzzles me to see Coel argue for that here.

Coel objects to this characterization from Azarian:

Origins-of-life researcher Sara Walker, who is also a physicist, explains why mainstream physicists in the reductionist camp often take what most people would consider a nonsensical position: “It is often argued the idea of information with causal power is in conflict with our current understanding of physical reality, which is described in terms of fixed laws of motions and initial conditions.”

He takes this as a criticism of reductionism that completely misses the mark and mischaracterizes reductionism, but that doesn’t really seem to be the case.  The idea is that information and meaning have causal power independent of the underlying structure, and there doesn’t seem to be room for that in reductionism.  I’ll talk more about that later, but Coel here introduces his thought experiment to clarify what reductionism means:

Imagine a Star-Trek-style transporter device that knows only about low-level entities, atoms and molecules, and about how they are arranged relative to their neighbouring atoms. This device knows nothing at all about high-level concepts such as “thoughts” and “intentions”.

If such a device made a complete and accurate replica of an animal — with every molecular-level and cellular-level aspect being identical — would the replica then manifest the same behaviour? And in manifesting the same behaviour, would it manifest the same high-level “thoughts” and “intentions”? [At least for a short-time; this qualification being necessary because, owing to deterministic chaos, two such systems could then diverge in behaviour.]

If you reply “yes” then you’re a reductionist. [Whereas someone who believed that human decisions are made by a dualistic soul would likely answer “no”.]

Now, when I first saw this, I thought it was just a simple transporter example.  Some might think that a dualist would insist that the “soul” doesn’t come along with the transport — and might be lost — but that isn’t actually the case.  What’s important about Cartesian dualism, at least, is that the mind/soul doesn’t have material properties, and one of those material properties is indeed “being in space”.  The mind is not in space and so doesn’t exist in any particular place.  So while it’s common to imagine that we have the soul sitting in the head that could be “left behind” in that case, that’s not what happens.  So in a transporter case, the answer for dualists is “It depends”.  If the transporter breaks down the physical side in a way that causes the mental side to “lose” the body, then the mind wouldn’t be there after the transport, but that isn’t a necessary feature of the mechanism, and by dualism it is possible for the mind to come apart from the body and then return to it later, so it depends on the details of the mechanism.  For dualists, we’d have to try it and find out, really.

As noted, though, this thought experiment is of a replica, and so essentially a clone, and not a simple transport.  This is actually a problem for dualists not because there’s a clear answer but because there isn’t one.  What would it mean to have an actual identical replica made of a physical body?  The obvious answer is that the replica wouldn’t get a mind, but as noted above it’s possible that the original mind runs both, or that a new mind is created/co-opted for the new replica.  It’s a tricky question that really does rely on how the mind is connected to the body, both originally and when things are changed.

But we aren’t talking about dualism.  We’re talking about reductionism, and Coel thinks that this thought experiment captures what the reductionist position really entails, and it doesn’t.  What Coel’s position describes here is one that is common among any materialist position about mind.  If the mental is the result of the physical, then if you duplicate all of the physical properties of a brain (or whatever physical things are needed, which in general is pretty much the brain) then you will reproduce the mind as well.  For some reason, Coel treats the above objection as saying that reductionism means that you don’t have to care about positions/places, but I don’t really see that (again, I haven’t read the article myself).  Any materialist position will say that if you reproduce the physical state you can reproduce the relevant phenomena.  That’s true for reductionism, eliminativism, and even emergentism.

So what actually is the reductionist position?  The reductionist position, as differentiated from the two I mentioned above, says that the concepts at the higher levels may be valid and meaningful, but you can use “bridge laws” to “reduce” those concepts to concepts at the lower level.  In short, you can use those laws to find out — in principle, at least, if potentially not in practice — how those higher-level concepts are reflected in the lower levels, and you can do that all of the way down.  Eliminativists differ in that they say the concepts at the higher level are meaningless and add no value whatsoever, while emergentists argue that you can’t find the correlates to those concepts at the lower level (at least for strong emergence).  So what reductionism really means, in terms of implications, is that the higher levels are identical to the lower levels in a really serious and important way.  Emergentism says they aren’t, and eliminativism says the higher levels don’t really exist.  But if those theories are all materialist, they will all say that if you duplicate the physical level you have duplicated the “mind”, because what else could you use to say that the two things are not the same?  There is going to have to be some kind of physical difference to point to if the things aren’t the same, or else materialism is false.

Additionally, you can have non-material reductionisms that would fail Coel’s thought experiment.  Someone could indeed posit a view of non-material mental processes where higher level mental processes reduce to lower level mental processes by bridge laws.  This would be a reductionist view by definition, but it would have to argue that if you duplicated the physical properties you wouldn’t have necessarily duplicated the mental ones and so they might not come along as the process goes along.  So this thought experiment doesn’t say much about reductionism.  Even by comparison to emergence:

Well, no. Provided you agree that the Star-Trek-style replication would work, then this counts as weak emergence. “Strong” emergence has to be something more than that, entailing the replication not working.

I looked the terms up, and my understanding of them is that weak emergence is closer to reductionism:  you can derive the higher level properties from the lower level properties in some way.  Strong emergence simply says that you can’t do that:  there is no way to determine that those properties would result from that underlying structure by analyzing the underlying structure.  What it doesn’t do is deny that the properties are ultimately the result of that underlying structure.  It just claims that they aren’t predictable from that level.  The common example of strong emergence is the feeling of wetness from water (a bad one because feelings are actually properties of consciousness, but let’s let that slide for now).  You can’t explain even in principle how to get from the properties of H2O molecules to those properties of “wetness”, but no one denies that “wetness” is the result of those properties, somehow.  And so we’d all agree that if you ever stuck hydrogen and oxygen together in that form you’d get wetness.  Strong emergence just says you aren’t going to be able to discover that without actually sticking them together and making it work.

Azarian talks about top-down causation as a clarification of the above quote, and here’s where I think Coel doesn’t really get the idea:

Top-down causation is another phrase that means different things to different people. As I see it, “top-down causation” is the assertion that the above Star-Trek-style replication of an animal would not work. I’ve never seen an explanation of why it wouldn’t work, or what would happen instead, but surely “top-down causation” has to mean more than “the pattern, the arrangement of components is important in how the system behaves”, because of course it is!

Again, I have no idea where the idea comes from that the objection is merely about the arrangement of components.  What seems to be going on is a similar objection to one I made when talking to im-skeptical, which is that in order to get proper agency we need the information processing itself to actually have causal power.  What I’d see in particular with a reductionist model is that it really looks like all of the levels have the same causal power, and so there is no additional or differing causation happening at the various levels.  We might talk about or describe the causation differently and in different terms, but it can’t be the case that a completely different type of direct causation comes into play at a different level (to forestall objections about a more Aristotelian differentiation of causation coming into play).  So what I think we could do is explain all of the relevant causation by tracing the causal chains and paths at the lowest level, which is generally the level of physics.  It may be extremely difficult to do and may be somewhat meaningless, but it can be done.  And this seems to work for chemistry, where we can indeed describe all chemical reactions by reducing them to the causal events that joined the individual atoms together.  I would suggest that if you can’t do that then you don’t have reductionism.

However, this doesn’t seem to work for information and meaning.  It seems that in order for us to have proper agency we need causation based precisely on the concepts of information and meaning that only appear at the higher level.  Information and meaning don’t appear at the biological, chemical or physical levels of the brain.  So if we look there, we won’t see anything at all that maps to that information, at least not as far as we can tell.  A specific neural firing doesn’t in any way represent any kind of meaning, as we see with connectionist systems (as I’ve noted in the past, I can hook up any connectionist system to a different external system and have it work even if the “meaning” of the inputs and the outputs is completely different).  So where is causation based on “meaning” happening?  It’s not as if we have chains of neurons that resolve themselves into things we can identify as pieces of information or things that have meaning, the way in chemistry we have chains of atoms that we can take as a whole and call “molecules”.  So if we want our behaviour to be caused by what things mean, we need to find a way to represent meaning at the level where the “real” causation is happening — the physical level — in a way that lets “meaning” play a direct role in its causation.  And it’s hard to see how that can happen, since meaning has no place in the underlying physical level.
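The point about connectionist systems can be illustrated with a toy sketch.  Here is a minimal one-layer network with arbitrary, purely illustrative weights; the “chess” and “weather” labels below are hypothetical interpretations imposed from outside.  Nothing in the mechanism itself encodes what the numbers are about, so the same numbers produce the same outputs under either interpretation:

```python
# A toy "connectionist system": a fixed weight matrix and a threshold.
# It transforms numbers into numbers; nothing in it represents what
# those numbers mean.  (Weights are arbitrary illustrative values.)

def forward(weights, inputs):
    """Compute one layer's outputs: weighted sums through a step function."""
    outputs = []
    for row in weights:
        total = sum(w * x for w, x in zip(row, inputs))
        outputs.append(1 if total > 0 else 0)
    return outputs

weights = [[0.5, -1.0, 0.25],
           [-0.75, 0.5, 1.0]]

# Interpretation 1: the inputs are (hypothetically) chess-position features.
chess_features = [1.0, 0.0, 1.0]
# Interpretation 2: the very same numbers read as weather measurements.
weather_readings = [1.0, 0.0, 1.0]

# The outputs are identical either way -- the "meaning" of the inputs
# plays no role in the causal story, only the numbers do.
print(forward(weights, chess_features))    # [1, 1]
print(forward(weights, weather_readings))  # [1, 1]
```

The causal work is done entirely by the arithmetic at the bottom level; swapping which external system supplies the inputs changes nothing inside the network, which is just the hook-up point being made above.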

He thinks we can get that meaning from comparing it to a chess computer:

If we want to talk about agency — which we do — let’s talk about the agency of a fully deterministic system, such as that of a chess-playing computer program that can easily outplay any human.

What else “chooses” the move if not the computer program? Yes, we can go on to explain how the computer and its program came to be, just as we can explain how an elephant came to be, but if we want to ascribe agency and “will” to an elephant (and, yes, we indeed do), then we can just as well ascribe the “choice” of move to the “intelligent agent” that is the chess-playing computer program. What else do you think an elephant’s brain is doing, if not something akin to the chess-playing computer, that is, assessing input information and computing a choice?

Basic chess-playing computers are essentially the guy in Searle's Chinese Room:  they take in a basic input, look it up in some kind of look-up table, and then output the "right move".  While we can argue over whether the room itself understands, the guy doesn't, and that intuition is what drives people to think the room doesn't understand.  So it doesn't seem to be assessing the information for meaning.  And the Deep Learning ones are connectionist systems, and again they aren't assessing the information for meaning, because again I can hook that computer up to something else that, say, tries to solve differential equations and it will gamely try to do that — and might even succeed — instead of replying that it has no idea what those inputs mean and so can't do anything.  So that's not an example of where the meaning of the information itself is causing the outcomes, and that's what Azarian seems to be saying he needs.  Top-down causation, it seems to me, is the case where causation happening at the higher level changes the lower levels in ways that wouldn't have happened if that causation hadn't happened at the higher level.  Bottom-up seems to me to be what I described:  the higher-level causations and differences happen because of the causations and differences at the lower and ultimately the lowest level.  So when the lower level changes the higher level also changes, but it's not the case that there is any meaningful change at the higher level that wasn't already reflected at the lower level.  So changes at the lower level determine the state of, and therefore the changes at, the higher level.  The argument, though, is that the higher level is the one that has the concept of meaning and so is the only one that can change things on the basis of meaning, but it can't actually do that, because all the causation is happening at the lower level.
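The "hook it up to something else" point can be made concrete with a minimal sketch (everything here is a hypothetical toy, not any real chess engine): a bare lookup-table system behaves identically under two incompatible external labellings of its inputs and outputs, so nothing inside the system itself can be said to be assessing the meaning of what it processes.

```python
# A toy "Chinese Room" style system: a bare lookup table from input
# symbols to output symbols.  The table attaches no meaning to either side.
lookup_table = {"s1": "o1", "s2": "o2", "s3": "o3"}

def respond(symbol):
    """Return the 'right' output for an input symbol by rote lookup."""
    return lookup_table[symbol]

# Two incompatible external interpretations of the very same system
# (both labellings are made up for illustration):
chess_labels = {"s1": "pawn to e4", "s2": "knight to f3", "s3": "castle"}
thermostat_labels = {"s1": "raise temp", "s2": "lower temp", "s3": "hold"}

# The system produces the same outputs under both labellings; nothing
# inside respond() changes, so the "meaning" lives entirely in the
# external labelling, not in the mechanism.
assert respond("s2") == "o2"
```

The same trick works on a trained connectionist network: relabel its input and output lines and the identical weights "gamely" compute away, which is the point above about meaning not residing in the mechanism.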

So Coel would need to show how the lowest levels can change on the basis of the meaning of the information we have, and chess computers aren’t doing that now.

Let me summarize with this:

The macrostate cannot contain more information (and cannot “do more causal work”) than the underlying microstate, since one can reconstruct the macrostate from the microstate. Again, that is the point of the Star Trek thought experiment, and if that is wrong then we’ll need to overturn a lot of science. (Though no-one has ever given a coherent account of why it would be wrong, or of how things would work instead.)

If we can get causation based on meaning in the microstate, then this will all work.  The question is whether we can actually do that from the microstate.  The thought experiment doesn't address this in any way, and so the objection is not vulnerable to it.  The question is about materialism and about the specific posited mechanisms of the mind, not about reductionism or emergentism.

Jonathan MS Pearce on Free Will

October 29, 2021

A while ago, I commented on a post of Jonathan MS Pearce's over at "A Tippling Philosopher", and had an aside about how some things he does there and in his books — I've read a couple of them but haven't commented on them yet — annoy me.  Specifically, he will do things like dismiss the free will counter on some positions and then demand that those who want to use that argument demonstrate that free will exists before they can do so.  Not only is that going to be very difficult for people who are not philosophers to do, it also takes an open and long-standing debate in philosophy and asserts that the solution has been found and that it's hard determinism that has won.  This is a really bad move, because in general most philosophers are compatibilists, not hard determinists, and almost all of them will agree that the question has not yet been settled.  Pearce himself commented on my post talking about how there is no coherent idea of libertarian free will, which eventually led to him sending me an article in an upcoming book that talks about free will, which is now going to lead to me talking about it, because obviously I disagree with him over whether hard determinism is the correct way to look at free will, and over whether we have free will at all.

So let me start with his opening issue.  Like almost all hard determinists, he starts by defining free will as “the ability to do otherwise”, and then works through an example of what that would mean and why it doesn’t work:

Imagine a decision. For example, let us take Wendy. She decides at 09:15 to give $5 to a homeless person she passes in the street. Now imagine that the world continues for any amount of time (say, ten minutes). We then rewind the world back to 09:15. The LFWer believes that Wendy, at 09:15, could just as well have decided not to have given the money to the homeless person, rationally and consciously.

The issue is that, speaking as an LFWer — a Libertarian Free Willer — I don't believe that, and again most people who care about free will at all don't believe that.  This sort of situation is what follows if you take the vague and somewhat folk view of "The ability to do otherwise" and try to use that simple phrase as the basis for a deep analysis of free will, in much the same manner as people may argue that evolution makes no sense because "If humans evolved from apes, then how come there are still apes?".  While proponents of evolution often consider that to be a sign that the people making that argument simply don't understand evolution at all, that's not really the case.  We can make the objection seem more reasonable by changing it to "How come humans changed so radically from those ape-like creatures, but apes didn't?".  The mistake, though, is that when we say that humans evolved from apes or even ape-like creatures, what we mean is that there was a common ancestor of humans and apes that had more characteristics of apes than of humans.  The mechanism of evolution then explains the differences by appealing to differences in environment and to mutations that caused apes to evolve less, or in different ways, than humans did.  The phrase, then, only points to the ancestor, and for evolution the ancestor is less important than the process, which the statement references but doesn't analyze.  To really do a proper analysis, you need to look more at the process and less at the vague explanatory statement that describes the relationship between humans, apes and that common ancestor.

The same thing applies to the folk phrase of “The ability to do otherwise”.  Taking that as the totality of free will and analyzing free will from that point produces the exact problem we see here, where the focus is on whether a different decision can be made and not on what it means to make a proper free decision.  So then we fall into the pit of winding back time and claiming that different decisions have to be made no matter how ridiculous, which as Pearce notes decouples free will from reasons and reasoning, which would then produce a rather strange idea of free will.  But when we analyze what is important and meaningful about free will, what we discover is that free will decisions are crucially related to reasons and to the decision-making processes that are responsive to reasons.  What we mean when we say that they could do otherwise in that situation, really, is to note that the outcome is not determined until the decision-making process finishes, and whatever decision that process comes to is the one that we will follow.  Now, all rational decision-making processes — like the ones humans tend to use when they deliberate — will respond to reasons, and so an LFWer would have to conclude that if her decision-making process proceeded as it did the first time then she would indeed make the same decision even if we rolled back time.

This actually goes a lot further.  Decisions are made on the basis of us examining the situation we are in and assessing the available options, on the basis of what we desire and what we believe to be true, to pick the action that maximizes the satisfaction of our desires — including the relative importance of those desires — given what we think is true about the world.  Thus, for a perfect decision-making process, there really is only one right decision or, if there is more than one that is equally important, all you could do is randomly choose between them.  However, in humans our decision-making processes are not perfect, and so we will decide to take suboptimal actions for various reasons, including that we just don't happen to think of those actions, or forget desires or beliefs we have when considering what action to take.  For example, imagine that I'm deciding what to have for supper.  I ponder a number of options and then decide that I'm going to cook chicken strips, and start cooking them.  And then I remember that I had leftover spaghetti and wanted to have that, because it's too much to be eaten as a snack or small meal, and for the next few days I have other things that I wanted to eat as meals that will go bad if I delay them.  However, now that I've started cooking I can't change what I will eat, even though if I had remembered the spaghetti I would have made a different decision.  So given the beliefs and desires I have, there is only one decision that a perfect decision-making process would come to:  I would have the leftover spaghetti.  But since my decision-making process is not perfect, I can come to any number of other decisions, and which one I come to is determined by the decision-making process itself and the considerations I make or fail to make while making the decision.
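The spaghetti example can be sketched as a toy model (the desires, weights, and satisfaction scores below are all made-up illustrations, not a claim about how minds actually work): a perfect process scores every option against every desire, while an imperfect one scores only the desires it happens to recall, producing a suboptimal choice that is neither random nor fixed by the full set of desires.

```python
# Toy model: supper options scored by how well they satisfy weighted desires.
desires = {
    "use_up_spaghetti": 5,   # the leftovers will spoil in a few days
    "easy_to_cook": 2,
    "tasty": 3,
}

# How well each option satisfies each desire (0.0 to 1.0; invented numbers).
satisfaction = {
    "spaghetti":      {"use_up_spaghetti": 1.0, "easy_to_cook": 0.9, "tasty": 0.6},
    "chicken_strips": {"use_up_spaghetti": 0.0, "easy_to_cook": 0.7, "tasty": 0.8},
}

def decide(options, considered_desires):
    """Pick the option maximizing satisfaction of the desires actually considered."""
    def score(option):
        return sum(desires[d] * satisfaction[option][d] for d in considered_desires)
    return max(options, key=score)

options = ["spaghetti", "chicken_strips"]

# A perfect process weighs every desire and settles on the spaghetti.
assert decide(options, list(desires)) == "spaghetti"

# An imperfect process that forgets the spaghetti-related desire picks
# chicken strips -- not randomly, but because of which desires it did
# and did not consider.
assert decide(options, ["easy_to_cook", "tasty"]) == "chicken_strips"
```

The outcome of the forgetful run still depends entirely on reasons, just not on all of the reasons the agent actually has, which is the sense in which the decision above is neither determined by the full belief/desire set nor random.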

The thing is, though, the decision in the case above isn't random either, and random and totally determined are the only two options that Pearce — and, to be fair, most people in the debate — allows for.  That decision doesn't seem to be totally determined, as it does indeed rely on me not considering something that it was perfectly reasonable for me to consider, but it isn't random either, because it is based not only on the desires and beliefs that I did consider but also, critically, on the desires and beliefs that I forgot to consider.  Ultimately, when we look at our decisions even in such simple and common cases, they definitely seem to be reason-responsive, which is certainly not random, but they also aren't determined by all the desires and beliefs we clearly have, only by the ones that are actually considered at the time.

So when Pearce later asks what the world would look like if we had free will, while the simple answer from LFWers would be "Like the one we have, since we have it!", we can give him a more detailed answer here.  What we should see when we examine the actions of people making decisions that we consider free will decisions is that they should be reason-responsive.  What this means is that if you take the same person and put them in similar situations, they should act in a way that aligns with their basic character and desires and beliefs, and so much of the time they will make similar or perhaps even identical decisions.  And yet, at times, even in almost identical situations, they will make completely different decisions and take completely different actions.  In general you won't be able to find something in the environment that has changed that explains the difference, but you will be able to find a "reason" for it, so it isn't entirely random either.  They in general won't do it "for no reason", but if you look outside them for the cause of the different outcome you won't find it; you will only find it by appealing to their internal beliefs and desires.

For example, imagine that I’m going down to the cafeteria for lunch, and I’m offered a choice between poutine or Sloppy Joes, to use one of my favourite examples.  Given that I really like poutine and dislike Sloppy Joes, there is almost no chance that I will choose anything other than poutine.  While I definitely could choose Sloppy Joes — and so could “do otherwise” — my beliefs and desires are such that I’m never going to.  In order to understand why I will never choose Sloppy Joes over poutine, you will not appeal to the outside world but instead will have to appeal to my inner beliefs and desires.

So what we will see in the world is that in general people will make decisions in line with what we'd consider their basic personalities; the decisions will not always align with that, but when they don't we will be able to find a reason why.  To continue the food examples, some people on going to a restaurant will always order the same thing and will be very hesitant to try any new items that appear on the menu, and some people will always order something different and be eager to try any new menu items.  And yet, either of those people may change that behaviour on certain days or at certain restaurants.  The person who always wants to try something different may, at a particular restaurant, always order the same thing, and the person who always wants to order the same thing may try something different or a new menu item.  But this won't be random.  They will always have a reason for the change in their typical behaviour, and it will be internal, not external.  So the person who atypically decides to stick with one order might do so because they really, really love that dish and can only get it there, and so every time they get the chance to have it they take it.  The person who generally doesn't order the new item may look at it and find it appealing.  Heck, someone who always orders the same thing may simply decide one day that they don't feel like the same old thing and so feel like doing something new.  They would be deciding to give in to a lower-level — and possibly more determined — feeling that they could resist and have resisted in the past.  That doesn't make it random, but instead keeps it reason-responsive, even if their "reason" is just to avoid considering and following their reasons, like someone deciding to flip a coin to decide something rather than working it out in detail.

So the key to free will and our decisions as we see them in the world is that we need them to be reason-responsive in a way that determined outcomes and random/probabilistic outcomes don't seem to be.  If we look at a determined decision — and examples like the Libet experiments — we see that it separates our deliberations from the outcome, and so the reasons we think we are using to make those decisions don't actually have to be the "reasons" we took that action.  A determined world leaves no room for anything else to be the causally determining factor; since it's only in our conscious deliberations that we truly consider reasons, those deliberations would have to have a causal impact on the outcome, and determinism says they can't.  On the other hand, random or even probabilistic methods don't allow reasons to play a strongly determining role.  Probabilistic ones might allow reasons to bias the outcomes one way or the other, but it really does seem like reasons — and, in particular, the reasons that we consciously consider — play a determining role in what actions we take, not merely a biasing one.  So neither of these works for our experience of making decisions.

And speaking as someone who rejects materialist ideas of mind, these considerations are what characterize the entire debate:  it doesn't seem like a purely physical mechanism has any room to actually take into account things like meaning, reasons, intentionality, intensionality and all of the things that seem to make humans actually intelligent.  While some may point to neural nets as examples of purely physical intelligence, the interesting thing about them is that they actually don't decide things on the basis of reasons at all, and you cannot ask them what their reasons are for deciding what they decide.  Any determinations of the reasons behind their decisions have to come from humans, not from them.  We can contrast this with inference engines that can give the reasons for their decisions, but under the hood all they are doing is symbolic matching and explanation, and so there is little reason to think they really understand what those things mean.  Ultimately, what is important for human intelligence is meaning, and strictly physical systems don't seem to have any way to actually get that.  As noted above, that's the issue with free will:  we need reasons, and the alternatives don't seem to be able to actually provide reasons.

So after clarifying what free will really means, we can address his comment on compatibilists:

Compatibilists deny LFW too. But they take the term free will and mold it into something new; they redefine it.

This is a common take from hard determinists on compatibilists, but it's actually incorrect.  Compatibilists don't want to redefine free will, but instead want to clarify what is important about it, given that the arguments from hard determinists don't really seem to be making a dent in convincing either the common people or most philosophers that we really don't have free will.  And while Pearce casts their view as generally being about being able to do what we want to do, the common thread among compatibilists tends to be pretty much what I've pointed out above:  they've identified that what is really important to us about free will is that our decision-making processes determine our actions, and so they try to find ways for those processes to be determined and yet still meaningfully determine our actions.  I'm not a compatibilist because I don't think that can be done, but I respect their move even if I think it won't bear fruit.  Pearce's comment here only highlights that the simple view of free will as the "ability to do otherwise" is insufficient to properly analyze the philosophical debate around free will.

Which we can now finally return to.  In general, LFWers will say that if her decision-making process ran properly and nothing changes her reasons, then she will make the same decision again and that will still, in fact, be a free decision.  Compatibilists will argue the same.  And I dare say that the average person would say the same thing.  Of course, there have been numerous psychological studies showing that the average person will give deterministic or libertarian answers to questions depending on the context, which I think the above discussion can explain.  If we focus on Wendy deliberating on her decision and having a good reason not to give the money to the homeless person, I suspect that in the above thought experiment most people will answer that she will not change her decision, which would look deterministic.  However, if we focus on Wendy regretting her decision not to give the money to the homeless person, then I suspect that most people will say that she would change her decision.  In both cases, we'd be focusing on the reasons she had for the decision in the first place and how "proper" those reasons were.  If she had a good reason for not giving the money, that won't change just because time was rewound.  But if she feels that based on what she knew at the time she made the wrong decision, then we think it possible that the decision-making process could work out better the second time around.  Again, as noted, it's all about the reasons.

So, after all of that, let’s see what’s left to talk about in the essay.

Some libertarians will claim that, yes, we are largely determined, or influenced, but that we can overcome this problem with our own volition. This is what I call the 80/20 Problem. That is to say, if an LFWer claims that they are influenced, say, 80% then this leaves 20% of the decision-making process open to agent origination (this is often put forward by proponents I have spoken to). The problem is that all the logical issues I mentioned above are now distilled into the 20%. In effect, it makes the problem worse. The LFWer here accepts much determination, but allows a small window of opportunity, but does not escape the grounding objection and any other logical issue with causality previously expressed. The problem of LFW is even more acute, then.

I’ve gone over a lot of free will books and views lately, and the most common view that seems like the one Pearce is referencing is the idea that in general things are determined but that our decisions can interrupt that determination and substitute a different path and so a different action.  This avoids the need to have free will being constantly active while preserving our decision-making mechanisms.  So it isn’t just that it’s a certain percentage, but instead that it’s under certain conditions, which alleviates the problem that Pearce is referring to here.  And, of course, no libertarian has ever claimed that everything we do must be driven by free choices, allowing for things like automatic and instinctive and emotional responses and the like to be determined, but that we can override them if we decide we want to, as I talked about with the restaurant example.

Which is important, because in the next long section Pearce spends a lot of time talking about genetic and psychological components that impact the decisions we make.  But libertarians do not deny that things like that can influence our decisions.  They only argue that they don't determine them.  What we can have are predispositions or influences, and those influences and predispositions can cause our decision-making processes to come out with less than ideal conclusions, but ultimately it's still our decision-making processes that make the final decision.  And, in fact, these sorts of things actually support libertarianism more than determinism, because we can see that people with these things often have radically different outcomes in the very areas where those things are most relevant.  We know that there's a genetic predisposition for alcoholism, and yet while children of alcoholics disproportionately become alcoholics as well, they run the gamut from alcoholics to people who never drink to people who drink normally.  Pearce talks about priming, but again all that does is make people more likely to make those choices, and many people in the experiments don't make the choices they are primed to make.  The most interesting example Pearce gives is this:

Liane Young and her colleagues have astonishingly found that in making moral judgments, a key area of the brain is a knot of nerve cells known as the right temporo-parietal junction (RTPJ), and that by sending in transcranial magnetic stimulation (TMS), they were able to change people's moral judgments. The judgments of the subjects shifted from moral principle to verdicts based on outcome (in philosophy we might say from deontology to consequentialism). The ramifications of which are that such judgments are physical in nature or grounding and that physical influences in the brain are likely to have an effect on core moral judgments.

However, that example never actually said that the stimulation determines their decisions, or even that these were moral decisions; it said that it focused the subjects more on considering the consequences of their actions than the strict morality of them.  As noted above, that could just be impacting our decision-making processes to make them less — or more, depending on your view of morality — correct.  So it's not really changing our moral judgements, because it isn't clear from the example that people still thought of it as a moral decision.  At any rate, it's all influence, and libertarians accept influence.  And as Pearce himself notes, the issue with Libet-type experiments is that they don't look like actual decisions of the sort that we care about.  (I've already addressed Wegner's view here).

Here Pearce highlights something that is always a problem for hard determinists (Jerry Coyne has fallen into this trap consistently):

The point I want to make here is that most people think that such a tumor would abrogate moral responsibility in the agent. In other words, Whitman should not be deemed fully morally culpable for his actions since his brain was impaired: he couldn't help himself. I want to look at this claim because it implies that a neurotypical person is categorically different to someone with a brain tumor.

I contest this.

Of course, the tumor makes a person act differently to that which they would have done. But all it actually does is change one form of determined outcome into another. It is not, I posit, a categorical difference. I think people make this mistake too often such as in the sort of claim that follows: “I think it’s not sensible to infer anything about ‘normal’ cognition from experience of people who exhibit obviously-abnormal cognition.”

The issue is that even hard determinists need to consider the case of tumors or abnormal brain states, or else nothing can make sense.  There is a significant difference between someone whose brain condition short-circuits their decision-making process and makes them act in a certain way, and someone whose actions follow from their decision-making process.  We need to treat those people completely differently, and differently in a way that aligns with what we think of as moral responsibility.  Pearce may argue that these things are all determined, but we still need to make the distinctions that he contests.  And one of the arguments that compatibilists make against hard determinists is that they argue for rejecting these concepts and distinctions but in the end need to reinsert them so that they can explain human behaviour.  So if the compatibilist move would work, continuing to make sense of human behaviour while also accepting determinism, there seems to be no reason for hard determinism at all.  And we know that we have to treat the kleptomaniac, the person who steals to feed their family, and the person who steals for the heck of it differently, no matter what we call the distinctions that justify that different treatment.

I don’t want to get too deep into discussions of God, but at the end Pearce basically uses the ideas around free will to argue that the nature of God as defined in Christianity, at least, makes it so that we can’t have free will and so that God Himself can’t have free will.  Let me just address these briefly.

First, Pearce takes on the pretty standard argument that God's omniscience means that He knows everything that we are going to do before we do it, and so we can't do anything other than what He knows we will do, and so we can't have free will.  So imagine that I get the ability to go ahead into the future, and when I do so I see that you are going to do something, but when I return to the present I do nothing with that knowledge.  I have no causal contact with you.  If I have no causal contact with you after I gain that knowledge, that knowledge cannot play a causal role in your decision.  And if it can't play a causal role in your decision, then the fact that I merely saw the outcome cannot change the causal chains that produced that decision.  So my knowledge cannot cause your decision, cannot override whatever causal process produces it, and so cannot in and of itself determine your choice or take away your free will.  Simply observing the outcome of a free choice before it was technically made cannot turn it into a non-free choice without somehow actually causally determining the choice itself, and God's omniscience is an observation, not a cause, so it in and of itself cannot mean that we do not have free will.

In general, then, when we think that being able to simply observe a choice before it was made means that it cannot be free, what we are assuming is that the only way to be able to do that is if the world is such that all choices are, in fact, determined.  The future must already exist at the time of the choice, and so must also be determined at the time of the choice, or else how could we actually observe it?  This, however, is not a safe assumption.  Take the case of the Prophets from Deep Space 9.  They could see the future, but only because they themselves were outside of time, and so the idea of past, present and future made no sense to them whatsoever.  So, then, given that their knowledge didn't have any causal impact on anyone (er, for the most part, as they did interfere in the affairs of people inside of time), is it necessarily the case that we needed a process that could determine everything so that the "future" could exist for them to observe?  Being outside of time, they could progress up and down the "timeline" at will at any point that time exists, but that would only mean that they technically came in "at the end", once everything had been done.  If they are only observing, then all they'd see are the free will choices that people have made, and they would have no causal impact on them.  So the choices could still be free and yet the Prophets would technically see them "before" they happen.  God is noted as being outside of time as well, and so again it isn't at all clear that His omniscience would impact our free will.

(Note that, yes, there are issues for the Prophets and God who do interfere in the timeline.  That’s not an issue I want to take up here since it isn’t Pearce’s original comment).

In the second argument, Pearce argues that God’s perfect nature means that He can only do what is maximally loving (Pearce’s example) and so since He can do nothing else He doesn’t have free will.  However, as noted above a perfectly rational agent will also do the thing that is most in line with their beliefs, desires and nature.  That is not a clash with free will, but is instead the definition of free will.  So that God will always perfectly exercise his free will in line with his nature is simply the result of the fact that, unlike us, his decision-making processes are perfect and so always perfectly reflect his free will, a state that we can only aspire to but never achieve.

Ultimately, Pearce tried to argue that free will is logically inconsistent and so no one can credibly think that free will exists.  As we’ve seen in this post, I strongly beg to differ, and think that I’ve made a good stab at it.  The ball, then, is back in Pearce’s court since someone clearly has tried to take on his arguments against free will and clearly finds them lacking.

Living Without Free Will

August 20, 2021

So, it’s only in the last two chapters of “Living Without Free Will” where Pereboom finally gets around to talking about how we can live without free will or what life would be like without free will.  But for the most part all he does is try to talk about how hard incompatiblism doesn’t seem to have the problems that we think it does, and so ends up making the same mistake that a lot of hard incompatiblists — like Jerry Coyne — do in that he ends up essentially trying to stick with the general structure of a society that we currently have while taking specific cases where hard incompatiblism says something different to show how it’s better.  The problem is that even those purported improvements are dependent on the structures and ideas that we developed in a world where we at least thought that we had free will, and so it ends up not being clear that those underlying structures make any sense in their model.  And even worse, if all they want to do is keep most of the ideas and structures of free will but alter some of them as a reaction to determinism, that seems to be pretty much what compatiblism wants to do, so it becomes difficult to see why it’s not just a form of compatiblism with a different approach.  So hard determinists like Pereboom often make strong statements about us not having free will and how we need to throw the entire concept out, but then either continue to use things that assume some form of free will, or else throw out the terms but then reinsert them with different names but without a justification.

The example that most struck me is the one where Pereboom talks about someone who constantly does something that harms or irritates you, but who, when called out on it, seems genuinely apologetic and committed to avoiding doing that.  The first problem with this is that it’s difficult to say what it would mean for someone to be genuinely apologetic under hard incompatibilism.  That apology is just as much a determined reaction as the original event was.  What does it mean for them to be genuine?  They’d have to be able to act differently if they weren’t being genuine, but again they can’t react differently.  Being genuine implies that the real difference is internal to the person.  But if that internal state is totally determined by external factors, then what meaning is there in saying that they are being “genuine”?  Presumably, they could be just as genuine when they are taking that harmful or annoying action, and certainly under hard incompatibilism both cases reflect them equally well, because under hard incompatibilism neither case can really reflect “them”, being determined by external factors and not by things specifically about them.  If we wanted to claim that these things were still determined but that the internal processes are importantly the ones that determine their actions — and so that the actions are critically determined by the person themselves and so really reflect them — then we’re pretty much pushing a compatibilist line.  So it’s very difficult to see how the term “genuine” can even have meaning under hard incompatibilism.

But hard incompatibilism also seems to have a problem dealing with this situation.  Under libertarianism or compatibilism, in such a case we would see an obvious contradiction that needs to be resolved.  If they really do genuinely want to avoid hurting us but nevertheless keep doing it anyway, then we can identify that as a specific issue that needs to be resolved: either they really don’t want to avoid hurting us, or else they have some kind of overwhelming compulsion to commit that action that we need to help them break.  But under hard incompatibilism, their purported compulsion to commit that action is exactly the same sort of thing as their expressing that they “genuinely” don’t want to take that action and hurt us.  In the “normal” cases, we have a contradiction that we need to explain and resolve.  In the hard incompatibilist case, that purported contradiction is nothing more than what the agent does in those situations, and so isn’t a contradiction at all.  We have to contort the philosophical position of hard incompatibilism even to see it as a problem that needs to be fixed, and contort it even more to decide that the actions that harm us are the ones that need to be fixed.  And yet the entire example relies on us understanding it as a contradiction that we care about and, presumably, want to resolve.  So its entire basis is a model that it is purportedly rejecting.

So, to finish off Pereboom, there wasn’t much in this book that I didn’t get from “Four Views on Free Will”, and so it wasn’t all that interesting to me from that angle.  But at the end, I came away even more convinced that hard incompatibilism is a non-starter, because he hit the same problem that most people do when they promote that position: coming up with a way to talk about “Living Without Free Will” that isn’t totally nonsensical.  Since this is very difficult to do, most of them end up importing terms from our free-will-infused world and using them in at least slightly different ways.  But those terms only make sense because of their association with free will — at least in our minds — and so when we look a little deeper and remove those associations, as seen above, it isn’t clear that those terms still make sense, let alone make sense in the way that the hard incompatibilists need them to in order to make their theory work.  Compatibilism has the issue of allowing for real decisions in a determined world, but at least it is far easier for it to reuse our everyday terms, and so make sense, than it is for hard incompatibilism.

And that’s the real issue I have with hard incompatibilism:  if we shake out its philosophy, it has to be a radical departure from how we view and talk about the world today (the positions that cleave closer to that are libertarianism and compatibilism).  But that makes it really difficult to talk about hard incompatibilism in a way that aligns with our current views of the world, and trying to do so often smuggles in the very concepts of free will that its proponents deny.  At the same time, hard incompatibilists don’t seem to have been able to come up with a new way of talking about these issues that lets us get into the right mindset to see how it would work.  So they themselves seem to want to appropriate the original terms, with all the attendant risks.  My impression, then, is that hard incompatibilism becomes nonsensical if we actually try to take it seriously, and the only way to make sense of it is to import the very concepts associated with free will that it’s trying to convince us are nonsensical and not real.  So it’s very difficult for me to see how hard incompatibilism is even a coherent concept, let alone the one that is most likely to be true.

Morality Without Free Will

July 23, 2021

Last time, while discussing “Living Without Free Will” by Derk Pereboom, I criticized the focus that he and others have on the problems for morality if we didn’t have free will, as if solving the issues around morality would solve the problems that we seem to face if we decide that there is no such thing as free will.  This chapter doubles down on that, focusing on discussing whether notions of morality can be preserved even if we don’t have free will.  But as I’ve already argued, the problems for morality follow from the problems a lack of free will would introduce in general, and thus even if you manage to preserve some notion of morality given determinism, that wouldn’t mean that you’ve actually shown that the lack of free will itself isn’t problematic.  And I don’t think that you can actually do that anyway.

The key issue here is one that Pereboom raises:  the idea that ought implies can.  He somewhat dismisses this as maybe not really being a problem or a real defining trait for morality, but when we unpack what it really means in general — and not just for morality — we can see why it is necessary.  One of the key things we use morality for is to regulate behaviour, and it does so by telling us what we ought to do in order to be moral.  From that, it seems obvious why ought must imply can if the ought is going to have any meaning whatsoever: you cannot say that someone ought to have done something other than what they did if they couldn’t have done that thing.  So, for example, if we want to say that someone ought to have dove into the water to save that drowning child, that would be hollow if they couldn’t swim and so would have been unable to save that child anyway.  We cannot morally judge someone for not doing something that they couldn’t reasonably do.  If determinism removes free will and so removes choice from us, then we can’t do anything other than what we do.  So there can be no meaningful “ought” statement that is anything other than what we were determined to do, because we couldn’t do anything else.  Someone may, then, be able to say “You ought to have done X instead of Y!”, but if determinism is true then that is a statement we need not care about, since there was no way for us to actually have done X instead of Y.

To be fair, a lot of Pereboom’s defenses here are more along the lines that we might be able to come up with some meaningful notion of morality even if we don’t have free will.  So even if we couldn’t really hold people morally responsible for their actions, and couldn’t hold them morally praise- or blameworthy for them, maybe we could still have some notion of morality that makes talking about it meaningful and even potentially useful.  The problem with this approach is that the only way I can see to do this is to talk about morality independent of moral agents, which would undermine morality as we see it.  What we could talk about is broad principles, such as saying that murder is wrong, and perhaps we could judge specific actions by saying that what that person did is murder, murder is immoral, so what that person did was immoral.  But we couldn’t assign that immorality to the agent without violating ought implies can, or without being able to blame or praise them morally for their action.  So what we’d have to understand is that saying “What that person did was immoral” is not using “person” in the sense of a moral agent, as we would expect it to be used there, but is instead using “person” more similarly to how we’d talk about a rock or a computer: as an entity that is taking the action but isn’t really responsible for it in any strong way.  They obviously wouldn’t have chosen to take that action, and so we aren’t talking about their choice or their reasons or their deliberations.  We are literally simply saying that the person there is the object that did that, and not the agent that did that.  The problem with that, though, is that it makes no sense for us to say that something an object did is immoral.  If a rock is picked up by the wind and breaks a window, it makes no sense for us to claim that the rock did something immoral, or that the wind threw a rock at a window and broke it and so did something immoral.
They aren’t agents, have no concept of morality, and don’t make choices.  So what they do cannot be called immoral.  And yet, this seems to be pretty much the sort of thing that hard determinism makes us into.  So if they cannot be considered immoral for doing things like that, then it doesn’t seem like we can be considered immoral for them either.

Compatibilists, or hard determinists inspired by compatibilists, can argue that those things don’t have any kind of consciousness at all, but we do, and so we can be held to higher standards than simple objects can, just as we might be able to argue that an advanced AI would still be deterministic and yet could still be held responsible for its decisions in a stronger or different way than a calculator can.  This doesn’t actually help that much, because we already distinguish between things that have consciousness and things that don’t.  A lion, for example, may kill a human, but we don’t consider that murder, nor do we consider the lion immoral for doing so, even though we’d consider it murder and the person immoral if a human did it.  The reason we don’t consider the lion immoral is that we don’t consider the lion a moral agent, despite it having at least some agency.  It’s incapable of understanding morality and so incapable of acting for moral reasons, and so cannot be judged immoral for what it does; we, on the other hand, are seen as able to understand morality and act for moral reasons, and so we can be.  So adding consciousness doesn’t get them out of the problem.  We need a special kind of consciousness to get what we consider to be a meaningful morality.

This opens up a potential way out, as they can argue that agency isn’t what’s important, but rather morality: what the entity needs is the ability to comprehend the somewhat abstract notion of morality and reason on the basis of it, and so it doesn’t have to have agency per se.  The issue is that we also have examples where we don’t consider an action immoral even if the entity seems to understand morality but doesn’t seem to have the proper agency.  Take the classic kleptomaniac example.  What they do really is stealing, and they clearly seem to understand morality, since they often feel morally guilty for doing the action, but we don’t consider their stealing to be immoral because they don’t seem to have the proper agency.  They don’t really choose to steal, but instead have an irresistible compulsion to steal.  Which brings us back to “ought implies can”:  they can’t do otherwise, and so it can’t be said that they ought to do otherwise, so it is meaningless to say that what they did was immoral.  Not only does that not fit our existing concept of morality, but haranguing someone and punishing them for not doing what they couldn’t do really makes no sense, nor should they actually feel guilty for not doing what they, again, couldn’t actually do.  What purpose does morality have in that case, then?

The key thing is that morality is critically driven by moral agency.  Moral agency is a combination of two things.  The first is the ability to understand what morality is, assign value to it, and classify statements and reasons and actions as moral or immoral.  This is what the lion lacks.  The other is the agency part, which is the ability to make choices and take actions according to those moral considerations and reasons.  What hard determinism at least risks us losing is the ability to make choices or act for reasons.  But that’s not unique to morality, nor is there some special type of agency required there that we might be able to work around.  No, the same concerns about moral agency apply to, say, rational agency.  If I can’t act for reasons that I recognize as moral and shift my behaviour based on my classification of them, then it looks like I can’t act for reasons that I recognize as rational and shift my behaviour based on my classification of them either.  The problem is with being able to act on my mental and conscious classifications at all, not with any specific classifications that I might have.  So if I can’t be said to act rationally, then I can’t be said to act morally either, and if we could preserve a notion of agency that would allow for rational action, then it should allow for moral action as well, at least from the perspective of agency.  So the agency problem equally hits pretty much all of our actions, and so losing moral agency is a side effect of losing agency in general, not of losing some specific sort of agency needed for morality.

This is where the compatibilists do have an advantage, because their model is trying to preserve some meaningful notion of agency that can save all of these functions, and so they aren’t locked into arguing over different notions of morality that might save things.  They can try to save agency in general and so sidestep all of these issues.  The problem with this chapter is that it spends too much time trying to come up with different notions of morality while missing that agency in general is the overarching issue here.  Agency in general is crucial enough to morality — and other things — that unless you save it you will never be able to come up with a notion of morality that looks at all like the one we have and can be used the way we use it, and so you will never be able to come up with a notion of morality that retains any meaning while aligning with the strict hard deterministic notion of agency.  Thus, we need to massage agency, not morality.  And if you do that, then you pretty much are a compatibilist and not a hard determinist anymore.

Free Will is Not Defined By Moral Responsibility

July 16, 2021

I’m still going through Derk Pereboom’s “Living Without Free Will” and am on the chapter on compatibilism.  As I’ve been doing in this series of posts, I’m not getting into that in detail, as I did that when I went through “Four Views on Free Will”.  Here that’s pretty much literally the case, as Pereboom’s chapter very much repeats what he said in his essay in that book, which I’ve already covered.  So what I’m going to talk about here is how I think we’ve gone astray by making free will out to be pretty much defined by how it creates or preserves moral responsibility.  The problem is that this leads to talking about free will and free choices only in the moral context, ignoring that the reason we think a lack of free will would eliminate morality is not some special moral consequence, but instead that it would eliminate any real responsibility we could have for our actions and choices, which is necessary for morality to work.  We cannot be moral people if whether we act morally or immorally is completely out of our control.  For morality to work, it must be the case that I am responsible for what I do, at which point we can judge whether what I did was moral or immoral.  The modern free will debate often seems to be taking those two separate judgements as the same judgement.

Pereboom does this explicitly with his argument — referenced in the post linked above — aimed at compatibilism, where he tries to show that we would have to move from the first condition, where the action itself is completely controlled externally, through two other slightly different conditions, ending at physical determinism, arguing that since we had to consider all of the previous cases ones where we wouldn’t be held morally responsible, it had to be that way in the physical determinism case as well.  This runs into the first problem with this approach: as I noted there, a number of moral theories would indeed be able to break that chain at some point before the end, and so whether he’s right or not would depend on which moral theory ends up being correct.  But discussing free will should not require studying moral theory in detail and coming up with an accepted moral theory.  He can — and did — argue that the argument relies on our intuitions, but those intuitions are being challenged by moral philosophy and so cannot be used to prove his case.  If we might still be morally responsible for our actions there, his argument fails, and if moral philosophy is saying that we might indeed still be morally responsible in some of those cases, then his argument is unfounded.  The only way to save it would be to solve the question of what the right moral theory is.

The second issue is that moral reasoning is a lot more complicated than this argument will allow for.  It is entirely possible that we wouldn’t hold someone morally responsible for doing something even though we would concede that they are responsible for it.  We may, for example, consider that while they are responsible for the action, and while under our moral theory they really ought to do otherwise, asking them not to do it might be more than we can demand of someone, making the action morally desirable but not morally mandatory.  For example, Utilitarianism has a rather famous problem where, by the strict calculation of utility, someone should save a doctor who is on the verge of finding a cure for cancer rather than their spouse if they can only save one, but that seems like too much of a sacrifice to demand of them.  By the same token, using Pereboom’s own example of a kleptomaniac as someone who has a strong compulsion to steal but who could choose not to steal with an incredible effort of will, it’s perfectly reasonable morally to say that demanding that of them is demanding too much.  While they technically can do otherwise, we might say that they can’t reasonably do so, and since their condition is not their own doing, nor something that they could change on their own, demanding that they overcome it might well be too much for them.

But what’s important here is that we aren’t saying that they didn’t really have a choice or that they aren’t really responsible for the actions they take.  The moral reasoning here is that it’s just too much to ask of them to demand that they act morally, which is especially clear in the spouse case.  It’s just too much to ask someone to sacrifice their spouse or children for a stranger, no matter how much “utility” that will produce.  They are still responsible for their actions and choices, but we don’t feel comfortable demanding that they make the “moral” choice there.  So, while arguments like Pereboom’s require the discussion to focus on responsibility, that’s not what the moral judgement, and even arguably the intuition, is using there, and so the arguments and intuitions aren’t tracking the thing Pereboom needs to make his argument.

This is also creeping into arguments about actions, as a number of people — Coel, who used to comment here, does this a lot — use this to argue that in such situations we don’t have free will or make free choices.  They then incorporate that idea into their theories of free will and argue that if we have significant external influence then we really don’t make free choices there.  One commonly cited example is someone being ordered to steal $5 by someone who has a gun to their head.  Morally speaking, this relies on the idea that we can’t reasonably ask a person in this situation to give up their life for that amount of money, which is why the Stoics could actually argue that stealing would still be immoral, because you would be choosing to take an immoral act, and the morality of your own act is not impacted by the immoral actions of others.  However, the argument here tries to deny that we would be responsible and insists that we don’t “really” have a choice.  But as the Stoics note, we do indeed have a choice there.  And even empirically, we can note that some people do actually make the other choice.  So since the choice can actually be made, even though it’s really hard to make, we are still responsible for it and still make a free choice when we decide not to choose that action.  Conflating moral demands with responsibility can thus lead to some dire consequences for both free will and morality.

Morality is the thing we most want or need to preserve by preserving some notion of free will.  But that does not mean that free will is defined by what is moral and what is not.  The loss of free will risks losing the sort of responsibility that we need to get morality off the ground, but that does not mean that every case where we cannot morally judge someone, even if the choice seems to have relevantly moral elements, follows from a loss of responsibility as opposed to some other moral consideration.

Libertarianism and the Evidence

July 9, 2021

So, in chapter 3 of Derk Pereboom’s “Living Without Free Will”, he talks about how agent-causal libertarian views on free will face challenges aligning with the empirical evidence.  As per usual, I’m not going to go into those arguments in detail — they’ve been done before — but am instead going to focus on the overarching attitude that tries to justify that, which is a very common one in all of the debates over consciousness:  ignoring the only real evidence anyone actually has in favour of supposedly objective scientific and empirical evidence that supposedly works against it.

At one point, Pereboom concludes that while he can’t actually use the empirical evidence to rule out agent-causal libertarianism, the onus is on the advocates of that position to provide evidence that it is correct.  Otherwise, if compatibilism fails, then hard determinism is the only option left for explaining our free will (or, in that case, the lack of it).  But what he has focused on in the chapter is discussions of neurons and the brain and things like that, to force the libertarian into positing some kind of non-physical entity to do the work.  What he has been mostly ignoring is the actual experience that makes us think that there’s anything that needs to be explained, which is our inner thoughts and deliberations and how they all seem to align perfectly with the choices we make and the actions we ultimately take.  Yes, there may be influences from things outside of us, but every day we make choices about what we do and want to do, and every day our actions align with that, and when they don’t align we can pretty much always find some kind of external interference that caused us to switch actions at the last minute.  We forgot about some of our wants.  We were reminded of something.  We unconsciously followed an old habit instead of doing the new thing we wanted to try.  And so on.  Our mental life and our physical actions in the world seem in perfect step, and our mental life suggests that we are really making choices that determine those physical actions.  How is it, then, that the actual experiences that we are using to define and talk about free will don’t count as strong evidence in favour of our position?  Shouldn’t it be the case that the hard determinists have to provide really, really strong evidence to show that those defining experiences are indeed misleading?

And it turns out that despite their assurances, they don’t have strong evidence.  When everything else that we encountered in the world seemed deterministic, they could make a case that since that was true of everything else, and since our minds not only had to interact with a physical brain that worked that way but could even be claimed to be nothing more than such a brain, we had good reason to think that the mind had to work that way, too.  Sure, that was shaky, but it at least raised the interesting conundrum of either having to argue that the mind was non-physical and yet somehow interacted with the physical world, or else being able to explain how the mind and the “physical” world interacted while having to give up any non-deterministic elements that we thought we had.  But as noted last time, when quantum interactions were discovered this completely killed that argument (although physicalists don’t seem to understand that).  The reason isn’t that we can use probabilistic models to preserve free will — they are right that randomness doesn’t help either — but that we’ve shown that we can indeed have different models in the physical world, or the world we consider physical.  Free will, then, could easily be built out of some kind of intensional model, and lots of the issues that non-physical minds are posited to solve are indeed ones where intensionality is required.  That some things, like minds, might have to use this model in the same way that quantum things need to use the probabilistic model instead of the deterministic one is certainly a live option, and determinists haven’t found a credible deterministic explanation for why we have the experiences we have if those experiences don’t actually do what they look like they’re doing.
As I noted in posts here talking about Jerry Coyne’s objections to free will — which I’m not going to look up at the moment — how in the world did we evolve these sorts of elaborate internal decision-making models if they don’t actually do anything?  Evolution can only select for things if they themselves provide value or else get dragged along with something else that has value.  Unless these experiences are just inherently part of a brain like ours doing that sort of thing, those experiences need to have a purpose, and determinists can’t find a purpose for them if they are causally inert.  And we have no reason to think that these experiences are just inherently part of a brain like ours doing that sort of thing.

So physicalists and materialists are making a mistake by assuming that all the evidence we have, and all the important evidence, aligns with their idea.  Yes, we know that our experiences can indeed be illusory or mistaken.  When we stick a stick in water, it doesn’t really bend even though it looks like it bends.  But we have lots of good evidence to show us that it doesn’t really bend in water.  Determinists have not provided that sort of evidence for free will, and seem to be retreating to the outdated idea that the world itself refutes the concept rather than providing that solid evidence.  At a minimum, libertarians have good evidence for some sort of meaningful free will, and so this is a clash of evidence and interpretation, or an issue over what someone wants explained or considers the more important evidence, not a case where determinists have all the evidence on their side and the libertarians need to find some evidence to oppose them.

And this is reflected in the rather popular position of compatibilism.  Why is this position so popular?  Because it still takes our experiences of choices seriously and insists that, yes, they still have meaning and roughly mean what we think they mean.  The challenge for compatibilist positions is finding a way to reconcile those experiences with a deterministic physical world that doesn’t simply turn them into libertarians or determinists who have solved their respective dilemmas; they don’t challenge the physicalist notions like the libertarians do, but they also don’t dismiss our experiences like the hard determinists do.  If the position works, it’s the one with the fewest challenges.  However, it’s hampered by, well, having to actually solve those challenges first before it can come up with a credible theory.


July 2, 2021

So I’m continuing to read Derk Pereboom’s “Living Without Free Will”, and in the second chapter he examines coherence objections to libertarian free will.  This mostly focuses on event-causal libertarianism, which basically posits that a choice is free if there is a “free choice” in the causal chain, at least as I understand it.  The main objection to this ends up being that it’s difficult to see what sort of actual choice in the causal chain could be “free” in the right way to preserve things like moral responsibility.  If the “free” choice was caused, then wouldn’t it be just as deterministically caused as all the other events?  But if it isn’t caused, then it would seem to be random, and that doesn’t work.  So we’d need to have some kind of event that can be a cause in the right way and yet be uncaused in the right way.  Pereboom spends much of the chapter arguing that agent-causal libertarianism isn’t vulnerable to this sort of objection in the same way, but he argues that event-causal libertarianism cannot be made coherent given these objections.

I’m not going to go into that discussion in detail (and, in fact, that’s going to be consistent across all my posts on the book).  Instead, I’m going to take a point that came to me while reading it and talk about that.  The main thrust of all the arguments revolves around causation, talking about causes for events and uncaused free events and so on and so forth.  But as all this went on, I started to feel that there was something odd about all of those discussions of causes.  It seemed to me that there might even be a sort of equivocation going on about cause, where the coherence argument ends up using cause in a very deterministic way when the sort of thing that would cause a free choice wouldn’t be that sort of thing.  Now, Pereboom could argue that this only works for agent-causal libertarianism, since the agent would be the sort of thing that could itself be a different type of cause, while event-causal libertarianism needs the free choice to be the different type of cause, at which point we need to ask what causes it.  That’s valid, but it’s not really my point here.  My point is that we seem to get stuck talking about cause in a way that doesn’t really apply to what we think free choices would be.  We seem to be working with notions of cause that aren’t the sort of cause involved with free choices, and from that we generate all sorts of problems for free will.  But maybe what we’re talking about is a different type of causation altogether.

The key thing that is missing in a lot of these sorts of discussions is the special nature of consciousness and free choices.  How they differ from both the seemingly deterministic macro level and the probabilistic quantum level is that their causes fundamentally depend on meaning.  For both consciousness and free choices, the outcomes are crucially determined not by the symbols themselves, or by simply reacting to symbols, but instead by what those symbols actually mean.  So for free choices it’s never going to be the case that we simply experience something and always react the exact same way to that situation.  And yet it won’t be simply probabilistic or random either.  What we should be able to do in all of these cases is appeal to the reasons the person has and the meaning of the things involved in the choice, and determine from that what the outcome of the choice will be.  If we know all the reasons and all the meanings — and there can be a lot of these in one person, so that can get very complex — we should be able to determine, to more than a simple probability, what choice will be made.  But contrary to views like behaviourism, we aren’t going to be able to do that without critically appealing to reasons and meaning.

So perhaps there is a third sort of causation that we can describe as intensional.  We have the strict physical level, where causation is deterministic; the quantum level, where causation is probabilistic; and the conscious level, where causation is intensional.  And the key here is that before we had the quantum level, determinists could insist that such a thing wasn’t possible because everything was deterministic.  But once we’ve noted that a different sort of causation might apply at the quantum level, the door is open for other sorts of causation in other areas, as long as we can properly separate those areas from the others.  And all meaning-aware cases are easily separated from both physical and quantum phenomena by the precise fact that they are meaning-aware.

So perhaps we can save event-causal libertarianism by noting that free choice events are a special sort of cause: a meaning-aware one.  And if they are meaning-aware, then they are “free” in the way we want them to be free:  produced and explained by intensional factors as opposed to deterministic or probabilistic ones.  All we need to do, then, is recognize that the sort of causation involved in free choices is neither of the other two sorts, and so if we try to use the other sorts of causation to describe it we are going to get things wrong, generating inconsistencies and problems that aren’t really there.

I also wonder if this might tie into Feser’s criticism of modern science and how it rejects Aristotelian causation.  Aristotle’s view focused heavily on having different types of causes and explanations that are used in different cases, and Feser argues that a flaw in modern science is ignoring all of those to reduce everything down to direct causation.  We know that we can do interesting things by considering a different type of cause, such as “structuring” causes, where we can say that the reason a specific thing happens is that something set things up that way (think of explaining why dominoes fall in a certain pattern:  yes, it’s because the first domino was knocked over, but if something hadn’t set them up that way we wouldn’t have gotten that pattern).  While I’m not an expert on Aristotelian causation and am not about to dig through it to see if one of those causes could resolve the issue, perhaps expanding our notions of cause could reveal a sort of cause that free will could be using that would avoid the issues here.  Maybe we are making the mistake of only talking about direct causation (I believe that’s the “efficient” cause for Aristotle) when the appropriate interaction is governed by a different type of cause, one that allows for the appropriate responsibility without contradicting the physical causes related to the actions that are ultimately taken.

Choice and Outcomes

June 25, 2021

So I’m reading “Living Without Free Will” by Derk Pereboom.  I’m only two chapters in, and so far it’s not really talking at all about how to live without free will, or even showing that we don’t have it, but instead is basically summarizing (and somewhat attempting to refute) the competing views on free will, and so is covering a lot of the points found in “Four Views on Free Will”, usually using the same people as references.  So there isn’t really anything new yet.  However, while reading the chapters some new things occurred to me, and so I’m going to comment on them.  The first, from the first chapter, is about Frankfurt examples and how they demonstrate what choice really means to us.

To refresh everyone’s memory, Frankfurt examples are examples aimed against the idea that free will requires alternate possibilities:  they invent cases where there aren’t any actual alternate possibilities, yet we still think that the person chose to do what they did anyway.  In general, these cases tend to be ones where someone is allowed to work through the entire decision-making process, but if that process produces the “wrong” decision some external force intervenes and switches both the choice and the outcome to the “right” one.  So, obviously, if they chose the wrong one that wouldn’t be a free choice, but surely if they made the right choice the first time around they made a free choice, right?  So this purportedly proves that free will does not rely on having alternative possibilities or on being able to do otherwise.  And there are lots and lots of attempts to save that view from the examples, along with counter-examples, and so on and so forth.

What struck me while reading this time is that the examples are really proving something that was always obvious but that our language was somewhat hiding.  Free will was always about choices, not necessarily about outcomes.  So what we want is the ability to choose otherwise, even if we can’t or couldn’t implement that choice.  The reason this is obvious is that we have always had the possibility of choosing to do something and yet discovering, as we try to implement that choice, that it can’t be done, or of failing to implement it, and we don’t see that as striking at free will.  If we choose to do something and then discover as we try to do it that we can’t, we don’t think that our original choice was invalid or not a free choice.  Instead, we usually think that, at the very least, we decided to try to do something but ultimately ended up being unable to do it.

Let me return to my standard example of this:  someone choosing between Sloppy Joes and poutine for lunch.  They hem and haw and pore over the menu and finally decide to order Sloppy Joes.  And then the waitress informs them that the cafeteria has run out of Sloppy Joes, so they can’t have that for lunch.  We clearly wouldn’t think that their entire decision-making process was invalidated by that.  Their decision-making process did indeed settle on having Sloppy Joes for lunch.  But when they tried to actually implement the decision, they were prevented from doing so by the fact that there weren’t any left.  So they had to have the poutine.  If those were the only two choices, then there was only one possible outcome:  they were going to have poutine for lunch (putting aside their leaving and going somewhere else for lunch, but let’s ignore that for now).  But the decision-making process still proceeded and came to a conclusion, and nothing that happened later can invalidate that.  If it was a free choice, it remained a free choice even if the implementation of the choice failed.

This also carries over to the more classic example from John Locke, of a person staying in a room that is locked, so that they can’t do anything else but stay in that room.  If they are unaware of that and never try to leave, can we say that they are freely choosing to stay in the room?  The same analysis applies.  They are free to decide to stay in the room or to leave it, but the only choice that they can successfully carry out is to stay in the room.  As soon as they try to leave the room, they will discover that they can’t.  But they surely can make a free choice to stay in the room (if they are unaware that they can’t leave it).  And, in fact, I’d argue that they can make a free choice to leave the room, right up until they discover that they actually can’t.

The reason this might seem confusing is how we talk about our choices when they fail.  Let’s return to my lunch example.  The person is likely to describe it later by saying that they tried to choose Sloppy Joes, but the cafeteria was out of them, so they chose the poutine instead.  This phrasing implies that not being able to implement their choice impacted their actual choice.  Instead of saying that they chose, failed, and then chose again (or had no choice), they say that they would have chosen X but instead had to choose Y.  And this is actually not an unreasonable way of thinking about it, since in general if we are aware that an option isn’t available we won’t choose it, even while acknowledging that if it were available we would have chosen it.  So if the waitress had told the person that they were out of Sloppy Joes before the person made their decision, that would have been a perfectly good description of what happened.

However, the person made their decision before knowing that they couldn’t implement it, and only after discovering that did they switch to another decision.  So they made a free decision to try to do something but failed to implement it.  And that’s how we really should describe it:  I chose to try to do X, but I failed to do X, so in response I chose to do Y.  This, then, applies to the locked room as well.  The person in the locked room chose to try to leave the room, but failed to do so because the door was locked, and so had to stay in the room instead.  Our common language tends to bind choices to outcomes because the main purpose of a choice is to generate actions that attempt to produce outcomes, but in the context of free will the really important thing is making the choice, not implementing it to produce a specific outcome.  After all, if someone tried to do something but set off a sequence of events that by happenstance produced a different outcome, we wouldn’t say that they chose that outcome, but instead that they chose to try to produce one outcome and accidentally produced another.  We acknowledge that we can choose to try to produce an outcome and ultimately not succeed at producing it.
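The structure being described here, where the decision-making process completes first and only a later implementation failure prompts a second decision, can be sketched as a toy program.  This is purely my own illustration (all names and functions here are invented for the sketch, not drawn from Pereboom or anyone else):  the record of choices made is kept separate from the outcome, so a failed implementation never erases the original free choice.

```python
# Toy sketch: a decision-making process whose choices are recorded
# independently of whether implementing them succeeds.

def decide(options, preference):
    """The decision-making process: settle on the preferred option if offered."""
    return preference if preference in options else options[0]

def attempt(choice, available):
    """Try to implement the choice; it can fail for purely external reasons."""
    return choice if choice in available else None

def lunch(menu, available, preference):
    history = []                    # every choice actually made
    remaining = list(menu)
    while remaining:
        choice = decide(remaining, preference)
        history.append(choice)      # the choice happened, whatever follows
        outcome = attempt(choice, available)
        if outcome is not None:
            return history, outcome
        remaining.remove(choice)    # choose again among what's left

# The diner freely settles on Sloppy Joes; the cafeteria is out of them,
# so the outcome is poutine -- but the record still shows both choices.
choices, outcome = lunch(["sloppy joes", "poutine"], {"poutine"}, "sloppy joes")
```

Running this, `choices` ends up as `["sloppy joes", "poutine"]` while `outcome` is `"poutine"`:  the single possible outcome, reached through two distinct choices, which is exactly the separation the post argues our everyday language papers over.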

I submit that the Frankfurt examples — or, at least, their damage to the “alternate possibilities” idea — are all examples of that confusion, where we focus overmuch on there being alternate outcomes when what we really care about are alternate possible choices.  For free will, we want to be able to choose to do otherwise, even if we can’t actually successfully do otherwise.  Tunneling down to specific outcomes and possibilities only confuses the issue and moves us away from focusing on choice and decision-making processes and towards what can happen in the world, which is not at all what we wanted or what free will is really about.