Bear and Bloom’s Experiment on Free Will … and Why it Fails

So, from Jerry Coyne at “Why Evolution is True” comes another experiment that he, and others, seem to think supports hard determinism: the view that we don’t really have any kind of free will at all, not even the sort of free will that people like Dan Dennett think is worth wanting. Coyne describes the experiment thusly:

The first thing the authors did was expose the subjects (who had been trained) to five randomly-placed circles on a computer screen, asking them to choose one circle quickly. Then, after intervals of time ranging from 50 to 1000 milliseconds (0.05 to 1 second), the computer randomly turned one of the circles red.

The subjects were then asked if their chosen circle turned red. They had three choices: “yes”, “no” and “I didn’t have time to choose before the circle turned red”, all indicated by pressing one of three keys on a keyboard.

Without any “postdictive bias” of the kind I described above, one would expect “yes” to be answered about 20% of the time when subjects reported that they did make a choice, because the circle that turned red was one of five chosen randomly by the computer. Instead, regardless of the interval before the circle turned red, the probability that you said “yes, my chosen circle turned red” was always higher than 20%. That’s shown in the graph below, which plots “probability of a yes answer” against the interval after which the circle turned red.

What’s important about this plot is not only that the probability was higher than 20%, which means that people were saying that their “choice” turned red more often than they should, but that that probability was higher when the interval between the start of the experiment and the circle’s turning red was shorter. That is, people’s bias—that they had “chosen” the circle that later turned red—was higher when they had less time to “make” a choice.
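To make that 20% baseline concrete, here is a minimal simulation sketch of the null model (my own illustration, not code from the paper, and the bias figure is an assumed number): with five circles, an honestly reported random choice matches the randomly reddened circle about a fifth of the time regardless of the interval, while even a modest tendency to report the red circle as the one “chosen” pushes the rate well above 20%.

```python
import random

def trial(n_circles=5, bias=0.0):
    """One trial of the null model: the subject picks a circle at random and
    the computer independently turns one red.  With probability `bias` the
    subject reports the red circle as their choice (postdictive rewriting)."""
    choice = random.randrange(n_circles)
    red = random.randrange(n_circles)
    if random.random() < bias:
        choice = red
    return choice == red

def yes_rate(n_trials=100_000, **kwargs):
    return sum(trial(**kwargs) for _ in range(n_trials)) / n_trials

print(yes_rate())            # ~0.20: the unbiased baseline
print(yes_rate(bias=0.15))   # ~0.32: a modest bias inflates the "yes" rate
```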

Now, the experimenters thought of a potential problem with this — although you should be able to come up with some problems with it beyond that one — and tried to fix it:

The authors thought of one problem with the experiment above. If the subjects were confused about whether they had chosen the circle that turned red, they might simply randomly press the “yes” or “no” button. That would drive the “yes” answers, expected to be 20%, towards 50%, giving the higher-than-expected “yes” rate shown above.

To deal with this, they used an experiment in which they showed TWO randomly positioned, randomly colored circles on a screen, with the two colors chosen from an array of six. They told the subjects to choose one color. They then added a third circle between the two, whose color was randomly chosen from the two colors initially displayed. And, as in the five-circle experiment, the third circle appeared at intervals ranging between 0.05 and 1 second. This way, a random press of “yes” (“I chose the right color”) or “no” (“I chose the wrong color”) due to confusion would not bias the results. With only two possible colors, a random press would just push the probabilities of “yes” and “no” closer to 50%, which is what they should be anyway.
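The arithmetic behind that control is worth spelling out: if some fraction of the responses are pure coin-flip guesses, the observed “yes” rate is a mixture of the honest baseline and 50%, which inflates a 20% baseline but leaves a 50% baseline untouched. A quick sketch, using assumed numbers of my own rather than anything from the paper:

```python
def observed_yes_rate(base, guess_fraction):
    """Expected "yes" rate when a fraction of responses are 50/50 guesses
    and the rest honestly reflect the true match rate `base`."""
    return (1 - guess_fraction) * base + guess_fraction * 0.5

# Five-circle design: guessing alone drags the 20% baseline upward...
print(observed_yes_rate(0.20, 0.3))   # 0.29
# ...but in the two-colour design the honest baseline is already 50%,
# so guessing cannot manufacture an excess of "yes" answers.
print(observed_yes_rate(0.50, 0.3))   # 0.50
```

So any excess over 50% in the two-colour version has to come from something other than confused guessing, which is why it serves as the cleaner test.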

And again, the same bias was shown: subjects generally reported that they had chosen the same color as the circle that appeared later with a probability higher than 50%, as high as 63% at short time intervals. And again, the shorter the time interval, the greater the bias seen in the self-reports.

Supposedly, this shows something important:

What both of these experiments seem to show is that, as Bear wrote in the Scientific American piece, “Perhaps in the very moments that we experience a choice, our minds are rewriting history, fooling us into thinking that this choice—that was actually completed after its consequences were subconsciously perceived—was a choice that we had made all along.” The paper with Bloom cites earlier experiments that also support this result. We have to face the possibility, just as we now realize that choices can be made by the brain before we become conscious of them, that choices may actually be carried out before we become conscious of having made them; and yet that we feel that the sequence was the opposite of what really happened.

So, this might show that we make a decision and then trick ourselves into thinking that the choice we made was the one that matched the outcome, no matter what we actually chose. Put this way, it seems almost nonsensical: why would the mind go through such a convoluted process, giving us a conscious experience of making a specific decision based on our own deliberation, only to pull the rug out from under us later and rewrite our memory so that the result of that conscious decision-making looks like the exact opposite of what it was?

Fortunately, there are two other really big issues with the experiment. The first is that the key is the conscious recognition of when a choice was made. In all cases the experiment relies on the participants being certain that they actually made a choice, so that they can report the cases where the decision came only after the circle had changed colour or appeared, and any bias from that new stimulus can be screened out. But note that the time ranges are all incredibly small: the longest interval was one second, which leaves lots of chances for the person simply not to have decided before the stimulus kicks in. Ideally those cases would be eliminated, but making this sort of choice isn’t going to be all that binary a process. We’re picking at random, which is mostly subconscious anyway, and so we’re just going to react with a gut “That one!”. When the stimulus and the choice reaction come close together, it’s quite possible that people were on the cusp of deciding when the stimulus flashed, especially since we do at times seem able to react to a stimulus before we are consciously aware of it.

Given all of this, the more likely scenario, which they haven’t eliminated, is that people were often still making the decision when the stimulus kicked in, and the stimulus impacted their subconscious decision-making processes. To really eliminate this, you’d have to make them hit a button to lock in a choice, and only then have the new stimulus appear. But if they did this, it would almost certainly turn out that the effect wouldn’t appear. So at a minimum they need a better experiment, one that lets us ascertain that the choice was clearly and distinctly made before the stimulus appeared, and that the participants aren’t mistakenly reporting that it was.

The second big issue is the common one with all of these experiments: as noted above, they are testing decisions that are, in fact, mostly subconscious in the first place. We don’t reason out choosing something at random. But free will is all about the choices we make when we reason out decisions, not instinctive or gut reactions or random choices. So, for libertarians, we’re interested in being able to make choices for legitimate reasons, and these experiments test cases where we are told to choose something for no reason. There’s no reason to think that these experiments say anything interesting about those sorts of cases.

Ultimately, for hard determinists to make their case, they have to start getting into experiments that test the paradigmatic cases of what we think are free choices. These are much harder to test, but these controlled cases simply leave out everything that makes free choices free according to libertarians or even compatibilists, and so add little to the debate.
