Mistaken Experience as Evidence for Dualism

So, I’m currently reading “The Illusion of Conscious Will” by Daniel M. Wegner, where he’s trying to argue that our conscious will and our experience of it is an illusion and that all the real work is done at the neural level (roughly).  I’ll talk more about his book later, but here I want to talk about something that follows from his examinations.  In order to establish that our conscious will and experiences of volition are illusory, he tries to show cases where they come apart.  He references the Libet experiments, which are not very good examples, but in the next chapter he talks less about cases where we are trying to map neural correlates with timers and more about cases where our experiences would claim that we are making a volitional choice when we really aren’t, or cases where we don’t think our will is at all involved when it actually is.  One of his examples is priming, which isn’t all that big a problem for free will or conscious experience.  But he also gives examples of cases where the brain is stimulated and we sometimes think the result was willed and sometimes think it wasn’t, depending on what was stimulated and even on the context of the stimulation.  There is also the case of schizophrenics who hear voices that they claim are external but that we can prove are being generated by them, and cases where a confederate can trick people into a forced choice without their seeing it that way.  And so on and so forth.  I don’t think these examples are necessarily good ones (although some are interesting), but I think they are good enough for us to grant that sometimes our experiences and our sense of what we ourselves are doing get fooled, reporting that we are doing things that we aren’t and that we aren’t doing things that we are.  So the question is:  how is this possible?

One common explanation of conscious experience is that conscious experience just is what happens when neurons fire in the right ways to produce the right functionalities.  But if this is the case, how could we ever get a discrepancy between what our neurons are doing and what we think our neurons are doing?  They are both produced by the exact same sequence in the brain.  Thus, the experience would just be our experience of what we are doing.  How could we get that wrong?  If we can, then it would mean that our experiences of what the brain is doing, and even of what it is receiving from the outside world, could be completely disconnected from what’s really there.  And we’d have no way to check, because every test we could make relies on our experiences being at least somewhat aligned with the outside world.  Even asking others to check our experiences has to assume a) that their experiences are not disconnected in the same way mine are and b) that I’m actually getting their reports properly.  So if we argue that experiences are just what we get when neurons do those things they do, but our experiences are reporting that the neurons are doing different things from what the neurons are actually doing, then the experiences from our neurons are not accurately representing what the neurons are doing.  Those experiences would then seem to be pretty much useless and meaningless and, worst of all, not necessarily reflective of anything, even the external world.  That’s a pretty bad outcome, so we probably want to avoid taking that tack unless we have no other choice.

This is where dualism can come in, because it provides a nice explanation for why our experiences can come apart from the neuron firings that are implementing the functionality:  they aren’t the same thing.  If we have a separate mind that interacts with the brain, then it’s going to be triggering parts of the brain and receiving feedback from various parts of it to determine what happened and what is going on, so that it can react accordingly.  But then we can see how they can come apart, because if the feedback from the brain is incomplete or misleading, or if the mind has to aggregate feedback from a number of sources and reason out what is actually happening, then it can interpret what it is receiving incorrectly.  However, in order for this arrangement to be at all useful, it cannot be completely disconnected.  Most of the time, the feedback and the mechanisms that interpret it would do so correctly and, most importantly, in a useful way.  So if we posit that the conscious experience part and the neural firing part are separate objects communicating with each other, we can explain why they aren’t always in agreement without risking having to conclude that conscious experiences might be, or seem to be, entirely unreliable.

Now, what I’m sure most materialists are waiting to scream about is that we don’t need a separate or immaterial mind for this to happen.  We can separate the interpretive part from the action-producing part simply by introducing modules (or perhaps reintroducing them, since the idea seems to be out of favour at the moment despite there being no real neurological reason to doubt it and much neurological reason to think it’s true).  One set of neural firings produces the experiences based on the feedback it is getting from the brain overall, while the other actually produces the actions.  This would give us two separate “objects” in the brain so that they can come apart at crucial times, but in theory it also allows us to trust those experiences, because they would need to be accurate for the system to have any use.  All we have to do is find a way for that module to causally impact what we do, and we’re golden (note that dualism would presume that a dualistic mind can causally impact even the original causal chains directly while they’re happening, despite their being separate objects).

As you might have noticed from my little aside above, that actually isn’t all that easy.  In order for these to remain separate objects, we’d have to be careful about allowing the experience module to causally impact the action-production module, because then we’d start to wonder how, if they are that intimately connected, they can come apart with respect to what is actually happening.  If the module is producing those neural connections, then how come it doesn’t know what those neural connections are doing?  It’s one thing if it thinks that it’s doing something and that thing doesn’t happen, or thinks that the firing is fully caused by its deliberative actions when an automatic process is what kicked it off, but how could it ever be fooled into thinking that it hadn’t done something when it actually did?  It would surely have access to what it was doing.  And the explanation for how it could be fooled into thinking it did something when it didn’t is, in fact, that it notes that it was activated and notes that the effect was produced, even though it wasn’t the actual cause of those neural firings.  So if it can interact with the action-producing parts, it’s not likely doing the entire thing itself.  But then what is it doing?

One suggestion is that what we have is some kind of self-monitoring system (I think this is pretty much Dennett’s view of what conscious experience is) that monitors and interprets what we do and what the various areas of our brain are doing, and then produces an experience that reflects this.  This would work pretty well at explaining why things come apart, but it’s not a very good explanation for why it gets things wrong.  If this is an important module in the brain that we need for important things, it seems to be a pretty weak process if things can come apart that drastically.  Yes, we can argue that it seems to work well enough most of the time, but it has only one job, and it gets it wrong a significant amount of the time.  If what it’s doing is sufficiently important, then shouldn’t it do that better?  And if it isn’t important and so these actions don’t matter (as people who think free will is an illusion would have to assert), then why do we have it?  So we’d still need to find a purpose for it, and if we did we’d have to wonder why it can be tricked so relatively easily.

A perhaps better suggestion is that the main purpose of this module isn’t to monitor us, but instead to monitor things in the outside world, and it only monitors us or interprets what we’re doing as a side effect of doing that.  This actually seems pretty natural, since we would definitely need to filter out the things we’re doing from the things that are done externally to us.  And since this would just be providing us with information that we can use to feed into our actions later, we don’t need it to directly cause actions; we just need it to file away information that impacts later decisions.  As long as it does this better than something that can’t do it at all, it’s useful enough to be selected for by evolution, so some unimportant errors can be explained, and perhaps even explained by appealing to which errors are more useful than insisting on always getting it right.  This has the advantage over self-monitoring that we get a set and simple purpose for what this module is doing while still being able to explain its not being perfectly accurate.

But wait.  Do we really need it to have a strong causal role?  Can’t it just be something that happens but has no causal impact?  Consider that any experience-producing module is going to be a pretty expensive one, and that there must be something special about these modules that makes them produce experiences, since very important parts of the brain don’t seem to produce experiences.  So it can’t just be assembled from neurons and happen to produce experiences, or at least that’s not a very credible story.  Thus, we need both a) a purpose for this specific module and b) a reason why it has experiences, either because having experiences is required for it to do what it does or because a module assembled in that way will just happen to have them.  And given that much if not most of the brain won’t produce experiences, it’s the former that seems the more likely, not the latter.  So we’re really going to need both the module and the experiences to have a strong causal impact to explain the presence of these experience-producing modules.

Which cycles us back to dualism, because dualism has an advantage here:  it can argue that the primary defining quality of a mind is having experiences, and so experiences are going to be a fundamental part of everything the mind does, including these interpretations.  The dualistic mind works through experiences and has that as its fundamental nature, so everything it does, including any causal connections it has with the brain, is going to involve experiences.  So it wouldn’t need to explain why it produces experiences when the brain doesn’t.  The very nature of the connection is that the mind has the experiences and the brain doesn’t.  Yes, it would need to explain how it can cause anything at all, but it doesn’t have to explain how one part of the brain produces experiences while another part doesn’t.

So if our experiences and our neurally-produced actions can come apart as Wegner suggests, then that suggests that we have separate objects involved here.  And dualism is based on the idea that there are two separate objects involved in such things.  That gives it a huge advantage in explaining what those separate objects actually are.

