Nailing the Third-Person Science of Consciousness to the Wall.
In “Sweet Dreams”, Dan Dennett waxes eloquent about how he’s sure that the problems raised by qualia freaks and by those who like zombie examples when discussing consciousness will be seen as quaint and misguided in 100 years, when the third-person view of consciousness that he advocates will have settled all the questions that seemed so mysterious to us.
I smells me a challenge.
So, here, I’m going to go through and outline just how no sort of third-person science of consciousness is ever going to be able to settle what qualia freaks like me are interested in, and so it’s never going to be able to settle what consciousness is for us. I’ll walk through functionalism, romp through neurons, and ultimately stroll with zombies, cyborgs and Martians as I both nail views like Dennett’s to the wall, and then put nails into their coffin lids, before being magnanimous in victory and letting them have their view of consciousness. And I’ll do all of this while holding tightly to the hand of phenomenal experience because, well, I’m a qualia freak. What did you think I was going to talk about?
After a set-up like that, how can I fail?
So, first up, functionalism. Functionalism about mind in its basic form is essentially this: consciousness is as consciousness does. It’s similar to behaviourism in that it tracks what the organism is doing in judging consciousness and doesn’t care much about the underlying implementation. As such, it’s pretty popular with people who think that a computer can be intelligent or conscious, because you don’t need neurons to get consciousness; you just need to get it acting the right way. Now, while there’s nothing in functionalism that says that internal functions can’t matter, if we’re going to have a third-person view we’re not going to be able to rely on looking at what’s going on inside the process; it has to be things that are observable from the third (or perhaps second, as Dennett himself says) person view. Things that are only accessible from the first-person view are not going to work for Dennett’s preferred viewpoint. Which is right, because it’s that first-person view that people like Nagel, Chalmers, and myself are worried about. So if he can explain everything without having to go there, our concerns over the mysteries of the first-person view vanish, and turn out to have been just misleading sidebars.
So, what ultimately will have to happen is that this science will have to be able to explain and differentiate all interesting things about consciousness — which, for now, includes phenomenal experience — without having to ever look at the view from inside. If this science can track down what phenomenal experiences someone is having simply by looking at what they say and do, then it will have succeeded, and all the qualia freaks will simply have to slink off into the corner to cry.
So, the obvious first step is to see if that can be done.
Before starting, I need to introduce another concept, that I first described here:
The “phenomenal-behavioural knave”, or — as I’d like to shorten it now for more punch — the phenomenal knave. What’s a phenomenal knave? Well, it’s a creature/thing/person that has phenomenal experiences, but always misreports them, where misreporting is not just the simple “It says it sees green when it sees red” but carries on all the way. The phenomenal knave, when asked about its experiences, will claim that they are different from what it actually experiences, but will also act as if they are what it reports them to be, and will do so consistently. In short, you won’t be able to tell that it isn’t really seeing green when it looks at a red object, because all of its visible behaviour will be consistent with it seeing green.
This was, of course, inspired by the classic knight/knave logic problems. And from here, we can establish the possibility of such a phenomenal knave. What we’d have — putting it all back into that structure — are phenomenal knights that cannot lie about their phenomenal experiences, phenomenal knaves that cannot tell the truth about their phenomenal experiences, and everyone else who can either respond — and act — truthfully or deceitfully about their phenomenal experiences. So, can we have phenomenal knaves? Well, can we lie about our experiences? Yes? Then why couldn’t you have someone who simply couldn’t do anything but lie about their experiences? So you can’t reject it on the grounds of implausibility; even if it has never happened, it certainly could.
So, now, in order to get at functionalism I’m going to compare two completely different types of people. The first are people with red-green inversion (RGIs), who legitimately see red as green and green as red and act accordingly. The second are a special type of phenomenal knave, who only misreport that they see red as green and green as red but see them both normally. Call them PK-RGIs.
So, we can see that there is a critical difference between RGIs and PK-RGIs, and that difference is entirely at the phenomenal level. RGIs actually have a different phenomenal experience than normal people and than PK-RGIs. PK-RGIs actually experience the world exactly the same way as normal people do, but they don’t act that way.
So, at the third-person observable functional level of behaviour, what can we say about RGIs and PK-RGIs? Can we capture this critical difference? Well, it turns out that we can’t. They act the same way. Every single test that you can run on RGIs and PK-RGIs will have them acting precisely the same, by definition. Functionally, then, they’re identical, at least from the outside view. Functionalism, then, would fail to capture this critical distinction: the distinction between someone who is really having an inverted red-green experience and someone who is not having an inverted red-green experience but is acting consistently — and unwillingly — as if they are.
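The point can be made concrete with a toy sketch (the class and method names here are my own illustrative inventions, not anything from the philosophical literature): the two agents differ only in a method that no external test can call.

```python
# Illustrative sketch: an RGI and a PK-RGI differ in their internal
# "experience" but produce identical observable reports and behaviour,
# so no third-person test can distinguish them.

def invert(colour):
    """Swap red and green; leave every other colour alone."""
    return {"red": "green", "green": "red"}.get(colour, colour)

class RGI:
    """Genuinely experiences red as green and green as red; reports honestly."""
    def experience(self, stimulus):       # first-person level (inverted)
        return invert(stimulus)
    def report(self, stimulus):           # third-person level
        return self.experience(stimulus)  # honest report of an inverted experience

class PKRGI:
    """Experiences colours normally, but systematically misreports red/green."""
    def experience(self, stimulus):       # first-person level (normal)
        return stimulus
    def report(self, stimulus):           # third-person level
        return invert(self.experience(stimulus))  # consistent misreport

# Every behavioural test sees identical outputs from the two agents...
for stimulus in ["red", "green", "blue"]:
    assert RGI().report(stimulus) == PKRGI().report(stimulus)

# ...even though their first-person experiences differ on red and green.
assert RGI().experience("red") != PKRGI().experience("red")
```

The only asymmetry between the two classes sits in `experience`, which is exactly the method a third-person science is barred from inspecting.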
Note that here we can escape the issue of “consciousness about consciousness”, since the internal perception of the experience is not being called into question. Internally, PK-RGIs very much know that they are seeing red as red and green as green. They just never reflect that in any of their external behaviour, and so we could only determine the difference by sharing in their first-person view, which is what the functionalist and the advocate of a third-person science don’t want to and, really, cannot rely on.
So, functionalism doesn’t work here. Is there any other third-person accessible perspective that can save the day? Well, yes, there is: neuroscience. What we can do is — in at least this case — break the “implementation-independence” of functionalism and look at the brain directly. The hope is that through the self-reports of normal people and RGIs, we can determine where in the brain the phenomenal experiences are being generated and where they’re being reported. Once that’s been settled, we can then happily point out what must be true: that at the physiological level, RGIs have issues in the phenomenal experience generation faculty while PK-RGIs are generating experiences like normals but have an error in their reporting faculty. This, then, would solve the problem, and we could tell the difference. So functionalism would fail, but functionalism plus neuroscience would succeed. And there was much rejoicing.
So, let’s reiterate where we are. In order to save the third-person science of consciousness from the challenge of the PK-RGI, the advocate of that position must retreat to neuroscience and identify where in the specific implementation phenomenal experiences are generated and where they are reported/acted on and only then can they identify this important difference about how PK-RGIs don’t actually experience things the way RGIs do, but experience things like normal people do. This is pretty much their only way out; functionalism won’t do it and nothing else can do it while remaining third-person. So, saved by neuroscience, right?
Well, not quite, because it opens up another set of interesting problems. In order to solve the problem they had to identify the specific mechanisms in the brain that generate phenomenal experiences, be that a specific module in the brain or a specific quality of neurons, or whatever. Thus, they can say that the RGI case is an example where that mechanism fails and note that in the PK-RGI case that mechanism is working fine, but it’s the downstream actions — things that occur after the experience is generated — that alter the behaviour of the PK-RGIs so that they look like RGIs. So we know what produces the experiences.
Now, enter the cyborg. Imagine that we take someone and replace whatever it is in the brain that produces the phenomenal experiences with a set of computerized mechanisms that don’t produce phenomenal experiences, but we hook it all up so that they take the same inputs, produce the same outputs, and hook into the non-phenomenal — and thus behavioural — aspects in the right way so that everything works as it did before. To forestall an objection, I’m not saying that computerized mechanisms can’t produce phenomenal experiences, just that these don’t. If we hook these up in the right way, it looks like we could have a person who in fact acts just as they did before the implantation of those computerized devices, but doesn’t have phenomenal experiences anymore because those devices don’t themselves produce them.
And here we can see a big problem: at this point, we have to be committed to epiphenomenalism, the idea that our phenomenal experiences don’t actually have any causal impact on our behaviour. We could have completely different ones — or not have any — and our behaviour wouldn’t change. If the above cyborg is possible, then epiphenomenalism follows. While some people might not have a problem with that — Jaegwon Kim for one — for most people this is slightly problematic.
So, the most immediate reply would be to deny that this is possible: since the phenomenal experience is missing, the behaviour cannot be the same, because the qualities of phenomenal experience have to matter in how we act. The problem is that how we tried to answer the PK-RGI case makes this unreasonable; PK-RGIs have a different reaction to the same phenomenal experience as others. One could claim that the mechanism at least needs phenomenal experience as an input, but at the neurological level that input would simply be neurons firing and neural connections made that cause firings in the behavioural faculty. If we had a module, it would be trivial to replace the phenomenal module with a computerized one that simply activates the connecting neurons without itself generating the actual experience. And if the mechanism is more distributed, or is a quality of the neurons themselves, at some point we still have to get to things that don’t themselves experience, even if those are just the nerves that directly move arms and legs and activate voice boxes. Unless one wants to assert that everything in the body produces phenomenal experiences, if you can identify in the brain what causes phenomenal experiences you can replace those things with something that doesn’t produce those experiences yet still hooks up in the right way to all the things that don’t do phenomenal experience but ultimately implement behaviour. This may not be a slam-dunk argument, but it seems to me difficult to imagine how this isn’t a consequence of a neural story.
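The module-replacement move above can be sketched in the same toy style (again, all names here are my own illustrative inventions): if downstream behaviour depends only on a module’s outputs, a stub with the same input/output mapping leaves behaviour unchanged.

```python
# Illustrative sketch: the cyborg swap as an interface replacement.
# Behaviour reads only the module's output signal, so a stub that
# reproduces the signal without "generating experience" is
# behaviourally indistinguishable from the original.

class PhenomenalModule:
    def process(self, stimulus):
        self.experienced = True           # stand-in for "generates an experience"
        return f"signal({stimulus})"      # the output downstream faculties consume

class ComputerizedStub:
    def process(self, stimulus):
        return f"signal({stimulus})"      # same outputs, no experience generated

def behave(module, stimulus):
    # Downstream (behavioural) faculties see only the module's output.
    return f"act-on-{module.process(stimulus)}"

# Swapping the module for the stub changes nothing observable.
assert behave(PhenomenalModule(), "red") == behave(ComputerizedStub(), "red")
```

The sketch assumes the experience-generating mechanism has a clean interface, which is exactly what identifying it neuroscientifically would give us.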
And establishing this is important because of the next — and more interesting and important — problem: by this, we have zombies. No, we don’t have physical zombies like Chalmers wanted; their physical make-up is, in fact, completely different. No, what we have are behavioural zombies, things that act exactly like they have phenomenal experiences in all ways — even to the level of consciousness about consciousness — but don’t have any at all. Taking the third-person route and trying to explain RGIs and PK-RGIs reveals that, yes, we have zombies.
But this doesn’t seem like a problem, does it? After all, we can look at the physiology that we discovered and detect when we have a zombie, because we can see that the physiology that’s supposed to produce phenomenal experiences is missing or impaired. So we can tell zombies from non-zombies at the third-person level. Putting aside the potential for epiphenomenalism, isn’t this exactly what people like Dennett have promised?
Well, it turns out that there’s a problem. Note that all of the discoveries of how phenomenal experience works in the brain are going to have to be based on self-reports and observations of behaviour, and our discussions of RGIs and PK-RGIs have revealed that that isn’t exactly safe; we could not distinguish them at the self-report and behavioural level. It turns out that this also holds at the zombie level; they act at the behavioural and self-report level just like non-zombies.
But who cares, right? After all, don’t we have the physiological differences to settle this? The problem here is that we did rely on the self-reports and behaviour to determine that for the “normal” physiology, but when confronted with the potential zombie we have to ask “Do they really lack all phenomenal experience, or are they just implementing it with a different physiology?”. After all, their self-reports and behaviour indicate that they do really have phenomenal experiences. If we judged based on that, we’d have to conclude that they have phenomenal experiences. Should we reject that just because their physiology is slightly different?
Okay, okay, it might seem reasonable to say that for humans the physiology would be close enough and clear enough that we could eliminate that. After all, if we know enough about the brain we ought to be able to tell when the mechanisms aren’t there and when they aren’t being replaced by something else. So, then, let’s turn to another case: the Martian.
Imagine, then, that Martians appear and have a completely different physiological structure for their minds than we do. If they have anything that could even remotely be considered a brain, it’s nothing like our brains. Yet they act as phenomenal as we do. The question is: from the third-person, could we tell whether they really have phenomenal experiences or if instead they are simply zombies?
Well, we couldn’t use their self-reports and behaviour, as we’ve already seen. And we couldn’t use neuroscience and physiology, because their physiology is completely different from ours. So, ultimately, we could not tell with the third-person science at our disposal whether these things have phenomenal experiences or are just zombies.
And to the extent that having real phenomenal experiences is required to be conscious, we couldn’t tell using any of the methods we have, or any that we can currently foresee, that they’re really conscious. We’d need the first-person viewpoint for that, which is precisely what people like Dennett wanted to deny. If determining that something is having phenomenal experiences is critically important for determining whether it is conscious, then, the first-person viewpoint is critical for determining whether something is conscious.
So this leads to one path out of the problem: deny that having actual phenomenal experiences is important for consciousness. You can take a tack from people like Andy Brook and argue that being conscious is just being aware, and being aware is just about having the right sort of representations, representations that let you act as if you are really seeing, say, red instead of green. The Martians, the zombies, the RGIs, the PK-RGIs, the cyborgs and all of us have those representations, as evidenced by our behaviour and self-reports. Since we do, we’re all conscious, and what phenomenal experiences we’re really having — if we’re having any at all — just don’t matter.
At this point, the qualia freaks will rightly cry foul. It seems that, for us, having phenomenal experiences is really, really important to what it means to be conscious. You don’t just get to dismiss it by definition or fiat, since that’s what all the qualia freaks think is the defining quality of consciousness, and it seems that most people in their everyday lives think of it that way as well. As an example, when I’m asleep and not dreaming I’m not experiencing anything and am not conscious. When I’m dreaming, it’s unclear if that really counts as conscious or not. But when I’m awake and walking around, I’m definitely experiencing and definitely conscious. And it would be hard to imagine that experiencing things doesn’t mean that you’re conscious.
So, ultimately, my claim there would be that phenomenal experience is not necessarily sufficient for consciousness, but it’s necessary. This could be countered with the claim that we’ve been wrong all along, and that phenomenal experience is sufficient for consciousness, but not necessary. If you’re having phenomenal experiences, you’re conscious, but you can be conscious without having them. That would explain why we think that phenomenal experience is necessary for consciousness, but are misled by the fact that for us they are generally always co-associated. The examples I’ve given, then, just support their claim about consciousness.
And here is where I get to be magnanimous: I accept this, at least for the sake of argument. Why? Because at this point, the qualia freak and the mysterian have already won. See, the objection to the third-person science from qualia freaks and mysterians is precisely that that third-person science will never be able to explain phenomenal experience, and that’s why consciousness will always be mysterious and resistant to third-person science. And here, the advocate of third-person science would be accepting that, yes, they cannot explain phenomenal experience, but that doesn’t make consciousness mysterious because you don’t need phenomenal experience to be conscious.
So here it becomes clear how the view gets nailed into its coffin and then nailed to the wall. First, we nail the third-person science into its coffin by proving that it can’t explain phenomenal experience. And then we nail it to the wall by forcing it to accept that when it comes to consciousness they aren’t talking about and can’t talk about phenomenal experience. Previous to this, you could look at their models and wonder where phenomenal experience came in, and there seemed to be an underlying presumption that if they got the behaviours and self-reports right, they’d have everything interesting about phenomenal experience, too. After all, that’s part of consciousness, right? But to them it can’t be if they want to explain consciousness. So, no, they ain’t getting phenomenal experience for free — or possibly at all.
So, the qualia freak wins. The opposition have to concede phenomenal experience to us and the first-person view, one way or another. And that, really, is all we wanted.