Functionalism and Eliminativism About Consciousness Are Incompatible

I was reading this post by Richard Carrier, which takes on someone who is, I guess, a bit of an eliminativist about qualia, at least, and so Carrier is criticizing his stance.  I don't think that Carrier's rebuttal is necessarily right, and I certainly don't think it's as right as Carrier thinks it is, but while reading it I noticed something:  Carrier ultimately describes both functionalism and neural eliminativism (the idea that we can reasonably reduce all talk of qualia to talk of neural patterns) as the moves we can make to explain qualia.  He does this by advocating the views of Daniel Dennett and the Churchlands.  The functionalist view says that if we are doing sufficiently advanced information processing then we are properly conscious and, in theory, even have qualia.  I believe that Dennett advocates this idea, if I'm remembering him correctly (a friend of his, Andrew Brook, advocates it to some extent, and the two didn't seem to disagree greatly about consciousness, so it seems reasonable to put Dennett in that broad category as well).  The other view is that qualia are simply the result of neural firings, and so are just what neurons do when they are doing the information processing that we rely on.  Because Carrier invokes both of them quite aggressively, as if each view on its own could refute the person he's talking to, it made me realize something:  the two views cannot actually be used that way, because they contradict each other on exactly the points you would need them to agree on in order to refute the arguments Carrier is using them to refute.

In hindsight, I've been fumbling around with this for quite a while now.  I've constantly criticized neural views for making it so that AIs could never be conscious, since they'd never have neurons.  And I've pointed out on a number of occasions that functionalist views are implementation independent, and so the details of neurons aren't really relevant to those views.  But I never really realized how much this makes the two views clash, especially since at first glance they do not seem to conflict with each other very much.  I realize now that they only look that way because, like Carrier, their proponents tend to use them at different levels of explanation or to answer different objections.  When we consider them in the same post, as Carrier has done here, it becomes obvious that the two of them do not work together.

Let me quote the part where Carrier references functionalism:

In other words, qualia are not an extra something that explain anything; they are, rather, the inevitable consequence of certain forms of information processing. I concur.

So what this implies is the functionalist view:  if you are doing certain advanced forms of information processing, then you'll have qualia.  If you make qualia a defining component of consciousness, and you argue that consciousness is, or is the consequence of, sufficiently advanced information processing (as Dennett does tend to do), then the clear implication is that if you are doing that sort of information processing you will have qualia, no matter how that information processing is implemented.  It doesn't matter how you get there; as soon as you get there you will be conscious, and on Carrier's view here you will then have qualia.
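Since this is a prose argument, take the following as a loose programming analogy of my own (all the names and code are invented purely for illustration, not anything Carrier or Dennett offers): implementation independence is the same idea as checking code against a behavioral specification.  If what matters is the function performed, then any implementation that performs it counts, and inspecting the internals is beside the point.

```python
# A loose analogy for the functionalist claim of implementation independence:
# one functional role ("sorts a list"), realized by two very different mechanisms.
# Everything here is an invented illustration, not anything from the post itself.

def sorted_by_comparison(items):
    """One realization: comparison-based insertion sort."""
    result = []
    for item in items:
        i = 0
        while i < len(result) and result[i] < item:
            i += 1
        result.insert(i, item)
    return result

def sorted_by_counting(items):
    """A very different realization: counting sort (non-negative integers only)."""
    counts = [0] * (max(items) + 1)
    for item in items:
        counts[item] += 1
    output = []
    for value, count in enumerate(counts):
        output.extend([value] * count)
    return output

def realizes_sorting(fn, test_inputs):
    """The functional criterion: check only behavior, never the internals."""
    return all(fn(list(xs)) == sorted(xs) for xs in test_inputs)

inputs = [[3, 1, 2], [5, 5, 0], [7]]
for fn in (sorted_by_comparison, sorted_by_counting):
    print(fn.__name__, realizes_sorting(fn, inputs))  # both print True
```

On the functionalist picture, attributing qualia works like `realizes_sorting`:  the criterion quantifies over behavior, so neurons versus silicon is as irrelevant as insertion sort versus counting sort.  The neural view, by analogy, would insist that only one substrate "really" sorts.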

So now let’s look at the neural view:

Indeed, in accord with his ignorance, perhaps Hacker might ignorantly blather on about how we could possibly know my desktop computer doesn’t experience these things as I do; at which he should be instructed to read up on the science of comparative neuroanatomy. My desktop computer has none of the corresponding hardware we know my brain requires to experience those things. We know a computer’s entire contents, and nowhere in that inventory is any experiential circuitry analogous to ours. Yet my computer can agilely handle the conceptual content of these numbers through countless renderings and computations. Perhaps that does feel like something to it; but it won’t be at all like what it feels like to me: our phenomenological circuitry is too radically different. My computer’s phenomenology couldn’t even be identical to that of a flat worm; and yet is surely far more distant from mine than a worm’s. And unless Hacker is going to profess a belief in magic, he cannot propose an effect can exist without a necessary and sufficient cause.

But the flat worm is not doing anything close to the complexity of information processing that the computer is, and yet Carrier is convinced that it has more phenomenology than the desktop computer.  Why?  The only reason he gives is that it presumably has something like a biological brain with something like neurons, whereas the desktop computer doesn't.  Thus, this summarizes the neural view of consciousness:  qualia, at least, are what the neurons are doing when they are doing that information processing.  The desktop computer doesn't have neurons and so can't be having qualia, no matter how complex its information processing is.  And he can't escape that by appealing to specific experiential circuitry that the desktop computer is missing, because that would be positing some kind of qualia module in the brain that generates the qualia, which would make talk of information processing irrelevant as well.  And ultimately, from the functionalist view, this would separate the generation of qualia from the functionality of information processing, and so the functionalist view would have to change from arguing that qualia are what happens when something processes information in the right way to arguing that qualia are what happens when the functionality of producing qualia is performed.  In short, under this view qualia would be a separate functionality that humans, at least, perform, and so qualia and consciousness would be separate from information processing.

And this shows why the two views seem compatible at first blush.  Functionalism is implementation independent, and so only talks about which functions we think follow from or are important for consciousness.  Since humans are our main example of an implementation of consciousness and/or qualia, talking about the properties of that implementation seems to make sense.  We don't have any other examples of an actual implementation to talk about, and so if we're going to talk about actual examples of functional consciousness, it's going to be the neural one.

And that works for the most part, until we decide to use one of the views to define consciousness, which is what both Dennett and the Churchlands are trying to do.  Since functionalism is implementation independent, it cannot allow the properties of the neurons to be critical to understanding qualia, nor will it accept the claim that something doesn't have qualia (or that we shouldn't think it has qualia) only because it doesn't have neurons.  Anything that does the right sort of information processing has to be conscious and has to have qualia.  On the other side, if we can reduce talk about consciousness to talk about neurons, as the Churchlands tend to do, then while a system might only have qualia while performing certain functions, it's the neural states that matter, and we won't really need to refer to those functions at all.  So as definitions, functionalism will say to ignore the neurons and talk about the functions, and neural eliminativism will say to ignore the functions and talk about the neurons.  It's only because we tend to talk about the functions and the neurons in response to different challenges that we don't normally see that they work at cross purposes to each other.

This causes major issues for Cognitive Science, because the two views recommend radically different research projects.  The functionalist view will say that neuroscience isn't that important, and so we should focus far more on fields like psychology in order to tease out the functions that are important to and matter for consciousness.  The neural view, obviously, will recommend the exact opposite:  that we work things out at the neural level and only refer to functions when absolutely necessary.  And while we can somewhat compromise by using both fields (and Cognitive Science, like Philosophy of Mind, has been pretty agnostic about which fields it considers, looking at anything that might possibly shed some light on the subject), the problem is that the two explanations and definitions of consciousness are incompatible, and so anyone who understands the explanations and chooses one or the other should dismiss anything more than supplementary use of the other as a waste of time.

They even have clashing consequences.  Functionalism allows for AI to be conscious and have qualia, but cannot answer Carrier's question about why we have qualia and our desktop computers probably don't.  The neural view, on the other hand, does not allow for AI to be conscious and have qualia, but does have an explanation for why biological beings seem to be special.  Each of them can solve the problems of the other, which is why at least some people tend to run them together so loosely.  But they cannot both be correct, because they have contradictory implications.

But ultimately both of them end up bailing on the question of qualia and trying to put our focus somewhere else.  Functionalism does that by making the functions primary, so that we can find out about consciousness and qualia by looking at the functions instead.  The neural view does that by making qualia simply a side effect of neurons and neural firings, so that we can find out about them by looking at the neurons instead.  Both views are clearly aimed at allowing us to ignore the actual perceived properties of qualia themselves, and what we experience when we experience qualia.  But I'm okay with that, as long as they don't use their views to contradict what our experiences are, or to define qualia out of existence.  Of course, that's pretty much what the term "eliminativism" means, so, yes, they keep trying to do that.  But we don't have to let them.

To be fair, most philosophers do not conflate the two views in the way Carrier does.  But they seem more compatible at first blush than they actually are, and I never really realized that until someone tried to use the two separate views to refute the same arguments.  Yes, they are different views, and yes, they aren't compatible; in fact, they are diametrically opposed.  That doesn't mean that they're both wrong.  It does mean that they aren't both right, and you can't use one of them to patch up the problems that the other one is having.

One Response to “Functionalism and Eliminativism About Consciousness are Incompatible”

  1. Free Will the Real World | The Verbose Stoic Says:

    […] Of course, I had to finish Bob Seidensticker’s Silver Bullets first, and then was inspired to write a post on functionalism and eliminativism, so this is the first chance I’ve really had to examine Carrier’s attempt to argue […]

Leave a Reply

Fill in your details below or click an icon to log in:

WordPress.com Logo

You are commenting using your WordPress.com account. Log Out /  Change )

Google photo

You are commenting using your Google account. Log Out /  Change )

Twitter picture

You are commenting using your Twitter account. Log Out /  Change )

Facebook photo

You are commenting using your Facebook account. Log Out /  Change )

Connecting to %s


%d bloggers like this: