Phenomenal Experience and Cognitive Function

What is phenomenal experience, and what relation does it have to consciousness?  The second part of the above question seems absurd; phenomenal experience is a critical part of – if not the entirety of – what it means to be conscious.  However, there is some debate over this.  Often, things like a general awareness of the states of the world are included in consciousness, and in some cases those things seem to be considered its primary components.  Part of the reasoning that things like general awareness might be the primary components of consciousness is the argument that phenomenal experience itself is simply a type of cognitive function, similar to that involved in general awareness of states of the world.  If this is true, then the same mechanisms involved in general awareness should be involved in phenomenal experience, thus justifying the claim that those mechanisms are the primary mechanisms of consciousness.

This essay will argue two main points.  First, phenomenal experience – whatever it is – is the primary component of consciousness and is in fact the entirety of what it means to be conscious.  Second, phenomenal experience cannot be explained by an appeal to any sort of cognitive function, as the critical defining features of what it means for a phenomenal experience to be a phenomenal experience are in no way captured by its cognitive function.  This will be done by showing that we can have all of the cognitive functions of phenomenal experience without actually having phenomenal experiences.  Then, I will show that phenomenal experience still exists and still retains everything that was interesting about it even if all of its direct cognitive functions are removed from it.  Finally, alternative views that would still make phenomenal experience in some way a result of or identical to cognitive functions will be examined to show that they have, at least, some unresolved issues to address before they can be considered clear front-runners.  All of this will be done in the framework of Chalmers’ phenomenal/psychological distinction, which will allow us to highlight the differences between the phenomenal and the psychological more precisely than could be done otherwise.

First, let us look at a distinction that is made by David J. Chalmers.  In his book “The Conscious Mind”, Chalmers proposes a distinction between the phenomenal and the psychological aspects of mind.  For him, the phenomenal “ … is the concept of mind as conscious experience, and of a mental state as a consciously experienced mental state” [Chalmers, The Conscious Mind, pg 11].  Whereas the psychological “ … is the concept of mind as the causal or explanatory basis for behaviour … it plays the right sort of causal role in the production of behaviour, or at least plays an appropriate role in the explanation of behaviour … What matters is the role it plays in a cognitive economy” [ibid, pg 11].  Perhaps this statement best sums up Chalmers’ idea of the phenomenal/psychological distinction: “On the phenomenal concept, mind is characterized by the way it feels; on the psychological concept, mind is characterized by what it does” [ibid, pg 11].

For Chalmers, the phenomenal is the experience we have in our mind or consciousness.  It is what it is like to see red, or hear a trumpet play, or feel a cold wind, or feel pain.  Basically, the phenomenal is nothing more than our conscious experiences, according to Chalmers.  On the other hand, the psychological is perhaps the harder-working part of our mind.  The psychological is – at a minimum – the set of terms and structures that we would use or create to talk about how the mental is involved in producing behaviour.  It seems reasonable to conclude from the above quotes that Chalmers is not particularly concerned with psychological terms mapping to specific structures in the brain or anywhere else; as long as the term is useful for describing the role of the mind in our behaviour, then for Chalmers it would be an appropriately psychological term.  As long as it talks about what a mind does, it would be psychological.

So why is Chalmers’ phenomenal/psychological distinction important to this paper?  It allows us to nicely split the relevant parts of consciousness along two lines.  What many refer to as “cognitive function” should obviously fit nicely into the psychological, while phenomenal experience will obviously take up the phenomenal side of the distinction.  This frames the debate in an interesting way: if the phenomenal is going to be about cognitive function, there will have to be a psychological explanation of it.  If no psychological explanation can be advanced for the phenomenal, that means that what it means for something to be phenomenal is not simply a matter of a type of cognitive function.  It may still be the case that the phenomenal is either the result of certain cognitive functions (or, more likely, their implementations) or that it is identical to (perhaps just from a different viewpoint of) certain cognitive functions, but at least we will no longer be trying to explain phenomenal experience by appeals to cognitive function.  And of course the converse is true; if you can explain everything interesting about phenomenal experience by appeal to strictly psychological concepts, then phenomenal experience is indeed about cognitive function.

The first step in examining whether or not the psychological can explain phenomenal experience is to look at whether or not all of the cognitive functions that we could ascribe to phenomenal experience could be achieved through other means.  If none of the cognitive functions are uniquely phenomenal, we have some reason to doubt that any interesting definition or distinction of phenomenal experiences can be made using strictly psychological arguments.  Moreover, if all of the cognitive functions can be achieved by mechanisms that do not involve phenomenal experiences, we have reason to doubt that cognitive functions can be the primary component of consciousness; surely a large part of being conscious is having phenomenal experiences, and if all the cognitive functions can proceed without phenomenal experiences, consciousness must therefore be primarily characterized by the uniquely phenomenal experiences that we have.

To that end, imagine that a friend of yours has just bought a new car.  Another friend who has seen the car mentions to you that the car was red.  At this point, it can be said that you are aware of what colour the car is; you can go out and act as if the car is red in the same manner as if you had actually seen the car.  For example, you can go out and buy seat covers in a matching colour to the car, or tell other people that the car is red, and so on.  All of this occurs without ever having to have had a phenomenal experience of the car or its colour.  While one certainly could and almost certainly would imagine the car and its colour when being told about it, it is clear that one would in no way have had to do so to be able to take those actions.  In addition, we can almost certainly all think of cases where that imagining did not occur.  So here we seem clearly to have a case where we are aware of a quality of the world without having any sort of phenomenal experience of that quality.  We clearly had a phenomenal experience of the person saying that the car was red, but not actually one of the colour of the car.

What is critical about this example is that basically it seems that all of the appropriate functionality of the car being red can be covered, even though the person had no experience of the car being red.  Basically, what the mind is doing seems to be the same even though the phenomenal experience never occurred.  So it is hard to imagine any sort of psychological explanation that could explain what the phenomenal experience of seeing a car really is in this case, since all of the psychological aspects of seeing a car were captured without ever actually having had the phenomenal experience.

A strong objection to this reasoning is that this example is a fairly weak one with respect to psychological aspects.  This situation seems like just a case where someone formed a belief: the car is red.  We already know that once a belief is formed, there are a fair number of things that we can do using it that do not matter at all to how it was formed. But what about “real-time” cases?  What about the cases where we are experiencing something and then acting on it?  Surely those cases would reveal an interesting cognitive function for phenomenal experience to play.  It seems that this example relies on capturing the appropriate functions of the weak “I have a belief” case, and then stating that none of those are interestingly about the phenomenal since we can have all of them without having had an experience.  But beliefs have been studied for quite some time as things completely separate from phenomenal experience; surely it is no surprise to anyone that you can act on them whether or not they were formed by a phenomenal experience.

To deal with this objection, we have to examine real-time phenomenal experiences directly.  This is, of course, a tricky matter, since phenomenal experiences seem to always be involved in any real-time cognitive functions that we participate in.  But here is an example that might prove illuminating: Imagine that you are wearing a pair of glasses that filter out all the colours from all the objects you see.  However, you are also carrying a spectrometer that measures the wavelengths of all the light reflected from all the objects that you come across.  As you move about the room, you look at objects and then quickly glance down at the spectrometer to see what colour each object is based on the measured wavelength.  You can take any and all immediate actions that you could take based on the object being red at that point.  Again, it seems like all of the cognitive functions that you could have with respect to phenomenal experience are captured in this case.  But it is clear that you have had absolutely no actual experience of red here, and so no actual phenomenal experience.  So this implies that all of the relevant cognitive functions that the phenomenal experience of red may give us are present and available in this case – without a phenomenal experience of red.  Again, it seems that the cognitive functions of phenomenal experience are absolutely no different from the cognitive functions produced by other sorts of things.
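The spectrometer case can be caricatured in code.  The sketch below is entirely my own illustration – the function names are invented and the wavelength bands are rough textbook figures – but it shows the structure of the thought experiment: colour-appropriate action driven purely by a numeric reading, with no experience of colour anywhere in the loop.

```python
# Hypothetical sketch of the spectrometer case: all "colour behaviour"
# is driven by a measured wavelength, never by an experience of colour.
# The wavelength bands below are rough textbook approximations.

def colour_from_wavelength(nm: float) -> str:
    """Map a measured dominant wavelength (nanometres) to a colour name."""
    if 620 <= nm <= 750:
        return "red"
    if 495 <= nm < 570:
        return "green"
    if 450 <= nm < 495:
        return "blue"
    return "other"

def act_on_object(nm: float) -> str:
    """Take the same 'real-time' action a sighted observer would take."""
    colour = colour_from_wavelength(nm)
    if colour == "red":
        return "buy matching red seat covers"
    return f"note that the object is {colour}"

print(act_on_object(680.0))  # acts on 'red' without ever seeing red
```

Nothing in this little system has, or needs, anything like an experience of red; the discrimination and the resulting action are carried entirely by the number.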

An argument could be made here that this is the same as the first case: all that I am doing is talking about the functions that are the results of beliefs.  This would seem to be a fairly odd claim; what else could be done by the experience of red that could not be done in the spectrometer example?  There does not seem to be anything missing.  At any rate, at this point I am open to someone pointing out what function could be missing from my example.  Basically, all direct actions are covered, so only indirect actions or a formation of some structure could be missing.  So it seems that, at least, all direct actions of colour experiences would also be covered by the spectrometer example.

In that light, let us examine the question of whether or not all phenomenal experiences could be dealt with in a similar manner.  One of the most famous examples that attempts to prove that all of our cognitive functions could be captured without ever having a phenomenal experience is the zombie example.  The basic notion of the zombie is the claim that in general it is logically possible to have a being that acts in exactly the same way as humans do, and yet does not have phenomenal experiences (or consciousness, as it is normally phrased).  Now, there are several types of zombies that we need to consider here.  In general, the zombie that is most talked about is the physical-behavioural zombie; in short, the zombie is physically identical to humans and yet for some reason does not have phenomenal experiences.  This sort of zombie is generally used to argue that consciousness and phenomenal experiences cannot be physical.  However, this sort of zombie makes a much stronger claim than is necessary here; no one need argue – yet – that phenomenal experiences do not have a physical basis.

We can turn to a strict behavioural zombie to make the claim we want.  A behavioural zombie is just a zombie that acts precisely the way we do, but does not have phenomenal experiences.  Andrew Brook and Paul Raymont share a conviction with Daniel Dennett that more may need to be said here.  It is not obvious that a behavioural zombie stipulated to act exactly like us has really been imagined as acting exactly like us.  Some may find the thought experiment plausible only because they do not immediately grant that such a zombie would even talk about its inner states the way we do.  As they put it: “To assess what is really going on in zombie thought-experiments, Dennett introduces what he calls zimbos. Zimbos are simply zombies described without cheating: they behave like us in every respect. Zimbos are lively and animated, express pride in their kids, curse when they smash their thumb with a hammer, make all sorts of fine distinctions about the vividness, clarity, etc., of their experiences, wax rhapsodic about how they feel when they are with someone they love – in short, they behave in all the ways relevant to their conscious life just as we do.” [Brook and Raymont, A Unified Theory of Consciousness, Chapter 3, pgs 17 – 18].

There is more to zimbos than the above description, which I think everyone will accept as having to be the case if we are going to talk about behavioural zombies.  The more worrying and controversial claim about zimbos is this: “Moreover, zimbos do more than just behave like us. Their behaviour has causes. We can explain their behaviour only by postulating that they have as rich a repertoire of psychological states and the same kinds of access to them as we have and do.” [ibid, pg 18].  This is a bit more controversial.  In the first description, they simply argued that the zimbo produced behaviour like we do, including all behaviour about internal states.  It is quite credible to think that they could do this without having anything going on internally or even causally like we do.  Taking the physical-behavioural restriction out of the picture, we could posit all sorts of internal physical structural differences that could explain why they acted that way but did not have phenomenal experiences, which it seems clear would also allow us to posit all sorts of different – but generally functionally equivalent – psychological states for our zimbo.  After all, if they do not have to have the same physical structure as we do, why would they have to have the same psychological states?  But the second description requires that their psychological states be exactly the same as ours; this is a much more stringent requirement.

The obvious answer to the second challenge is to reject it.  After all, in many discussions it is not necessarily relevant whether or not such behavioural zombies are zimbos in the sense that they have precisely the same psychological states as we do.  For example, in the physical zombie case it is not relevant whether or not the psychological states can be said to be identical (whatever that would mean) to show that the zombie could be physically identical to us and yet not have phenomenal experiences.  If psychological states were purely physical, then that would be assumed in the example.  And if they were not, then they certainly could be different.  For other behavioural challenges, there is no strict requirement that the psychological states be identical, as long as they provide the same functionality.  So in most cases this demand can simply be ignored.

However, that option is not available to me here.  The reason is the phenomenal/psychological distinction that is forming the framework of this paper.  The argument is that there is no psychological distinction that can form what it means to be a phenomenal experience.  There is nothing psychological about a phenomenal experience that defines what it means to be a phenomenal experience.  If that is going to be true, then it is going to have to be the case that my zombie is a zimbo in the strongest sense: there is nothing psychologically different about the zombie.  Only the phenomenal experiences are different.  While proving that there is no difference in the psychological states between zombies and non-zombies is a daunting task, I will take a decent stab at it, and attempt to show that the claim is at least somewhat plausible, even if not conclusively proven.

For the remainder of this paper, you can assume that my behavioural zombies fulfill the definition of zimbos in the strongest sense.

Before diving into the zombie, a ground rule must be established.  Brook and Raymont attempt to establish the target of the zombie example – that is, whether it targets consciousness of the world or ‘consciousness of consciousness’.  They conclude that it must be ‘consciousness of consciousness’.  Their reasoning is:

“What about zombies? Would they still be conscious of the world? It is hard to see why they would not be. They are supposed to be able to make all the discriminations about the world that we can make. Thus, they would have to be susceptible to the same differences between how things are and how things appear to us as we are, and how things appear to them would have to play the same role in their belief-formation, or at any rate in shaping their belief-expressing behaviour, as they do in us. If so … they are conscious of the world.” (emphasis theirs) [ibid, pg 12].

In order to see if this reasoning is valid, we now have to turn to how they view consciousness.  In an earlier chapter, they come up with an idea of what every theory of consciousness is going to have to have, even if it is not a precise definition: “Common core of the concept of consciousness: consciousness is at least a matter of things appearing to a cognitive system.” [Brook and Raymont, Chapter 1, pg 14].  So what does this mean for consciousness about the world?  “When we are inclined to ascribe conscious access, there is evidence that how things appear to the organism, how the organism represents things, is what is shaping its behaviour.” [ibid, pg 17].  So it seems that to them determining whether or not something is conscious is going to relate to how the ‘appearance’ maps to the behaviour the entity takes towards the things in question.

The problem is that the word ‘appearance’ is a bit vague, and unfortunately vague in just the right way to be problematic here.  The first interpretation of ‘appearance’ is that it really does just mean phenomenal experience.  On this reading, when they talk about how to ascribe consciousness of the world to something, they would mean that the entity has phenomenal experiences of things in the world and that the qualities of those phenomenal experiences matter in determining what the entity does.  Now, if this is what they mean, it seems odd that we can ascribe this to the zombie.  Why?  Well, what we demanded is that the zombie have precisely the behaviour that we have with respect to things in the world (for consciousness about the world).  The examples earlier in this paper show that, in general, for qualities of things in the world – even for phenomenal qualities like colour – something can act as if it had those experiences and yet never have had them.  The person in the spectrometer example can take all actions as if those things are red.  So we have no reason to grant that a zombie that acted exactly like us with respect to the qualities of things in the world would have to have phenomenal experiences like we do.  Certainly the person in the spectrometer example would not report that they see red, but that is consciousness about consciousness, not consciousness about the world.  In addition, the set-up of the thought experiment is such that the zombie does not have phenomenal experiences.  If this is what they meant, they would be trying to force us to include phenomenal experiences in the zombie.  This would, however, completely undermine the thought experiment and no further discussion would be required; if we are forced to grant the zombie phenomenal experience at any point, the thought experiment fails.  Thus we would need stronger reasons to include consciousness of the world – by this meaning – in the zombie.

On the other hand, if this is not what they meant by ‘appear’, we can safely grant their version of consciousness of the world to the zombie, and then still argue that it does not have phenomenal experiences of the world.  So phenomenal experiences about the world are still open for discussion.  We are thus not limited to consciousness about consciousness as the target for our thought experiment about zombies in that case; we can include talk about states of the world and in particular phenomenal experiences of them.

Basically, if their definition of consciousness of the world includes phenomenal experiences about the world, then there is no reason for us to grant it to a zombie that acts at a minimum the same way we do towards the world.  If their definition does not include phenomenal experiences, then it can be granted to the zombie without impacting any discussions about phenomenal experiences.  From this, I will state that my target in these discussions should be interpreted as phenomenal experiences in general.  Phenomenal experiences about the world thus remain in play.

Trying to imagine what it is like to be a zombie that acts just like us and yet does not have any phenomenal experiences seems difficult, mostly because it is so far removed from our common experience.  I think that some of the incredulity towards this thought experiment results from the fact that when we imagine such a thing, we immediately start to want to insist that there have to be phenomenal experiences involved.  So what I will try to do is build towards such a thing step by step, arguing that with each step we get closer and closer to having our total behavioural zombie.  Hopefully, we will also drop differing psychological states along the way.

The first step is to talk about what I will call a “phenomenal-behavioural knave”.  Recall from basic logic puzzles the standard notion of “knights” and “knaves”, where knights always tell the truth and knaves always lie.  A “phenomenal-behavioural knave”, therefore, is someone who always lies about their phenomenal experiences.  In addition, in order to avoid their behaviour ratting them out, they always act as if their phenomenal experiences are different than they actually are.  And to forestall an obvious objection, let us also stipulate that they are consistent in their misrepresentations; they may do anything required to buttress previous lies, except tell the truth about their current phenomenal experiences.  Note that this does indeed potentially cause a problem: if we ask them whether they remember saying that, for example, a green car was red, and they do remember saying it, they cannot lie by claiming that they said the car was green (or yellow, or whatever) without giving away that they are lying about their phenomenal experiences.  This is not much of an issue, since a) they can always claim that they do not remember, and b) we are not too concerned about whether they can be found out, as will be shown as we work through the example.

An additional stipulation needs to be inserted here when dealing with phenomenal experiences like pain.  Very intense pains are such that one cannot help but react to them in at least some way, spoiling the faking.  So we should limit the knave in this step to only those actions that they have deliberate control over.  This should not cause too many problems, since those sorts of reactions to very intense pains could be claimed to be reflexive anyway, and certainly not any sort of reaction to an experience itself.  Thus, they would have their own purely psychological explanations without resorting to our thought experiment.

Now, such a knave could – in general – act in precisely the same ways as someone who was not a phenomenal knave.  This would even include all actions and behaviours of consciousness about consciousness.  All they would have to do is lie about what experience they were actually having, or what the qualities of that experience are like.  And they do not have to lie about absolutely everything; they can claim, for example, that the qualities that would make an experience one of pain are the same qualities as those we would give.  It is just that when it comes to saying what experiences they are actually having, they lie about that.

This can be made clearer with an example.  Imagine that our phenomenal-behavioural knave is looking at a red car.  We ask the knave what colour the car is.  The knave replies that it is green even though he experiences it as being red.  We ask the knave later what colour the car was.  The knave dredges up the memory of the experience, and that phenomenal experience shows the car as being red.  So the knave is going to lie about that phenomenal experience as well.  But the knave remembers that he said that the car was green, so he chooses the consistent lie and says that it was green.  So we start to ask the knave questions about colour experiences in general, such as “Which colour is darker, red or green?” or “What colour do you get if you mix red and green?”.  The knave answers all of these questions perfectly accurately, because these are not about phenomenal experiences that the knave is actually having; these questions can be answered without any appeal to a phenomenal experience at all.
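The knave’s policy reduces to two rules: lie consistently about any colour actually experienced, and answer general colour questions truthfully, since those involve no current experience.  A toy sketch of that policy (every name and data value here is my own illustrative invention, not anything from the literature):

```python
# Toy model of the phenomenal-behavioural knave's reporting policy.
# Rule 1: lie about any colour actually experienced, using a fixed
#         inversion scheme so the lies stay mutually consistent.
# Rule 2: answer general colour questions truthfully, since they
#         involve no current phenomenal experience.

MISREPORT = {"red": "green", "green": "red"}  # the fixed lie scheme
GENERAL_FACTS = {
    # general knowledge, answerable without any current experience
    "What colour do you get if you mix red and green light?": "yellow",
}

class Knave:
    def __init__(self):
        self.past_reports = {}  # what was said about each object, for consistency

    def report_experience(self, obj, experienced_colour):
        # Repeat any earlier lie about this object rather than contradict it.
        if obj not in self.past_reports:
            self.past_reports[obj] = MISREPORT.get(experienced_colour, "blue")
        return self.past_reports[obj]

    def answer_general(self, question):
        # No experience is being reported, so no lie is required.
        return GENERAL_FACTS.get(question, "I do not remember")

k = Knave()
print(k.report_experience("car", "red"))  # lies: green
print(k.report_experience("car", "red"))  # the same consistent lie: green
print(k.answer_general("What colour do you get if you mix red and green light?"))
```

The point of the sketch is only that the two rules are well defined and mutually consistent; nothing in the policy itself requires access to what the experience is actually like, beyond a label to invert.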

Now, the objection can be raised that such a knave would be fairly easy to detect.  After all, we would constantly note that they say really odd things like “That car is green” when we all experience it as red, and that they report no pain when something obviously painful happens (even excluding intense pains from consideration).  Surely something is not right about them.  This is, of course, true, but the question is: what?  Is it that they are lying about their phenomenal experiences?  Or that they have completely messed-up phenomenal experiences?  We cannot tell based on the evidence of either what they do or what they say involving their phenomenal experiences.

This argument should satisfy most of the requirements for most discussions of behavioural zombies.  The knave acts as if he has different experiences than he has, and we cannot tell what experiences he is having (or even if he has any at all).  So if you are not involving the psychological or physical states, the knave is more than sufficient.  However, this is only the first step for us, since we care about the psychological states being identical, and there is clearly a different psychological state involved here: the deliberate lying on the part of the knave.  So let us move on to the next step.

The next step is to talk about “random phenomenal-behavioural knaves”.  These are phenomenal-behavioural knaves that always misreport their phenomenal experiences, but not in any deliberate way.  They just report something other than what they experience.  This, of course, is done without their deliberate intervention; they are not deciding what to report or how to act, but are just acting or reporting.  What we have in this case is similar to the first case: all of the beliefs and representations that are required to directly produce the behaviour must be present, or else the behaviour would not get produced.  However, we have cut out the extra psychological processing of deliberately noting the experiences and then acting as if those are not the ones that really occurred; in this case, the behaviour just happens and just is different.  (It should be noted that this case is very similar to either inverted-spectrum or locked-in-mind cases, depending on whether the knave can or cannot tell the difference.)

Since the behaviour is no longer linked to phenomenal experiences, the leap to the behavioural zombie seems simple: just take all the experiences away while keeping the behaviour, so that the zombie acts as if it has proper phenomenal experiences (which it does not) – and voilà, the behavioural zombie.  And this holds for both phenomenal experiences about the world and consciousness about consciousness; all the beliefs and representations that are required to act on and report about the zombie’s supposed phenomenal experiences must be there, since it is acting on them, but it simply is not having those experiences.

So probably one of the first questions that can be asked here is: if we can follow through these steps step-by-step and not only conclude that they are possible but almost certainly could occur – the first step is obvious, locked-in minds have been proven to exist, and inverted spectrum cases are similar to some forms of colour-blindness  – why is this argumentation so unconvincing?  If all of the beliefs and representations are really there in the second case and in the behavioural zombie case, what could be missing?   The main issue that we have here is this question: what is responsible for the zombie or random phenomenal-behavioural knave having those beliefs and representations in the first place?  In the normal case, it seems that the phenomenal experiences are at least responsible for these beliefs and representations, but what is responsible for them in our zombie?  In particular, consciousness about consciousness is a problem; what is responsible for the zombie “introspecting” on its non-existent phenomenal experiences and reporting that they are there and have certain qualities?

Let us really highlight this question by examining it with respect to the phenomenal/psychological distinction.  Assume, as I do, that introspection is a phenomenal process.  Then our zombie would not have that process, so what is it that is producing these beliefs and representations?  It cannot be anything psychological if the above arguments are to hold because that would be a psychological difference.  But let us assume the other alternative and claim that introspection is itself a psychological process; perhaps psychological states “scanning” phenomenal experiences and dragging the qualities of them out of that.  Well, then what is that scanning in our behavioural zombie?  Again, what it is scanning cannot be psychological in nature because that would be a psychological difference.

Before turning to this rather difficult problem, let me first introduce yet another problem whose solution I believe may be relevant.  It can be claimed that all of these behavioural traits are caused by representations of the right kind, especially the ones about consciousness of consciousness, or what the zombie believes it is experiencing.  And then it can be argued that a phenomenal experience is critically a representation.  In addition, it is a representation of the right kind to produce all of the behaviours that are associated with phenomenal experiences.  As Brook and Raymont put it: “Even though it is supposedly not conscious, it will represent itself to itself as conscious, as feeling pleasure, as having pain, and so on. And it will do all these things, ex hypothesi, on the basis of representations of the right kind. (emphasis theirs)” [Brook and Raymont, Chapter 3, pg 19].  If a phenomenal experience is a representation of the right kind, then what kind of representation “of the right kind” could we have that could allow the zombie to produce all of the right behaviour associated with phenomenal representations and yet would not itself be a phenomenal representation?

I will answer this challenge by coming at it backwards: I will argue that a phenomenal experience is not itself any sort of interesting representation of the right kind, because a phenomenal experience may not be a representation at all.  If this argument becomes at least plausible, then we have good reason to think that what makes a phenomenal experience a phenomenal experience is not its representational properties.  A “representation of the right kind” would therefore not have to be phenomenal, and this objection would lose its force.

When we look at things out in the world, the link between them and representations seems, to me, clear.  I look at red cars, and evergreen trees, and white houses.  My visual experiences generally seem to represent an object, and an identifiable object in the world at that.  And these associations seem to be made at precisely the same time as the experience itself, with little to no additional processing required.  So from the visual perspective it seems quite reasonable to claim that experiences are representations.  I argue that from the perspective of sound the link is not as clear.  Imagine that I hear a sound.  I have no idea what the sound is, but I hear it.  What does that sound represent?  It may be a bird call or a motor revving or any number of other things, but which is it?  It seems that all sorts of extra processing is required to turn a sound into any sort of meaningful representation.  In addition, one can look at an unstructured sound experience – like white noise – and ask what in the world it could represent; white noise seems unique mainly in that it cannot actually represent anything that could have any meaning to us.  So I can propose this somewhat startling hypothesis: a phenomenal experience is not itself a representation, but only causes or produces representations.

One may reply that this does not explain how we can usually link any of our sensory experiences directly to a real object, or a real experience of an object, with little to no additional processing – and this applies even to sounds.  So how can it be that they are not really representations?  My theory would be that with enough exposure to an experience we can condition the processing down to a strict memory-retrieval or pattern-matching response, and thus produce a “short-cut” to the right representation.  When the experience occurs, a representation is quickly produced, so it merely seems as though the experience and the representation are simultaneous; in reality, the experience is producing the representation just as it does in the cases where more processing is required.

If one finds oneself unconvinced that I have proven my hypothesis with the above argumentation, that is perfectly all right; it is not supposed to prove the hypothesis.  After all, much work on what it means to be a representation would still have to be done to show that the white noise case cannot be any sort of representation at all, or that a representation has to be meaningful to be a representation.  But the argument above at least opens up a plausible question, which leaves me free to ask: what if it were true?  What would be the case if a phenomenal experience turned out not to be a representation?  Would that critically change what we mean by a phenomenal experience?  It does not seem likely.  The most serious objection anyone could make here is that phenomenal experiences would no longer play the role they seem to play in producing my behaviour, meaning that the view would be epiphenomenal.  Well, first, that would be a change in what a phenomenal experience does, not what it is – so phenomenal experiences would still not be just their representational content – and second, I am not claiming that they do not play the role they seem to play in producing my behaviour.  A phenomenal experience that merely produced representations of the right kind would not be critically different from a phenomenal experience that was a representation of the right kind, at least with respect to what it would mean for it to be a phenomenal experience.  So a representation of the right kind does not have to be a phenomenal representation, even if phenomenal experiences happen also to be representations of the right kind.  And since what it means to be a phenomenal experience is not just to be a representation of the right kind, we have no reason to think that any representation of the right kind therefore just is a phenomenal experience.

This analysis allows us to answer, in some sense, the really hard question of what it would take for our zombie to have all the right representations and beliefs and yet not have phenomenal experiences: it just has to have representations of the right kind, and get them through means that cannot be called “psychological”.  Basically, the zombie’s mind – especially its conscious mind – cannot be creating these representations of the right kind either in what it feels (since the zombie does not have that) or in what it does.  Yet we can off-load this creation to the sensors, which could provide slightly different information or take a larger role in producing the representation of the right kind.  Reasoning can be employed to build the right representation, based on information that is already present and reasoning that may already be done by the system (even though in humans it adds no new beliefs).  The system would have to leverage its resources in different ways, but no new resources would be required.  Since I am not arguing for a physical zombie, the brain can change to simply make different information that was already present the main causal focus in producing the representation.  Basically, the answer to the really hard question is this: the zombie would have to have a representation of the right kind, but that representation is the only relevant psychological state that needs to be the same.  Add that to the claim that what makes a phenomenal experience a phenomenal experience is not its representational function, and we get that there is no relevant psychological difference between our behavioural zombie and us; all that phenomenal experience does is produce or be the right sort of representation, and what makes the phenomenal experience a phenomenal experience is not producing or being the right sort of representation.
Add to that the claim that there is no reason to think that producing that representation need involve new or different psychological states (although that is an objection that could be developed at some later date), and you have your behavioural zombie: one that very likely has no new or different psychological states, and certainly none that are directly relevant to the behaviour we are claiming the zombie shows identically to ours.

Now, can this argument lead us to epiphenomenalism?  It should be clear from my arguments that I claim, at a minimum, that phenomenal experiences do produce representations of the right kind.  All that my arguments attempt to establish is that we could have representations of the right kind without phenomenal experiences.  If you add in the claim that the behavioural zombie could take all of those actions based on information that we already have, it can be claimed that I have no reason to suggest that we are not just using that information now, and that phenomenal experiences are irrelevant to the behaviour that non-zombies produce.  This is a valid claim.  However, I see no reason to insist that it follows from my arguments – what we could do does not determine what we in fact do – and see little reason to take it exceptionally seriously.  It appears to me that phenomenal experiences matter to my actions, and that the qualities of those experiences also matter to my actions.  To make them epiphenomenal would mean claiming that the other possible ways of getting representations of the right kind are what is really being done, and that all of the phenomenal experiences are therefore either just the result of that other method or fortuitously just happen to sync up with it.  More evidence than what I have said here would be required to show that this was actually the case.  But it must be said that my arguments do not rule it out either.

This seems to remove any claim that what is unique or different or important about a phenomenal experience is just its cognitive function or representative power.  But there is an additional set of arguments stating that even though the phenomenal and the psychological do not have to be identical, in practice they just basically are.  I will take a quick look at two of the more reasonable proposals and show that they have unresolved issues as well, issues that may make them at least as doubtful as the proposal that phenomenal experience is different from its psychological counterparts.

The first one I will look at is the “view from inside” argument.  This argument basically states that even though the psychological and the phenomenal may be different, in practice the phenomenal is simply the view from inside of certain states – the psychological ones.  This argument is not generally an argument about phenomenal experience being just what certain physical structures do while implementing these functions – or, at least, that is not when this argument is interesting.  It is interesting only as an argument about the functionality itself: the claim that having that functionality will always produce phenomenal experiences.  If one takes the weak form of this argument and claims that there is a view from inside of conscious beings without listing the functionality that is producing it, then one runs right into the problem of what view from inside a chair or a desk or a computer possesses.  Do they have one now, and if they do, what is it?  But if one starts to list the functionality that would produce these experiences, the problem of showing that the functionality necessarily produces those experiences comes to the fore: could something have all of that functionality and yet not have the phenomenal experience?  Our zombie examples suggest that this is at least possible.  And so it cannot simply be assumed that anything with that functionality must have those phenomenal experiences, which leaves some unresolved issues for this alternative.

However, this is certainly a credible alternative, one that has to be considered and that deeper analysis of the zombie case can hopefully resolve.  The most interesting argument it can raise against my claim is produced by Brook and Raymont: “For any non-cheating putative zombie, there will be no difference between it being and not being a zombie even from its own point of view. (emphasis theirs)” [ibid, pg 19].  Their explanation – that phenomenal experience is the first-person point of view of what is happening – means that if the zombie represents the states properly from its point of view, it must be conscious in all meaningful ways.  The reply, of course, is that they have to show that the zombie has a point of view before claiming this, since we can show that we can misreport our own point of view, and the zombie is assumed not to have one.  Having a state that allows the zombie to report that it has an inner state does not entail that it has an inner state, and so does not entail that it has a point of view.  If its having a point of view were established, however, then it might be the case that the zombie’s point of view would be enough to say that the zombie is conscious and has phenomenal experiences in all ways that matter.

The second argument is much weaker and much more problematic: phenomenal experience is the result of neural firings or neural steady states as they go about doing the psychological work.  Since the neural level is causally closed – this is the main objection to any sort of substance dualism – the phenomenal experiences those firings produce cannot impact what the brain is ultimately going to do, and so cannot impact what action is ultimately taken.  This would result in epiphenomenalism of the worst kind; what experience we actually have would not matter one bit, since the neurons would produce the right behaviour even if they produced the wrong phenomenal experience as a side effect.  One obvious way around this would be to claim that part of our neural structure is a “consciousness module”, which plays a causal role in the neural firings and produces the right phenomenal experiences.  This does not work any better, since those neurons may produce a reasonable phenomenal experience only by accident, and may produce completely incorrect phenomenal experiences while still fulfilling the proper causal role.  In addition, explaining things this way leaves phenomenal experience as either a property of the structure of the brain or a property of the neurons.  If it is a property of neurons, it seems credible to claim that only neurons can produce phenomenal experience until we can identify what property allows them to do so; then we would have to show that other things have, or even can have, that property.  If it is a property of the structure of the brain, we avoid this issue, but that has its own problems: a) it seems somewhat implausible that structure alone is sufficient irrespective of materials, and b) only things structured like the brain could have phenomenal experiences, and so we must throw out any functional notion of phenomenal experience.
Both of these positions have nasty consequences for artificial consciousness; basically, they mean that the only way to get artificial consciousness is to build a brain in some critical way.  About the only thing this theory has going for it is that it ties rather nicely into what the brain actually does, and accepts that evidence more easily than the other views.

This paper has examined the rather tough question of what phenomenal experience is.  While that question has not been answered here, the idea that phenomenal experience is just its cognitive or representative function has been attacked by showing that we could do anything we currently use phenomenal experience to do without actually having phenomenal experiences.  This led to showing that a behavioural zombie is possible, and that there need not be a psychological difference – or, at least, not one that matters – between a behavioural zombie and us.  It was also shown that phenomenal experiences would still be interestingly phenomenal experiences even if they did not represent at all, and that the view that phenomenal experience does not have to play any role in our behaviour does not, at least, entail epiphenomenalism.  Finally, certain alternative arguments were evaluated to see how credible they were; none of them seemed to be necessarily more credible than the idea that the phenomenal and the psychological are not identical.  Hopefully the clash between these credible alternatives can advance our understanding of perhaps the most important concept for human consciousness: phenomenal experience.

