Nailing the Third-Person Science of Consciousness to the Wall.

In “Sweet Dreams”, Dan Dennett waxes eloquent about how he’s sure that the concerns of qualia freaks and of those who like the zombie examples will be seen as quaint and misguided in 100 years, when the third-person view of consciousness that he advocates will have settled all the questions that seem so mysterious to us.

I smells me a challenge.

So, here, I’m going to go through and outline just how no sort of third-person science of consciousness is ever going to be able to settle what qualia freaks like me are interested in, and so never going to be able to settle what consciousness is for us.  I’ll walk through functionalism, romp through neurons, and ultimately stroll with zombies, cyborgs and Martians as I both nail views like Dennett’s to the wall and then put nails into their coffin lids, before being magnanimous in victory and letting them have their view of consciousness.  And I’ll do all of this while holding tightly to the hand of phenomenal experience because, well, I’m a qualia freak.  What did you think I was going to talk about?

After a set-up like that, how can I fail?

So, first up, functionalism.  Functionalism about mind in its basic form is essentially this:  consciousness is as consciousness does.  It’s similar to behaviourism in that it tracks what the organism is doing in judging consciousness and doesn’t care much about the underlying implementation.  As such, it’s pretty popular with people who think that a computer can be intelligent or conscious, because you don’t need neurons to get consciousness; you just need to get it acting the right way.  Now, while there’s nothing in functionalism that says that internal functions can’t matter, if we’re going to have a third-person view we’re not going to be able to rely on looking at what’s going on inside the process; it has to be things that are observable from the third (or perhaps second, as Dennett himself says) person view.  Things that are only accessible at the first-person view are not going to work for Dennett’s preferred viewpoint.  Which is right, because it’s that first-person view that people like Nagel, Chalmers, and myself are worried about.  So if he can explain everything without having to go there, our concerns over the mysteries of the first-person view vanish; they were just misleading sidebars.

So, what ultimately will have to happen is that this science will have to be able to explain and differentiate all interesting things about consciousness — which, for now, includes phenomenal experience — without having to ever look at the view from inside.  If this science can track down what phenomenal experiences someone is having simply by looking at what they say and do, then it will have succeeded, and all the qualia freaks will simply have to slink off into the corner to cry.

So, the obvious first step is to see if that can be done.

Before starting, I need to introduce another concept, that I first described here:

https://verbosestoic.wordpress.com/phenomenal-experience-and-cognitive-function/

The “phenomenal-behavioural knave”, or, as I’d like to shorten it now for more punch, the phenomenal knave.  What’s a phenomenal knave?  Well, it’s a creature/thing/person that has phenomenal experiences, but always misreports them, where misreporting is not just the simple “It says it sees green when it sees red” but carries through all of its behaviour.  The phenomenal knave, when asked about its experiences, will claim that they are different from what it actually experiences, but will also act as if they are what it reports them to be, and will do so consistently.  In short, you won’t be able to tell that it isn’t really seeing green when it looks at a red object, because all of its visible behaviour will be consistent with it seeing green.

This was, of course, inspired by the classic knight/knave logic problems.  And from here, we can establish the possibility of such a phenomenal knave.  What we’d have — putting it all back into that structure — are phenomenal knights that cannot lie about their phenomenal experiences, phenomenal knaves that cannot tell the truth about their phenomenal experiences, and everyone else who can either respond — and act — truthfully or deceitfully about their phenomenal experiences.  So, can we have phenomenal knaves?  Well, can we lie about our experiences?  Yes?  Then why couldn’t you have someone who simply couldn’t do anything but lie about their experiences?  So you can’t reject it on the grounds of implausibility; even if it has never happened, it certainly could.

So, now, in order to get at functionalism I’m going to compare two completely different types of people.  The first are people with red-green inversion (RGIs), who legitimately see red as green and green as red and act accordingly.  The second are a special type of phenomenal knave, whose misreporting is limited to red and green: they report that they see red as green and green as red, but in fact see both normally.  Call them PK-RGIs.

So, we can see that there is a critical difference between RGIs and PK-RGIs, and that difference is entirely at the phenomenal level.  RGIs actually have a different phenomenal experience from both normal people and PK-RGIs.  PK-RGIs actually experience the world exactly the same way as normal people do, but they don’t act that way.

So, at the third-person observable functional level of behaviour, what can we say about RGIs and PK-RGIs?  Can we capture this critical difference?  Well, it turns out that we can’t.  They act the same way.  Every single test that you can run on RGIs and PK-RGIs will have them acting precisely the same, by definition.  Functionally, then, they’re identical, at least from the outside view.  Functionalism, then, would fail to capture this critical distinction:  the distinction between someone who is really having an inverted red-green experience and someone who is not having an inverted red-green experience but is acting consistently — and unwillingly — as if they are.
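
Since the argument turns entirely on behavioural indistinguishability, here is a minimal sketch of the point in code.  Everything in it (the class names, the experience/report split) is my own illustrative scaffolding, not a model of any real mechanism; it just shows two agents whose internal experience differs while their observable outputs are identical by construction.

```python
INVERT = {"red": "green", "green": "red"}

class RGI:
    """Genuinely experiences red and green swapped, and reports honestly."""
    def experience(self, stimulus):
        return INVERT.get(stimulus, stimulus)  # inverted at the phenomenal level

    def report(self, stimulus):
        return self.experience(stimulus)       # an honest report of the experience

class PKRGI:
    """Experiences colours normally, but unfailingly misreports them as swapped."""
    def experience(self, stimulus):
        return stimulus                        # normal at the phenomenal level

    def report(self, stimulus):
        return INVERT.get(stimulus, stimulus)  # a consistent, involuntary misreport

# A third-person test sees only report(); on every stimulus the two agree,
# even though experience() differs on red and green.
for stimulus in ("red", "green", "blue"):
    assert RGI().report(stimulus) == PKRGI().report(stimulus)
```

Any test restricted to report() comes out the same for both, which is the functionalist’s predicament in miniature.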

Note that here we can escape the issue of “consciousness about consciousness”, since the internal perception of the experience is not being called into question.  Internally, PK-RGIs very much know that they are seeing red as red and green as green.  They just never reflect that in any of their external behaviour, and so we could only determine the difference by sharing in their first-person view, which is what the functionalist and the advocate of a third-person science don’t want to and, really, cannot rely on.

So, functionalism doesn’t work here.  Is there any other third-person accessible perspective that can save the day?  Well, yes, there is: neuroscience.  What we can do is — in at least this case — break the “implementation-independence” of functionalism and look at the brain directly.  The hope is that through the self-reports of normal people and RGIs, we can determine where in the brain the phenomenal experiences are being generated and where they’re being reported.  Once that’s been settled, we can then happily point out what must be true:  that at the physiological level, RGIs have issues in the phenomenal experience generation faculty while PK-RGIs are generating experiences like normals but have an error in their reporting faculty.  This, then, would solve the problem, and we could tell the difference.  So functionalism would fail, but functionalism plus neuroscience would succeed.  And there was much rejoicing.

So, let’s reiterate where we are.  In order to save the third-person science of consciousness from the challenge of the PK-RGI, the advocate of that position must retreat to neuroscience and identify where in the specific implementation phenomenal experiences are generated and where they are reported/acted on, and only then can they identify this important difference: that PK-RGIs don’t actually experience things the way RGIs do, but experience things the way normal people do.  This is pretty much their only way out; functionalism won’t do it and nothing else can do it while remaining third-person.  So, saved by neuroscience, right?

Well, not quite, because it opens up another set of interesting problems.  In order to solve the problem they had to identify the specific mechanisms in the brain that generate phenomenal experiences, be that a specific module in the brain or a specific quality of neurons, or whatever.  Thus, they can say that the RGI case is an example where that mechanism fails and note that in the PK-RGI case that mechanism is working fine, but it’s downstream actions — things that occur after the experience is generated — that alter the behaviour of the PK-RGIs so that they look like RGIs.  So we know what produces the experiences.

Now, enter the cyborg.  Imagine that we take someone and replace whatever it is in the brain that produces the phenomenal experiences with a set of computerized mechanisms that don’t produce phenomenal experiences, but we hook it all up so that they take the same inputs, produce the same outputs, and hook into the non-phenomenal — and thus behavioural — aspects in the right way so that everything works as it did before.  To forestall an objection, I’m not saying that computerized mechanisms can’t produce phenomenal experiences, just that these don’t.  If we hook these up in the right way, it looks like we could have a person who in fact acts just as they did before the implantation of those computerized devices, but doesn’t have phenomenal experiences anymore because those devices don’t themselves produce them.
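
Here is a toy sketch of the swap.  The module names and the pass-through signal are pure invention on my part, not a claim about how brains are actually wired; the stipulation is just that the replacement preserves the input/output mapping while, by hypothesis, generating no experience.

```python
class PhenomenalModule:
    """Stands in for whatever generates phenomenal experience (by stipulation)."""
    def process(self, signal):
        self.experience = signal  # the experience itself, which nothing downstream reads
        return signal             # what actually gets passed downstream

class ComputerizedReplacement:
    """Same input/output mapping; by hypothesis, nothing experiential happens."""
    def process(self, signal):
        return signal

def behave(output):
    # Downstream behaviour is fixed entirely by the module's output.
    return "says: I see " + output

# Swapping the modules changes nothing observable, which is the worry developed below.
for signal in ("red", "green"):
    assert behave(PhenomenalModule().process(signal)) == behave(ComputerizedReplacement().process(signal))
```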

And here we can see a big problem:  at this point, we have to be committed to epiphenomenalism, the idea that our phenomenal experiences don’t actually have any causal impact on our behaviour.  We could have completely different ones — or not have any — and our behaviour wouldn’t change.  If the above cyborg is possible, then epiphenomenalism follows.  While some people might not have a problem with that — Jaegwon Kim for one — for most people this is slightly problematic.

So, the most immediate reply would be to deny that this is possible; since phenomenal experience is missing, the behaviour cannot be the same, since the qualities of phenomenal experience have to matter in how we act.  The problem is that how we tried to answer the PK-RGI case seems to make this unreasonable; PK-RGIs have a different reaction to the same phenomenal experience as others.  One could claim that that mechanism at least needs phenomenal experience as an input, but at the neurological level that would simply be neurons firing and neural connections being made that cause firings in the behavioural faculty.  If we had a module, it would be trivial to replace the phenomenal module with a computerized module that simply activates the connecting neurons without itself generating the actual experience, and if it is more distributed or is a quality of the neurons themselves, at some point we pretty much have to get into things that don’t themselves experience, even if that’s just the nerves and things that directly move arms and legs and activate voice boxes.  Unless one wants to assert that everything in the body produces phenomenal experiences, if you can identify in the brain what causes phenomenal experiences you can replace those things with something that doesn’t produce those experiences yet still hooks up in the right way to all the things that don’t do phenomenal experience but ultimately implement behaviour.  This may not be a slam-dunk argument, but it seems to me to be difficult to imagine how this isn’t a consequence of a neural story.

And establishing this is important because of the next — and more interesting and important — problem:  by this, we have zombies.  No, we don’t have physical zombies like Chalmers wanted; their physical make-up is, in fact, completely different.  No, what we have are behavioural zombies, things that act exactly like they have phenomenal experiences in all ways — even to the level of consciousness about consciousness — but don’t have any at all.  Taking the third-person route and trying to explain RGIs and PK-RGIs reveals that, yes, we have zombies.

But this doesn’t seem like a problem, does it?  After all, we can look at the physiology that we discovered and detect whether we have a zombie, because we can see that the physiology that’s supposed to produce phenomenal experiences is missing or impaired.  So we can tell zombies from non-zombies at the third-person level.  Putting aside the potential for epiphenomenalism, isn’t this exactly what people like Dennett have promised?

Well, it turns out that there’s a problem.  Note that all of the discoveries of how phenomenal experience works in the brain are going to have to be based on self-reports and observations of behaviour, and our discussions of RGIs and PK-RGIs have revealed that that isn’t exactly safe; we could not distinguish them at the self-report and behavioural level.  It turns out that this also holds at the zombie level; they act at the behavioural and self-report level just like non-zombies.

But who cares, right?  After all, don’t we have the physiological differences to settle this?  The problem here is that we did rely on the self-reports and behaviour to determine that for the “normal” physiology, but when confronted with the potential zombie we have to ask “Do they really lack all phenomenal experience, or are they just implementing it with a different physiology?”.  After all, their self-reports and behaviour indicate that they do really have phenomenal experiences.  If we judged based on that, we’d have to conclude that they have phenomenal experiences.  Should we reject that just because their physiology is slightly different?

Okay, okay, it might seem reasonable to say that for humans the physiology would be close enough and clear enough that we could eliminate that.  After all, if we know enough about the brain we ought to be able to tell when the mechanisms aren’t there and aren’t being replaced by something else.  So, then, let’s turn to another case: the Martian.

Imagine, then, that Martians appear and have a completely different physiological structure for their minds than we do.  If they have anything that could even remotely be considered a brain, it’s nothing like our brains.  Yet they act just as phenomenally as we do.  The question is: from the third person, could we tell whether they really have phenomenal experiences or whether they are simply zombies?

Well, we couldn’t use their self-reports and behaviour, as we’ve already seen.  And we couldn’t use neuroscience and physiology, because their physiology is completely different from ours.  So, ultimately, we could not tell with the third-person science at our disposal whether these things have phenomenal experiences or are just zombies.

And to the extent that having real phenomenal experiences is required to be conscious, we couldn’t tell, using any of the methods we have or any that we can currently foresee, that they’re really conscious.  We’d need the first-person viewpoint for that, which is precisely what people like Dennett wanted to deny.  The first-person viewpoint, then, is critical for determining whether something is conscious, so long as having phenomenal experiences is critical to being conscious.

So this leads to one path out of the problem:  deny that having actual phenomenal experiences is important for consciousness.  You can take a tack from people like Andy Brook and argue that being conscious is just being aware, and being aware is just about having the right sort of representations, representations that let you act as if you are really seeing, say, red instead of green.  The Martians, the zombies, the RGIs, the PK-RGIs, the cyborgs and all of us have those representations, as evidenced by our behaviour and self-reports.  Since we do, we’re all conscious, and what phenomenal experiences we’re really having — if we’re having any at all — just don’t matter.

At this point, the qualia freaks will rightly cry foul.  It seems that, for us, having phenomenal experiences is really, really important to what it means to be conscious.  You don’t just get to dismiss it by definition or fiat, since that’s what all the qualia freaks think is the defining quality of consciousness, and it seems that most people in their everyday lives think of it that way as well.  As an example, when I’m asleep and not dreaming I’m not experiencing anything and am not conscious.  When I’m dreaming, it’s unclear if that really counts as conscious or not.  But when I’m awake and walking around, I’m definitely experiencing and definitely conscious.  And it would be hard to imagine that experiencing things doesn’t mean that you’re conscious.

So, ultimately, my claim here would be that phenomenal experience is not necessarily sufficient for consciousness, but it is necessary.  This could be countered with the claim that we’ve been wrong all along, and that phenomenal experience is sufficient for consciousness, but not necessary.  If you’re having phenomenal experiences, you’re conscious, but you can be conscious without having them.  That would explain why we think that phenomenal experience is necessary for consciousness: we are misled by the fact that, for us, the two generally go together.  The examples I’ve given, then, just support their claim about consciousness.

And here is where I get to be magnanimous: I accept this, at least for the sake of argument.  Why?  Because at this point, the qualia freak and the mysterian have already won.  See, the objection to the third-person science from qualia freaks and mysterians is precisely that that third-person science will never be able to explain phenomenal experience, and that’s why consciousness will always be mysterious and resistant to third-person science.  And here, the advocate of third-person science would be accepting that, yes, they cannot explain phenomenal experience, but that doesn’t make consciousness mysterious because you don’t need phenomenal experience to be conscious.

So here it becomes clear how the view gets nailed into its coffin and then nailed to the wall.  First, we nail the third-person science into its coffin by proving that it can’t explain phenomenal experience.  And then we nail it to the wall by forcing it to accept that when it comes to consciousness they aren’t talking about and can’t talk about phenomenal experience.  Previous to this, you could look at their models and wonder where phenomenal experience came in, and there seemed to be an underlying presumption that if they got the behaviours and self-reports right, they’d have everything interesting about phenomenal experience, too.  After all, that’s part of consciousness, right?  But to them it can’t be if they want to explain consciousness.  So, no, they ain’t getting phenomenal experience for free — or possibly at all.

So, the qualia freak wins.  The opposition have to concede phenomenal experience to us and the first-person view, one way or another.  And that, really, is all we wanted.

37 Responses to “Nailing the Third-Person Science of Consciousness to the Wall.”

  1. Havok Says:

    Hi VS, “riandouglas” here. Finally gotten around to making my way over here and responding.

    To (sort of) continue our discussion at Eric’s blog, while keeping things relevant to this topic, I’ll simply highlight what I think is your main problem (and where I think you’re assuming dualism to some extent):

    VS: Imagine that we take someone and replace whatever it is in the brain that produces the phenomenal experiences with a set of computerized mechanisms that don’t produce phenomenal experiences,

    Apart from a suspicion that you’d need to replace the majority (or all) of the brain, since I don’t think phenomenal experiences generally are localised anywhere, I’m with you so far.

    VS: but we hook it all up so that they take the same inputs, produce the same outputs, and hook into the non-phenomenal — and thus behavioural — aspects in the right way so that everything works as it did before.

    This I disagree with, and this is where I think you’re smuggling in some kind of dualism (and/or epiphenomenalism).
    It seems to me that if some mind is brain hypothesis is correct, then the phenomenal experiences are outputs from and inputs to the processing the brain does, and so if your replacement cybernetic parts did not produce phenomenal experiences, then they would not produce the same outputs, nor take the same inputs.

    The way I see it is, you are assuming that phenomenal experience is something other than “ordinary” inputs and outputs from brain processing, but to me that is to assume from the outset that no mind is brain hypothesis can be correct, since if one were, what else could phenomenal experiences be?
    For a simple example, when we do math in our head (2 + 2), the outputs of processing that information is the answer “4”, along with the phenomenal experience of having calculated the answer to “2 + 2”.

    VS: If we hook these up in the right way, it looks like we could have a person who in fact acts just as they did before the implantation of those computerized devices, but doesn’t have phenomenal experiences anymore because those devices don’t themselves produce them.

    Given my objection above, this is not possible, since this cyborg person would not in fact act just as they did before – their information processing would lack the inputs and outputs which comprise phenomenal experience, and therefore they could not act in the same fashion. And I suspect that if you made the cybernetic equipment sophisticated enough to actually carry out the same sort of processing that brains actually do, phenomenal experiences would result.

    Now, this objection of mine isn’t enough to “prove” that any mind is brain hypothesis is correct, but I think it is sufficient to undermine your claim that epiphenomenalism follows if the mind is the brain.

  2. verbosestoic Says:

    Well, I think that your description here would contradict modern neuroscience. Basically, in modern neuroscience you have neurons hooked up to other neurons, and they all fire when they reach their “activation potential” (I think it’s called; it’s been a while). When they fire, they excite the neurons they are connected to, which may or may not reach their firing state, and so on and so forth. Now, it is trivial to think that I could replace one neuron with a small microprocessor that is connected in exactly the same ways to all the input neurons and output neurons, and reads the energy states given by the inputs and fires and gives out the same energy levels to the output neurons, so that from the perspective of the other neurons absolutely nothing has changed; it fires just like the neuron it’s replacing. But we clearly have no reason to think that this microprocessor is doing anything with phenomenal experiences; it literally only takes in and produces activation potentials.
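
    To make the substitution concrete, here is a crude threshold-activation caricature (toy numbers of my own, not real neurophysiology): the replacement implements exactly the same input/output rule as the neuron it replaces, so connected units cannot see any difference.

    ```python
    class Neuron:
        def __init__(self, threshold=1.0):
            self.threshold = threshold

        def fire(self, inputs):
            # Fires (outputs 1.0) when the summed input reaches the activation threshold.
            return 1.0 if sum(inputs) >= self.threshold else 0.0

    class Microprocessor:
        """Reads the same input levels and emits the same output levels as the
        neuron it replaces; nothing about it otherwise resembles a neuron."""
        def __init__(self, threshold=1.0):
            self.threshold = threshold

        def fire(self, inputs):
            return 1.0 if sum(inputs) >= self.threshold else 0.0

    # From a downstream unit's perspective, the swap is undetectable.
    for inputs in ([0.2, 0.3], [0.6, 0.5], [1.0]):
        assert Neuron().fire(inputs) == Microprocessor().fire(inputs)
    ```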

    So, what we could do is replace an entire path from the photo-sensitive receptors to the nerves that activate behaviour as a reaction to seeing with these things, and from the perspective of the receptors or the output nerves they’d notice no difference; the activations at each end would be identical. But what reason would we have for thinking that any phenomenal experience is going on here? All we built in were activation potentials, not any sort of phenomenal experience.

    If we then had a cyborg with a completely cybernetic brain built out of these things, what reason would we have for thinking that there are any phenomenal experiences happening at all?

    But this is, to me, a consequence of the “mind as brain” theory: you can replace the behaviour of the neurons — and therefore the actual visible behaviour — with things that we have no reason to think produce phenomenal experiences. And that’s what makes it epiphenomenal; you get the same behaviour whether there are actually phenomenal experiences or not.

  3. Havok Says:

    VS: Now, it is trivial to think that I could replace one neuron with a small microprocessor that is connected in exactly the same ways to all the input neurons and output neurons, and reads the energy states given by the inputs and fires and gives out the same energy levels to the output neurons, so that from the perspective of the other neurons absolutely nothing has changed; it fires just like the neuron it’s replacing.

    Putting aside any problems with analog and digital information processing, I completely agree with you here. I have no problem thinking that we might be able to replace a neuron (or perhaps even a cluster of neurons) with some “artificial neuron” or “artificial neuron cluster”, and if we could have it mimic the behaviour of what it replaced, then the brain would carry on without noticing.

    VS: But we clearly have no reason to think that this microprocessor is doing anything with phenomenal experiences; it literally only takes in and produces activation potentials.

    This is where I think you are sneaking in dualism. You are assuming that phenomenal experience is not a part of the inputs and outputs of normal neuronal interaction and information processing. Of course if you assume that, then phenomenal experiences are either non-physical or epiphenomenal. But I don’t think you’re correct in assuming this.

    VS: But what reason would we have for thinking that any phenomenal experience is going on here? All we built in were activation potentials, not any sort of phenomenal experience.

    Sneaking in dualism again. Phenomenal experience, on a mind as brain hypothesis, would seem to me to be a result of the activation potentials, not something wholly separate from it.

    Do you see where I’m coming from, and why I think you’re assuming dualism from the outset?

  4. Havok Says:

    I just reread your comment, and decided to write another response.

    VS: And that’s what makes it epiphenomenal; you get the same behaviour whether there are actually phenomenal experiences or not.

    This is, again, where I think you’re smuggling in some sort of dualism. This statement (in context) assumes that phenomenal experience is something other than a result of the firings of neurons. With this assumption it is not surprising that you are able to argue your way to dualism or epiphenomenalism. I don’t think this assumption is warranted, and in fact it is explicitly denied by a mind as brain hypothesis (after all, if the mind is the brain, what else could phenomenal experience possibly be?)

  5. verbosestoic Says:

    Havok,

    I think this paragraph addresses your concern:

    So, the most immediate reply would be to deny that this is possible; since phenomenal experience is missing, the behaviour cannot be the same, since the qualities of phenomenal experience have to matter in how we act. The problem is that how we tried to answer the PK-RGI case seems to make this unreasonable; PK-RGIs have a different reaction to the same phenomenal experience as others. One could claim that that mechanism at least needs phenomenal experience as an input, but at the neurological level that would simply be neurons firing and neural connections being made that cause firings in the behavioural faculty. If we had a module, it would be trivial to replace the phenomenal module with a computerized module that simply activates the connecting neurons without itself generating the actual experience, and if it is more distributed or is a quality of the neurons themselves, at some point we pretty much have to get into things that don’t themselves experience, even if that’s just the nerves and things that directly move arms and legs and activate voice boxes. Unless one wants to assert that everything in the body produces phenomenal experiences, if you can identify in the brain what causes phenomenal experiences you can replace those things with something that doesn’t produce those experiences yet still hooks up in the right way to all the things that don’t do phenomenal experience but ultimately implement behaviour. This may not be a slam-dunk argument, but it seems to me to be difficult to imagine how this isn’t a consequence of a neural story.

    At some point, we will have things that simply take the energy output from the brain and neurons and activate, but that aren’t credibly things that produce phenomenal experience. If we take out the neurons and replace everything with things that we have no reason to think can, in and of themselves, result in phenomenal experience, then what reason do we have for thinking that that thing would have phenomenal experiences? And yet the third-person behaviour would be indistinguishable. That’s epiphenomenal.

    So, no, I’m not smuggling in dualism. I accept that phenomenal experiences, in the mind is brain theory, are the result of the firings of at least some neurons (there are some neural firings, such as for the automatic functions, that don’t in and of themselves produce what we call phenomenal experiences). That’s exactly what my argument is relying on: while it might be credible to say that phenomenal experiences are the result of neural firings, if I take out all of those neurons and replace them with simple microprocessors it’s no longer credible to think that phenomenal experiences are the result of THEIR operations as well … and yet neuroscience tells us that the behaviour, if these are set up properly, will be identical. Thus, we have epiphenomenalism.

  6. Havok Says:

    VS, I think you’re still doing it – assuming dualism or epiphenomenalism. Perhaps I’m not being clear?

    VS: At some point, we will have things that simply take the energy output from the brain and neurons and activate,

    Well, that is ALL that neurons do, so I agree that this ought to be possible to do.

    VS: but that aren’t credibly things that produce phenomenal experience.

    Here you seem to be assuming that phenomenal experience is something other than taking input from the firings of other neurons and activating – this assumption is what I object to.

    Let me try a (likely bad) analogy:
    A computer is nothing but electrical signals, either high or low. There is nothing (at this level) corresponding to the World of Warcraft “virtual realm”. You are assuming that we could conceivably replace the computers which do respond to electrical signals, either high or low, and produce the virtual realm, with computers which respond in the same way to the electrical signals, with the same inputs and outputs, but which do not also generate this virtual realm. But this is nonsensical, because it is the responding to the electrical signals which is responsible for the virtual realm.

    In the same way, you are claiming we can replace neurons with artificial devices which respond in the exact same manner to inputs, and generate the exact same output, but which don’t produce phenomenal experience – but the phenomenal experience (it seems to me) is a result of the (myriad) firings of neurons and nothing more, so if you were to replace a neuron with something that behaved identically, the experience would be preserved, as the behaviour is preserved. If you replace a cluster of neurons with something that behaves identically to the neurons it replaced, then this thing’s internal processing surely must be basically the same as the processing of the cluster of neurons, and therefore phenomenal experiences and behaviour will be preserved.

    If we look at things from a higher level, where phenomenal experience is a property, rather than at the level of neurons firing, I think we can see that the experiences are no less inputs to and outputs of our behaviour than other cognitive functions (I think you reference such a distinction in another post of yours, but I don’t see the reason for such a hard distinction).

    Have I made things clearer, or muddied the waters further?

  7. verbosestoic Says:

    Well, see, my point was that I could indeed replace anything in the brain itself that you thought would be involved in producing phenomenal experiences and get the same external behaviour. So, I could replace everything after the cornea — or even the cornea itself — and everything up to the specific nerves or whatever they are that activate the voice box and when you see a red image you would still say “I see a red object”. That is paradigmatically epiphenomenal, but it doesn’t seem to be smuggling dualism in. Surely you’d agree that the phenomenal experiences we have are part of some mechanism in the brain, a mechanism that isn’t likely to be in the sort of small microprocessors that I suggest could be the replacements?

    To use your WoW example, my claim is more like this: I claim that the math co-processor in the Pentium chips is epiphenomenal to WoW behaviour because I can turn it off and the game will still play exactly the same as it did when it was on. The WoW game behaviour is what I’m claiming remains the same with my substitutions, but is not itself analogous to phenomenal experience. Thus, you can align “WoW game” with “external behaviour”, and “math co-processor” with “phenomenal experience”.

  8. Havok Says:

    VS: Well, see, my point was that I could indeed replace anything in the brain itself that you thought would be involved in producing phenomenal experiences and get the same external behaviour.

    Like neurons and neuronal interactions, right?
    You reproduce it functionally, from the neuronal level on up.
    Got it.

    VS: So, I could replace everything after the cornea — or even the cornea itself — and everything up to the specific nerves or whatever they are that activate the voice box and when you see a red image you would still say “I see a red object”.

    If you replace it with a simple recogniser, then there would likely be no recognisable phenomenal experience. But if you replace it with something that actually carries out the same information processing that my brain currently does, including all the feedback loops, etc, then I would claim that you’re also getting phenomenal experience.

    VS: That is paradigmatically epiphenomenal, but it doesn’t seem to be smuggling dualism in.

    It is exactly smuggling it in, because you are assuming that phenomenal experience is something other than information processing. You’ve assumed from the start that neurons consume inputs and produce outputs AND produce phenomenal experiences. But on a mind as brain hypothesis, the outputs are what contribute to the phenomenal experiences (along with the resulting behaviour). They’re nothing more than information processing. You assume they are something other than information processing, and therefore you assume that you can simulate the information processing without also simulating the phenomenal experience.

    VS: Surely you’d agree that the phenomenal experiences we have are part of some mechanism in the brain.

    That “some mechanism in the brain” is nothing more than the interactions of multitudes of neurons. If you simulate the information processing that goes on during those interactions, then you’re going to get phenomenal experiences – how could it be otherwise if the mind is the brain?

    VS: a mechanism that isn’t likely to be in the sort of small microprocessors that I suggest could be the replacements?

    This “mechanism” isn’t likely in individual neurons either. It’s behaviour that emerges from the relationships and interactions, not from the “hardware”, in the same way that WoW emerges from the interactions of millions of players, not from the binary logic of their computers (though it is reducible to them).

    It is the assumption of a qualitative difference between phenomenal experience and other brain/cognitive functions that I object to – it is the part of your argument that gets you to dualism or epiphenomenalism, and it is this part of your argument that I don’t agree with and that you have not demonstrated.

  9. Havok Says:

    Sorry – blockquote fail in the above comment.

  10. verbosestoic Says:

    Well, now it’s time to bring in “qualia”, which is the details of the actual experience you have. Where do you think qualia comes in on your picture? For me, without qualia you don’t have phenomenal experience — which is, I think, fairly uncontroversial — and you don’t have consciousness — which might be controversial. So, you need to tell me where you think qualia is in your model, as we have no reason to think that the things I replace the neurons with are themselves capable of qualia, even when hooked up properly.

    Do you think you can define qualia by the behaviour produced by the information processing? Then you’re a functionalist … but I’ve already pointed out how you can produce the same behaviour while having different phenomenal experiences. You can’t rely on “really, really complex information processing”, because that won’t even produce the right sort of behaviour, and so you need to link it back to the specific information processing used to produce conscious behaviours. And that’s functionalism.

    Otherwise, you can try to go about it by saying that you have to produce the entire structure of the brain, but then you run into the “alien” example above, where you deny that they have phenomenal experiences because their brain isn’t like ours. That’s not good, either.

    Ultimately, the mind as brain theory has some traction because we can say that there’s something about neurons that gives rise to phenomenal experiences and qualia and have that be reasonable. But the links we seem to have to our nerves and the like mean that something that reproduces the electrical impulses but that we don’t think is capable of qualia can produce the same external behaviour without doing qualia and without doing phenomenal experiences. To battle this, you need to state clearly where you think qualia comes from in the brain … and, for the reasons I just gave, you can’t say that it is a property in any way of the neurons themselves.

  11. Havok Says:

    VS: So, you need to tell me where you think qualia is in your model, as we have no reason to think that the things I replace the neurons with are themselves capable of qualia, even when hooked up properly.

    You seem to be assuming qualia are non-physical here VS – assuming dualism again.
    You’re assuming that the information processing does not take the qualia and phenomenal experiences as inputs and produce them as outputs, but I’ve said that if the mind is the brain, this surely must be the case.

    I still think you’re unable to pry yourself away from dualism when formulating your arguments.

  12. verbosestoic Says:

    No, I’m not assuming that qualia are non-physical. I pointed out that the mind as brain theory has some traction here by saying that qualia is part of what neurons do. But I don’t see where qualia comes in in your model at all. I certainly DO think that qualia can be the input to, at least, an information processing model and potentially one of the outputs, as that’s what dualism basically posits. Again, what I’m missing is where exactly you think qualia happens in your model … and, really, what your model actually is.

  13. Havok Says:

    VS: I pointed out that the mind as brain theory has some traction here by saying that qualia is part of what neurons do.

    And would therefore be a part of whatever devices you were replacing the neurons with. If you accept that qualia could be physical, then by replacing a neuron with a functional clone, the outputs would be the same, and therefore qualia and phenomenal experiences would still occur, as would the same information processing generally.

    VS: But I don’t see where qualia comes in in your model at all.

    Qualia, phenomenal experience, and general cognitive function seem to me to be all the “same thing”. I think you’re assuming some qualitative differences here.

    VS: I certainly DO think that qualia can be the input to, at least, an information processing model and potentially one of the outputs, as that’s what dualism basically posits.

    So where is the problem? If qualia can be inputs to information processing, then to reproduce the functionality of the information processor, qualia need to be input. If qualia can be outputs from information processing, then to reproduce the functionality of the information processor, qualia will be output.
    Unless you can demonstrate that qualia and phenomenal experience, etc, are inherently non-physical, I really don’t see where the problem lies.

    VS: Again, what I’m missing is where exactly you think qualia happens in your model … and, really, what your model actually is.

    I’m being intentionally vague in my model, so as to cover “mind as brain” hypothesis generally.
    As to where qualia happens – the same place “normal” cognitive function takes place – from an information processing system.

  14. verbosestoic Says:

    And would therefore be a part of whatever devices you were replacing the neurons with. If you accept that qualia could be physical, then by replacing a neuron with a functional clone, the outputs would be the same, and therefore qualia and phenomenal experiences would still occur, as would the same information processing generally.

    That’s only if you presume that the “qualia” is in the electrical output of the neuron that activates the next one. We have absolutely no reason to think that’s the case, and again I pointed out that we could replace things up to the cornea and down to the nerve endings that activate the voice box and we’d still have the same behaviour. The things that are left cannot be involved in producing qualia, and again we have no reason to think that the microprocessors themselves have it.

    In short, the mind as brain theory has traction by saying that it is a property of NEURONS that we get qualia from them. If we replace the neurons with something else, there is no guarantee that THOSE things will have qualia as a property, and so you cannot simply assume that there will still be qualia. My thought experiment was explicitly saying that we replace neurons with things that we don’t think have that property, at least in and of themselves, which neatly skewers the “mind as brain” theories that rely on the properties of neurons. And this, then, will return us to just precisely what theory you have.

    Qualia, phenomenal experience, and general cognitive function seem to me to be all the “same thing”. I think you’re assuming some qualitative differences here.

    You, then, need to read this page:

    https://verbosestoic.wordpress.com/phenomenal-experience-and-cognitive-function/

    It’s where I argue that while qualia and phenomenal experience are the same thing, that doesn’t map to general cognitive function. Even in this one I argue for why you can have general cognitive function without having at least the right sorts of qualia.

    So where is the problem? If qualia can be inputs to information processing, then to reproduce the functionality of the information processor, qualia need to be input. If qualia can be outputs from information processing, then to reproduce the functionality of the information processor, qualia will be output.
    Unless you can demonstrate that qualia and phenomenal experience, etc, are inherently non-physical, I really don’t see where the problem lies.

    The other page outlines it in more detail, but basically I hold a model that qualia is indeed one sort of input that can produce representations, and the representations are what we use to produce our behaviour — and do our information processing — but that those representations can be produced by things that are not qualia. And if that’s the case, then you can’t assume that just because the information processing is happening that qualia is either an input or an output of it. And if that’s the case, then we have no reason to think that there is qualia happening at all in the case where I’ve subbed in the microprocessors that we don’t think do qualia.

    I’m being intentionally vague in my model, so as to cover “mind as brain” hypothesis generally.
    As to where qualia happens – the same place “normal” cognitive function takes place – from an information processing system.

    The problem is that as shown in your last sentence you aren’t covering mind as brain hypotheses generally. My argument works quite well against “qualia is a property of neurons” forms, but may not work against the “information processing system” forms … but then you need to be clear and detailed about how qualia and the like come out of those sorts of systems, and, for example, if there are any sorts of information processing systems that don’t have qualia. If a computer does very complex information processing but gives no sign of having conscious behaviour, does it still have qualia? How could you tell, one way or the other?

  15. Havok Says:

    VS: That’s only if you presume that the “qualia” is in the electrical output of the neuron that activates the next one. We have absolutely no reason to think that’s the case

    You’re assuming that qualia are not or cannot result from merely the interaction of physical components (neurons, microprocessors, whatever). It’s not surprising that you find dualism more acceptable when you start from that position.

    VS: The things that are left cannot be involved in producing qualia, and again we have no reason to think that the microprocessors themselves have it.

    And you’re assuming that qualia are not a feature of the information processing. It seems to me that “4” and the sensation of calculating “2 + 2 = 4” are both the result of processing “2 + 2”. You have given no reason to think this is not possible, yet it seems to me that it is vital for your case.

    VS: In short, the mind as brain theory has traction by saying that it is a property of NEURONS that we get qualia from them.

    In the same way that it is a property of neurons that we can get “4” from them (by calculating “2 + 2”).

    VS: If we replace the neurons with something else, there is no guarantee that THOSE things will have qualia as a property, and so you cannot simply assume that there will still be qualia.

    I don’t think I am assuming that.

    VS: My thought experiment was explicitly saying that we replace neurons with things that we don’t think have that property, at least in and of themselves, which neatly skewers the “mind as brain” theories that rely on the properties of neurons. And this, then, will return us to just precisely what theory you have.

    No it doesn’t, since any mind as brain hypothesis has “qualia”, phenomenal experience as ordinary outputs from and inputs to processes, in the same way other inputs and outputs are. This means that if you replace a neuron or cluster of neurons with a functional clone (using microprocessors or whatever), then the outputs will be the same, which you accept. Those outputs will then contain “qualia”.

    VS: It’s where I argue that while qualia and phenomenal experience are the same thing, that doesn’t map to general cognitive function.

    Which I don’t think I accept. I have read that page, and I’d need to read it again, but I still think you’re assuming dualism there, as you are here.

    VS: Even in this one I argue for why you can have general cognitive function without having at least the right sorts of qualia.

    Well, a computer can calculate the answer to “2 + 2” without anything we might label “qualia”, but since the qualia are inputs to and outputs from processing, then I don’t see that you can have a functional replica without actually reproducing qualia and phenomenal experience.

    VS: but basically I hold a model that qualia is indeed one sort of input that can produce representations, and the representations are what we use to produce our behaviour — and do our information processing — but that those representations can be produced by things that are not qualia.

    I doubt that this is the case, and I don’t think you’ve adequately argued that this is the case.
    You seem to view qualia as being something “different”, but I really don’t see why that must be the case.

  16. verbosestoic Says:

    Havok,

    You’re assuming that qualia are not or cannot result from merely the interaction of physical components (neurons, microprocessors, whatever). It’s not surprising that you find dualism more acceptable when you start from that position.

    No, as already stated, I don’t assume that. I assume that it could at least potentially result from interactions between or activities of neurons. I don’t, as stated clearly in this page, even presume that we couldn’t build a microprocessor that could produce qualia. The most I presume is that I could find microprocessors that could take in electrical input and give out electrical output like neurons can but lack whatever it is that allows neural interactions to result in qualia. This seems a fairly safe presumption, unless you want to ascribe qualia to your PC [grin].

    And you’re assuming that qualia are not a feature of the information processing. It seems to me that “4” and the sensation of calculating “2 + 2 = 4” are both the result of processing “2 + 2”. You have given no reason to think this is not possible, yet it seems to me that it is vital for your case.

    Well, if you hold this level of theory then you must think that a pocket calculator has qualia. If you think this, then you have a radically strong idea of qualia that doesn’t seem to match up with what qualia actually is, which is the specific experience itself. There is no reason for you to even accept that pocket calculators have qualia other than to save your theory, which makes it a fairly bad one in terms of explanations for qualia and phenomenal experience.

    In the same way that it is a property of neurons that we can get “4” from them (by calculating “2 + 2”).

    Actually, no. The mind as brain theory has traction by saying that part of being a neuron is that it can produce qualia, and that if you hook neurons up the right way they can also do the information processing to calculate “2+2”. This allows the mind as brain theory to deny that anything that can calculate “2+2” must have phenomenal experiences — at least of the sort humans have and thus the kind we’re concerned about — while still claiming that we have them when we do that.

    I don’t think I am assuming that.

    Then how do you oppose my claim? You haven’t proven that there would still be qualia, and are simply asserting that it’s doing information processing and so there will be qualia. You did, in what I reply to, seem to be saying exactly that.

    No it doesn’t, since any mind as brain hypothesis has “qualia”, phenomenal experience as ordinary outputs from and inputs to processes, in the same way other inputs and outputs are. This means that if you replace a neuron or cluster of neurons with a functional clone (using microprocessors or whatever), then the outputs will be the same, which you accept. Those outputs will then contain “qualia”.

    No, that’s a functionalist theory, not a mind as brain theory. Functionalist theories say as long as you have conscious functionality the implementation doesn’t matter, meaning that you don’t even really need a “brain”. Mind as brain theories generally say that the implementation does matter. I also address functionalist theories in this page.

    Which I don’t think I accept. I have read that page, and I’d need to read it again, but I still think you’re assuming dualism there, as you are here.

    Reading more of this comment, I think the issue is that you’re more of a functionalist, and are conflating that view with the materialist/dualist debate. Functionalism is compatible with both materialism and dualism.

    Well, a computer can calculate the answer to “2 + 2” without anything we might label “qualia”, but since the qualia are inputs to and outputs from processing, then I don’t see that you can have a functional replica without actually reproducing qualia and phenomenal experience.

    Ergo, you’re a functionalist, not a “mind is brain” theorist. You don’t think that what it means to have a mind is to have a functioning brain, but think that what it means to have a mind is to be able to do certain functional things, that indicate the mental. Meaning that your “2+2” example is a bit off, since functionalists generally wouldn’t consider that necessarily conscious behaviour. Acting like you’re in pain would be more relevant. So, we can try to distinguish between behaviours that indicate conscious functionality — and thus “qualia” — and behaviours that don’t, and so eliminate the pocket calculator and the desktop computer from being conscious, while indeed still preserving the thought experiment I raised about replacing the neurons with functional equivalents. But then my comments about the inputs and my defeat of functionalism above come into play: I can act as if I am having phenomenal experiences that I am, in fact, not actually having, as per the phenomenal knave. And if that’s the case, then it cannot be the case that what it means to have qualia is to simply produce the right external behaviour, to act or function as if I am having that specific qualia. And then I can ask what functionality you think you can appeal to to demonstrate qualia that isn’t me actually experiencing qualia, which you can’t get from outside my head. And if that’s the case, then what it means to be qualia — even under a functionalist model — depends greatly on the properties of my actual experiences. And it is those actual experiences that suggest dualism. And none of the functionalist replies help when we get into the neuroscience and point out that the neural chains are causally closed, and so there seems to be no room for the causation of qualia that we seem to directly experience.

    I doubt that this is the case, and I don’t think you’ve adequately argued that this is the case.
    You seem to view qualia as being something “different”, but I really don’t see why that must be the case.

    Well, to claim that I have not adequately argued it, you must address the arguments.

  17. Havok Says:

    VS: The most I presume is that I could find microprocessors that could take in electrical input and give out electrical output like neurons can but lack whatever it is that allows neural interactions to result in qualia.

    Which is exactly where you are assuming dualism.
    On a mind as brain hypothesis, qualia would be nothing more than electrical outputs and inputs of neurons. You are assuming that this is false.

  18. Havok Says:

    Sorry, still riandouglas – wordpress is mixing up my nickname.

    VS: I can act as if I am having phenomenal experiences that I am, in fact, not actually having, as per the phenomenal knave.

    But in doing so you would be imagining what it would be like to have that phenomenal experience, and so you would indeed be having a sort of second hand phenomenal experience.

  19. verbosestoic Says:

    On a mind as brain hypothesis, qualia would be nothing more than electrical outputs and inputs of neurons. You are assuming that this is false.

    No, I accept that. I point out that those microprocessors are not, in fact, neurons, and so the mind as brain hypothesis in and of itself cannot assume that the electrical inputs and outputs of THEM are also qualia.

    Additionally, qualia is defined as being the actual experience of, say, a colour, not as being electrical inputs and outputs. You need to make a link between those outputs and the experience. I’m willing to grant it for the sake of argument to neurons, but not to microprocessors like you’d find in my laptop.

    But in doing so you would be imagining what it would be like to have that phenomenal experience, and so you would indeed be having a sort of second hand phenomenal experience.

    I need to have an idea what the representation would be, but I don’t need to actually imagine that phenomenal experience as an image in my head. That’s the whole point of both these pages.

  20. Havok Says:

    VS: No, I accept that. I point out that those microprocessors are not, in fact, neurons, and so the mind as brain hypothesis in and of itself cannot assume that the electrical inputs and outputs of THEM are also qualia.

    Well, since you’ve defined the inputs and outputs of the neurons and the replacement devices as identical, I don’t see what other choice I would have other than to conclude that the identical outputs produce the identical sensations. Without already assuming dualism, or demonstrating that qualia are not or cannot be physical in nature, there seems to be no other option available if we’re assessing the mind as brain.

    Additionally, qualia is defined as being the actual experience of, say, a colour, not as being electrical inputs and outputs.

    The experience is built up from the electrical inputs and outputs – the overall firing patterns of the neurons.

    I’m willing to grant it for the sake of argument to neurons, but not to microprocessors like you’d find in my laptop.

    If you’re willing to grant it to neurons when considering the mind as brain, why would you not consider granting it to functionally identical devices? You seem to be assuming some sort of dualism for qualia.

    I need to have an idea what the representation would be, but I don’t need to actually imagine that phenomenal experience as an image in my head. That’s the whole point of both these pages.

    You need to have an idea of what it would be like to experience the phenomenal experience, so you can then “simulate” your response to it, and then behave as you believe you would had you experienced that experience. It seems to me that this requires a phenomenal experience (a sort of second-hand one, not directly tied to sensory input) to be had in order to actually generate the behaviour.

  21. verbosestoic Says:

    Well, since you’ve defined the inputs and outputs of the neurons and the replacement devices to be identical, I don’t see what other choice I would have other than to conclude that the identical outputs produce the identical sensations. Without already assuming dualism, or demonstrating that qualia are not or cannot be physical in nature, there seems to be no other option available if we’re assessing the mind as brain.

    Well, the first answer is that I’ve only defined the electrical inputs and outputs as being the same, and you don’t know — and, in fact, have little reason to believe — that THAT part of the process is what is producing sensations. The second answer is that the sensations might be part of the process — ie part of what the neurons are doing — and not just in the inputs and outputs, like mathematical co-processing is. All in all, if you put this all together it becomes clear that you aren’t talking about assessing the “mind as brain” anymore, but are aiming for a functionalist approach … except that you haven’t defined how you are going to determine what functionality indicates qualia.

    The experience is built up from the electrical inputs and outputs – the overall firing patterns of the neurons.

    By what criteria, then, do you judge that these “overall firing patterns” produce experiences at all? How could you tell the difference between Chalmers’ zombie and someone with real experiences?

    If you’re willing to grant it to neurons when considering the mind as brain, why would you not consider granting it to functionally identical devices? You seem to be assuming some sort of dualism for qualia.

    I’m willing to grant it to neurons for the sake of argument because when we stick neurons together we tend to see behaviour that indicates phenomenal experience is happening, and it seems that those things are involved in the only case I have where I absolutely KNOW that experience is happening (mine). I reject it for these microprocessors because I don’t see any reason to think that things that have these things in them — like pocket calculators and my laptop — do anything like phenomenal experiences when they are activated. You have yet to address this point.

    You need to have an idea of what it would be like to experience the phenomenal experience, so you can then “simulate” your response to it, and then behave as you believe you would had you experienced that experience. It seems to me that this requires a phenomenal experience (a sort of second-hand one, not directly tied to sensory input) to be had in order to actually generate the behaviour.

    This does not seem to be the case. I merely need the knowledge of what that experience is supposed to be — ie the representation — and then I can reason out the response, as indicated in the various thought experiments in the other page.

  22. riandouglas Says:

    Well, the first answer is that I’ve only defined the electrical inputs and outputs as being the same, and you don’t know — and, in fact, have little reason to believe — that THAT part of the process is what is producing sensations.

    Actually, I have very good reasons to think that is the case – the absurdity of dualism and the apparent causal closure of the universe appear to give me all the reason I need to accept, provisionally, that this is the case.

    The second answer is that the sensations might be part of the process — ie part of what the neurons are doing — and not just in the inputs and outputs, like mathematical co-processing is.

    If it is a part of what the neurons are doing, then your replacement clones, defined as they are to be functionally identical, will be doing it.
    I don’t see what “co-processors” have to do with this.

    except that you haven’t defined how you are going to determine what functionality indicates qualia.

    Except that I don’t need to – I simply need to show that your claim to have nailed phenomenal experience as either epiphenomenal or dualistic fails. And I believe I have done that.

    By what criteria, then, do you judge that these “overall firing patterns” produce experiences at all?

    I don’t have any useful criteria at present.

    How could you tell the difference between Chalmers’ zombie and someone with real experiences?

    I don’t think zombies in that sense are possible, since to mimic a brain without having “real experiences” seems to me to be nonsensical.

    I reject it for these microprocessors because I don’t see any reason to think that things that have these things in them — like pocket calculators and my laptop — do anything like phenomenal experiences when they are activated. You have yet to address this point.

    I thought I had. We don’t seem to see them for laptops etc, because they are rather simple networks of information processing units. The brain (even the smaller brains of other animals) is rather more complex, and more complexly interconnected, than any network of silicon processors that I can think of.

    This does not seem to be the case. I merely need the knowledge of what that experience is supposed to be — ie the representation — and then I can reason out the response, as indicated in the various thought experiments in the other page.

    And once again you’re introducing dualism. You assume that the experience is something more than the representation. You don’t seem to accept that it might be possible for it to be nothing more than the representation in the brain, and that the “subjective” component of it (that pain is painful, red is red, etc) is due to the subject actually being the information process itself, rather than some outside observer (as your dualism would posit, I believe).

    And during this entire exchange, you don’t seem to have begun to address any of the problems with your own position. I’ve pointed out a few things which make substance dualism an extremely speculative option, and yet you are willing to entertain this as THE answer, even with the limited knowledge we currently have regarding neurology.

    That to me seems to be rather premature. To criticise one hypothesis because it does not have all the answers, while your own seems to provide none, doesn’t seem to be the best way in which to proceed.

  23. verbosestoic Says:

    Actually, I have very good reasons to think that is the case – the absurdity of dualism and the apparent causal closure of the universe appear to give me all the reason I need to accept, provisionally, that this is the case.

    First, your reply here would not address non-substance dualisms, which would argue that mind is not brain but that it is not non-physical either.

    Second, even if we toss dualism out of the picture, that does not in any way support your contention that it is the electrical inputs and outputs that define what it means to have qualia. Both functionalist and “inherent to neurons” theories are far more credible explanations, as they have less absurd results.

    Third, your contention means that in theory pocket calculators have qualia, which is far more absurd than any of the consequences of dualism.

    If it is a part of what the neurons are doing, then your replacement clones, defined as they are to be functionally identical, will be doing it.
    I don’t see what “co-processors” have to do with this.

    They’re functionally identical in that they take in the same inputs and produce the same outputs. They are NOT functionally identical in the sense of doing it the same way. The replacements, for example, are not biological and don’t use biochemical reactions to do that. They also have nothing like the physical structure of neurons. We have no reason to think that simply reproducing the electrical inputs produces qualia, and the processors do nothing else in any way like neurons. So we have little reason to think that the substitution results in qualia.

    The “mathematical co-processor” is my extension of your WoW example. If one processor uses a mathematical co-processor to produce the output, and another doesn’t, and there’s no difference in the output between them, then math co-processing, at least, is not required to produce the output. Neurons could have the equivalent of a math co-processor that actually does produce qualia and produces phenomenal experiences and might even use the results — although that seems unlikely given the structure of the brain — but the processors I add clearly wouldn’t and wouldn’t need it. So your reliance on the electrical inputs and outputs would falter if the best mind as brain theory really is that qualia is something that happens through the details of neurons.
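
    To put that in code, as a minimal and purely illustrative sketch (Python; the function names are mine, not anything from your example):

        # Two implementations with identical input/output behaviour.
        def add_with_coprocessor(a, b):
            # Routes the work through an internal helper, the "co-processor".
            def coprocessor(x, y):
                return x + y
            return coprocessor(a, b)

        def add_without_coprocessor(a, b):
            # Does the work directly, with no internal helper at all.
            return a + b

        # From the outside, nothing distinguishes the two:
        assert all(add_with_coprocessor(a, b) == add_without_coprocessor(a, b)
                   for a in range(10) for b in range(10))

    Identical outputs at the interface, so the presence or absence of the internal step is invisible from the outside; that is exactly why identical electrical outputs license no conclusions about what the internals are doing or producing.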

    Except that I don’t need to – I simply need to show that your claim to have nailed phenomenal experience as either epiphenomenal or dualistic fails. And I believe I have done that.

    No, you do need to do that if your only response to my complaints is to introduce a model where you have no way of identifying qualia in the first place, since I will then simply raise the objection that for you there is no qualia or phenomenal experience AT ALL, which is far worse.

    Note, again, that I do not claim that phenomenal experience is either epiphenomenal or dualistic. The “nailing” in this page — as it states clearly — is that phenomenal experience CANNOT be judged at the third-person or even second-person level, but can only be studied at the first-person level. The epiphenomenal part is just one way to get at that, pointing out that we can look at neurons to try to find phenomenal experience, but since the mind as brain theory implies that you can have similar outputs with radically different phenomenal experiences, all you could do is look at the structure of the brain itself, and not the functionality … which then fails the instant you get something that acts conscious but doesn’t actually have a brain.

    I don’t think zombies in that sense are possible, since to mimic a brain without having “real experiences” seems to me to be nonsensical.

    But you need an argument for that, not merely an assertion. The frustrating thing about this discussion so far has been that you rely overmuch on assertions and less on arguments. Since you don’t even know where in your model phenomenal experience fits, you really have no reason to think that zombies in that sense aren’t possible.

    Note that under my view — as expressed in the other page — I CAN rule out Chalmers’ zombies, by arguing that phenomenal experience is an input — but only one specific type of input — so that if you simply take it away you need something else to actually produce the representations, which would mean adding something to the organism, which means it won’t be physically identical. I can do this DESPITE, as you know, preferring a dualistic overall mindset. Can you do anything like this?

    I thought I had. We don’t seem to see them for laptops etc, because they are rather simple networks of information processing units. The brain (even the smaller brains of other animals) is rather more complex, and more complexly interconnected, than any network of silicon processors that I can think of.

    But why would increasing the complexity cause an actual experience to appear? And then are you denying that lower-order animals with far less complex interactions and information processing have experiences? The evidence suggests otherwise: computers don’t have anything like experience no matter how much information they process, while lower order animals feel things like pain even if their information processing is quite limited. You really, really need to define where qualia comes in, even as a hypothesis, so that we can shake out the consequences to see how credible it is. Your response here boils down to “I don’t think computers can have qualia, but if you plunk even the most simplistic things into a brain-like structure then they magically will”, but we have no reason to think that it is the structure of the neural net that does the work. Especially since few people accept that connectionist systems have any shot at actually having qualia, and yet they are closer to the structure of the human brain in terms of complexity than the brains of many animals.

    And once again you’re introducing dualism. You assume that the experience is something more than the representation. You don’t seem to accept that it might be possible for it to be nothing more than the representation in the brain, and that the “subjective” component of it (that pain is painful, red is red, etc) is due to the subject actually being the information process itself, rather than some outside observer (as your dualism would posit, I believe).

    Recall your initial point, which was that having the representation of the red car so that it could be acted on REQUIRED having that actual experience of qualia. My argument has been based on it being the case that I AS A SUBJECT don’t need that; I can have a representation of a red car that allows me to act appropriately without having actually experienced it. Thus, I can read the description and come to know all the things I need to know to act appropriately to it, such as buying gray seat covers because gray goes well with red. So my argument is based not on a presumption of dualism, or even on dualism at all, but instead on an examination of MY ACTUAL EXPERIENCES. This distinction is made even more clearly for me because my visualization is so incredibly poor that I am not only unlikely TO translate the text into an image, but even if I did, CRITICAL COMPONENTS WOULD BE MISSING, which would leave me unable to react appropriately to the representation. So, again, nothing to do with dualism, but everything to do with how it seems people can actually react. In a sense, I’m actually relying on real experiments and data here, and your reply is to simply assert that my actual experiences somehow are not actually happening or are not possible.

    And during this entire exchange, you don’t seem to have begun to address any of the problems with your own position. I’ve pointed out a few things which make substance dualism an extremely speculative option, and yet you are willing to entertain this as THE answer, even with the limited knowledge we currently have regarding neurology.

    That to me seems to be rather premature. To criticise one hypothesis because it does not have all the answers, while your own seems to provide none, doesn’t seem to be the best way in which to proceed.

    I don’t see any problems you’ve brought up that I have not addressed, or at least accepted as problems. I am not, in fact, criticizing one hypothesis because it does not have all the answers while preferring one that also does not provide all the answers. What I am doing is criticizing one set of hypotheses for claiming to be explanations and yet not explaining in any way what to me is actually the thing I want an explanation for, and in fact is dismissing those things as irrelevant. I’m a qualia freak; to me, to be conscious is nothing more than having phenomenal experiences. Saying, then, that you’ll explain that part later rightly makes me wonder how you think you’re explaining consciousness if you not only can’t explain qualia, but are advocating a theory that seems to contradict our actual experiences of qualia. Thus, until your theory can actually even give a hypothesis for how we get qualia I think I’m quite right to argue that it hasn’t explained consciousness at all BY MY STANDARDS. As stated in this page, if you want to accept that consciousness does not require qualia and so the question of qualia is a separate question, that’s perfectly all right. I’ll be more than happy, then, to treat them as separate questions and work on the one that interests me (qualia). It’s claiming that you can explain consciousness when it includes qualia without having any idea how to get qualia and ignoring what we know about qualia that’s invalid.

  24. riandouglas Says:

    First, your reply here would not address non-substance dualisms, which would argue that mind is not brain but that it is not non-physical either.

    When I used the term dualism in that sentence I was intending to specifically address your substance dualism claims.
    I’m not sure why property dualism and the like is a problem for me, since they basically reduce to the mind being the brain as far as I can tell, in the same way biology reduces to physics (I suppose there’s always the chance one could demonstrate non-reducible, top down emergent behaviour, or what have you, but I don’t see evidence of it at present).

    Both functionalist and “inherent to neurons” theories are far more credible explanations, as they have less absurd results.

    Yet one of your thought experiments involves replacing neurons with a device which mimics the neuron. If you mimic the neuron, then I don’t see how an “inherent to neurons” theory can be true.
    In fact, I don’t see how an “inherent to neuron” theory can be true, as there seems to be nothing “magical” about them. Perhaps some of their behaviour cannot be duplicated by a discrete component, since they are, I believe, analog in nature, but an analog component ought to be able to mimic one.

    Third, your contention means that in theory pocket calculators have qualia, which is far more absurd than any of the consequences of dualism.

    My contention does not lead to that conclusion. You keep wanting to introduce qualia at the lowest possible level – at the neuronal level, or imputing qualia to pocket calculators. I don’t see how anything I’ve mentioned entails such ridiculous conclusions.

    We have no reason to think that simply reproducing the electrical inputs produces qualia, and the processors do nothing else in any way like neurons. So we have little reason to think that the substitution results in qualia.

    Well, we’re not duplicating just the electrical inputs and outputs. We’re having to reproduce the chemical inputs and outputs, etc – all the ways in which a single neuron interacts with the rest of the brain (and body).

    Neurons could have the equivalent of a math co-processor that actually does produce qualia and produces phenomenal experiences and might even use the results — although that seems unlikely given the structure of the brain — but the processors I add clearly wouldn’t and wouldn’t need it.

    Once again you’re assuming that qualia are something other than the interactions of neurons. The math coprocessor analogy fails, since there is no “special” quality being introduced by it – it simply allows faster processing of certain operations.

    To actually address the WoW analogy, it seems to me that you’d need to show me why I should expect the Orcs to be present in a single register or memory location (analogous to qualia being inherent to neurons), or why the Orcs “exist” independently of the computers running and playing the game (substance dualism). My contention is that the Orcs are a higher level emergent property of, but completely reducible to, the operation and interaction of the computers running and playing the game (and their individual components).
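
    To sketch the contention, a minimal and purely illustrative bit of Python (the state variables are invented for the example):

        # No single register "is" the Orc; it exists only as the coordinated state
        # of several components, yet is completely reducible to those components.
        client_state = {"orc_position": (10, 42)}
        server_state = {"orc_hit_points": 100}
        render_state = {"orc_sprite": "orc_green.png"}

        def the_orc():
            # The "Orc" is just these lower-level states taken together.
            return {**client_state, **server_state, **render_state}

    Nothing over and above the components, but also nothing you can point to in any single one of them.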

    No, you do need to do that if your only response to my complaints is to introduce a model where you have no way of identifying qualia in the first place,

    No way in practice does not mean no way in theory. I see no reason to think that, should brain scanning become high-resolution enough and our understanding of neurons and neuron networks become detailed enough, we couldn’t deduce that a person is having experience Y. Granted that would tell us that they’re having experience Y, but not what it was like to have experience Y (that sort of thing seems covered by the privileged access we have, being the operation of our brains, rather than being removed and simply viewing the operation of a brain).

    since I will then simply raise the objection that for you there is no qualia or phenomenal experience AT ALL, which is far worse.

    And I’ll simply reply that my general model does allow room for qualia and phenomenal experience, and for such experiences to simply be an emergent property of the brain functioning (and could, in theory, also be present in non-human brains, and in non-brains).

    but can only be studied at the first-person level.

    Though you’re probably right with regards to what it’s like (though even there I’m a little skeptical), I disagree regarding questions of what you’re experiencing.

    but since the mind as brain theory implies that you can have similar outputs with radically different phenomenal experiences, all you could do is look at the structure of the brain itself, and not the functionality.

    And since such a claim actually flies in the face of any mind as brain hypothesis, I don’t see that you’ve made this case. This point turns on the claim that phenomenal experiences are not (cannot be?) a result of neuronal interaction (I’ve called it smuggling in dualism). I’ve tried to point this out to you a number of times now, but I’m obviously failing, since you continue to do it.

    But you need an argument for that, not merely an assertion.

    I thought I’d provided some sketches.
    Anyway, with anything which is sufficiently complicated to do everything that brains do minus the phenomenal experiences, you’re going to get (as far as I can tell) phenomenal experiences. They seem to me to be nothing more than a result of sufficiently complicated, arranged and interconnected information processing systems hooked together, including massively complicated feedback loops.
    If you want me to tell you HOW this is the case, I can’t. But neither have you shown that this IS NOT possible.

    Can you do anything like this?

    I believe I can (and have). Phenomenal experience, like the other results of the information capture and processing tasks our brains carry out, is used as an input to other processes, fed back to the same processes, etc.
    If you were to take away the “phenomenal” output/representations, you would not have the same organism (and in fact, the processing capacities of the organism would be impaired, since you have removed apparently vital feedback mechanisms, representations, etc).

    But why would increasing the complexity cause an actual experience to appear?

    The “experience” would not be some qualitatively different thing, as I’ve tried to point out to you before. It’s simply a part of the complex information processing network(s) that make up the brain.
    The “experience” appears when the information processing networks contain sufficient self-monitoring. It is something within the “brain”, not something external – only the brain “experiences” it, since it has privileged access to itself.

    And then are you denying that lower-order animals with far less complex interactions and information processing have experiences?

    Some of them likely do not. There is also probably no solid delineation between having and not having experiences.

    Your response here boils down to “I don’t think computers can have qualia, but if you plunk even the most simplistic things into a brain-like structure then they magically will”,

    No. I think computers could have qualia, if they were complex enough and arranged in the right way. As I’ve mentioned before, perhaps we need analog processors, rather than digital. Perhaps there are some other concerns here as well (though none spring to mind right now).

    but we have no reason to think that it is the structure of the neural net that does the work.

    I pointed out above exactly why we have very good reasons to think that it is the structure of the neural net that does the work – there do not appear to be any other candidates available (and we seem to have looked quite extensively).
    I’m willing to entertain alternatives, but not the ridiculous.

    Recall your initial point, which was that having the representation of the red car so that it could be acted on REQUIRED having that actual experience of qualia.

    You experience the representation of the red car – I don’t see any other way around that (unless you posit it as a completely unconscious activity, beneath the levels at which the brain becomes aware of what it is doing, in which case you likely would not act as if you were seeing a red car, since you would have no conscious representation of it, and no experience of that representation, etc.).

    Thus, I can read the description and come to know all the things I need to know to act appropriately to it, such as buying gray seat covers because gray goes well with red.

    You have an experience/representation of the red car, imagining what the gray seat covers would look like in it (another experience/representation of the red car, this time with seat covers in something like that shade of gray) etc.

    So my argument is based not on a presumption of dualism, or even on dualism at all, but instead on an examination of MY ACTUAL EXPERIENCES.

    And yet you seem to be assuming that your experiences of having representations are not experiences in themselves. I find that confusing, since it seems that I experience my thoughts, including thoughts of red cars with gray seat covers (in fact, just reading your description, I had an experience of what that would/might look like in reality).

    This distinction is made even more clearly for me because my visualization is so incredibly poor that I am not only unlikely TO translate the text into an image

    But you would still represent the textual description as something in your mind, and that would be an experience.

    but even if I did, CRITICAL COMPONENTS WOULD BE MISSING, which would leave me unable to react appropriately to the representation

    Critical components such as the actual shades of the colours likely would not be missing (since that is the target of your representation – matching colours). I don’t think the level of detail you are able to muster undermines my position, and I still see no realistic alternatives.

    So, again, nothing to do with dualism, but everything to do with how it seems people can actually react.

    Perhaps not dualism, but you are certainly still claiming that qualia/phenomenal experiences are qualitatively different than information processing tasks (which you accept neurons can and do do).

    In a sense, I’m actually relying on real experiments and data here, and your reply is to simply assert that my actual experiences somehow are not actually happening or are not possible.

    Not at all. I don’t doubt that, as an information processing network, representations are generated, and that they are “observed”, so to speak, by other parts of the information processing system. It is the feedback loops, or observations, which lead you to “feel” these experiences, as far as I can tell.

    I’m a qualia freak; to me, to be conscious is nothing more than having phenomenal experiences.

    Which I think can be explained without resorting to substance dualism, without going beyond the mind as being what the brain does, and without them being epiphenomenal.

    Thus, until your theory can actually even give a hypothesis for how we get qualia I think I’m quite right to argue that it hasn’t explained consciousness at all BY MY STANDARDS.

    And I think I’ve given a brief sketch of how qualia might be explained by referencing nothing more than brain activity. I’ve also explained that I think you’re mistaken in thinking that qualia must be qualitatively different to other “cognitive” activities.

    I still don’t understand how or why you accept substance dualism when it has far more problems than other positions (including, as far as I can tell, my own).

  25. verbosestoic Says:

    Sorry for the delay, but I’ve been chasing other things around and it’s a long comment so it takes a while to write a response … and, of course, time flies.

    Anyway, the first thing I need to address is that you and I are talking about two different things when we talk about “mind as brain” theory. You use it, essentially, to mean any materialist theory of mind, and so any theory that isn’t dualist. I, on the other hand, mean specifically neurological theories, such as those of Jaegwon Kim and probably the Churchlands, where we can tell mind — and likely can only tell mind — by the operations of brains specifically. I DON’T mean functionalist theories, and I don’t mean emergentist theories. So, when I say that the “mind as brain” theory is epiphenomenal, I DON’T mean that functionalist theories are epiphenomenal. They aren’t. THEIR problem is that they don’t seem to be able to find any third-person observable, implementation-independent criteria for phenomenal experience that isn’t contradicted by our actual phenomenal experiences, meaning that the functionality tracked can be there even if we aren’t having those phenomenal experiences … or having any at all.

    That’s why I accuse you of moving to a functionalist view from a “mind as brain” view, and why that likely confuses you. When you start talking “complex information processing with feedback loops” as if that refutes the “epiphenomenal” point, to me that’s defining a set of FUNCTIONAL criteria, and OF COURSE that will escape epiphenomenalism, at the price of abandoning the specifics of the brain (as functionalism is implementation independent). And you’d also have to deal with the problems functionalism has. Here, you retreat in my view to emergentism, even though my objections aren’t raised against that specific view.

    So I hope this helps to clarify what’s fitting into what here.

    When I used the term dualism in that sentence I was intending to specifically address your substance dualism claims.
    I’m not sure why property dualism and the like is a problem for me, since they basically reduce to the mind being the brain as far as I can tell, in the same way biology reduces to physics (I suppose there’s always the chance one could demonstrate non-reducible, top down emergent behaviour, or what have you, but I don’t see evidence of it at present).

    Ah, here my comment was an aside aimed at a position that I’m probably mostly in at the moment, where I claim that mind and brain are not the same thing and that they interact, but I don’t really care what “substance” the mind really is. It’s a separate entity, but it doesn’t have to be non-physical. If this is the case, then most of the problems of dualism go away, and yet I still get to maintain most of the reasons I prefer a separate mind and brain. The big push for this is that the definition of what it means to be “physical” has become so broad as to be synonymous with “existent”, and I see no reason why I should be committed to letting my position be defined out of existence, especially since most of the issues that Descartes posited would STILL lead us to question whether mind and body can be the same thing even with the new definition of physical.

    Yet one of your thought experiments involves replacing neurons with a device which mimics the neuron. If you mimic the neuron, then I don’t see how an “inherent to neurons” theory can be true.

    But I don’t mimic the neuron. I mimic its ROLE in the brain — mainly the electrical impulses — but it’s nothing like a neuron in terms of construction or in how it does what it does. That’s the difference here: there could be something IN NEURONS THEMSELVES that produces phenomenal experiences and what I do in that thought experiment is cut all of that out of the picture.

    In fact, I don’t see how an “inherent to neuron” theory can be true, as there seems to be nothing “magical” about them. Perhaps some of their behaviour cannot be duplicated by a discrete component, since they are, I believe, analog in nature, but an analog component ought to be able to mimic one.

    There’s nothing magical about having 2 hydrogen atoms and one oxygen atom in a molecule, but try getting water — and the properties and functions of water — without doing that [grin].

    My contention does not lead to that conclusion. You keep wanting to introduce qualia at the lowest possible level – at the neuronal level, or imputing qualia to pocket calculators. I don’t see how anything I’ve mentioned entails such ridiculous conclusions.

    As said here, I can do it as a combination of neurons or at the level of each neuron and still get the same result of it being epiphenomenal. I can build the pocket calculator out into a more complicated structure of these same little processors and the point will still be valid. Essentially, because YOU allowed the discussion to stay at the level of neurons, that’s how I make the link to the pocket calculator. If you want to stick more things together that’s fine with me, but then you have to tell me how to do it so I can evaluate it, remembering that it is clear that not ALL neural activations produce phenomenal experiences, so it isn’t just what the whole brain is doing all the time. That’s why I switched in this page between the neural and the module explanations; neither works well, and it seems to me that any attempt to combine them doesn’t work any better.

    Well, we’re not duplicating just the electrical inputs and outputs. We’re having to reproduce the chemical inputs and outputs, etc – all the ways in which a single neuron interacts with the rest of the brain (and body).

    Which means what, exactly? Let me be clear here: it is the fact that the brain is, if the neurological theory is right, CAUSALLY CLOSED that causes the issue, not the specifics of the biology.

    Once again you’re assuming that qualia are something other than the interactions of neurons. The math coprocessor analogy fails, since there is no “special” quality being introduced by it – it simply allows faster processing of certain operations.

    I’m not assuming it, but arguing against your contention that if I replace the neuron with a processor with the same links then it MUST also be doing it. The math co-processor indicates that that simply is not the case; the internal structure may do things differently and yet still produce the identical output at that level. Thus, YOU cannot assume that there is nothing inherent to neurons that allows them to produce phenomenal experience when connected in that way, and my thought experiment used things that we clearly don’t think have that ability right now no matter how you hook them up.

    No way in practice does not mean no way in theory.

    My complaint is that you can’t actually give me a way to do it in theory, not in practice. In theory meaning that you can tell me what WOULD indicate it even if we currently can’t do it.

    I see no reason to think that, should brain scanning become high-resolution enough and our understanding of neurons and neuron networks become detailed enough, we couldn’t deduce that a person is having experience Y. Granted that would tell us that they’re having experience Y, but not what it was like to have experience Y (that sort of thing seems covered by the privileged access we have, being the operation of our brains, rather than being removed and simply viewing the operation of a brain).

    And right here in this page I point out that that isn’t all that safe, but even if it was it would break down spectacularly with things that don’t have an actual, neural brain … like AIs.

    Also, if you can’t tell even for another human what their experience is like, how could you tell whether they are actually having one, or just having neurons that are no longer producing experiences going through the motions to produce the observable functionality?

    And I’ll simply reply that my general model does allow room for qualia and phenomenal experience, and for such experiences to simply be an emergent property of the brain functioning (and could, in theory, also be present in non-human brains, and in non-brains).

    What I need from you is how you can tell that that is happening, or else you can’t even do it in theory and so I am right to be skeptical that it fits in at all.

    And since such a claim actually flies in the face of any mind as brain hypothesis, I don’t see that you’ve made this case. This point turns on the claim that phenomenal experiences are not (cannot be?) a result of neuronal interaction (I’ve called it smuggling in dualism). I’ve tried to point this out to you a number of times now, but I’m obviously failing, since you continue to do it.

    And this is where you clearly misunderstand me, because I am — for the epiphenomenal argument — ASSUMING for the sake of argument that phenomenal experiences ARE the result of neural interaction, and pointing out that since the brain is causally closed and neural activation follows a strict causal chain from the light striking the retina, there seems to be no way for the brain to change that causal chain depending on whether red or green is the actual result (or product, which is what you jump on me for saying, but I fail to see how result means anything different).

    Anyway, with anything which is sufficiently complicated to do everything that brains do minus the phenomenal experiences, you’re going to get (as far as I can tell) phenomenal experiences. They seem to me to be nothing more than a result of sufficiently complicated, arranged and interconnected information processing systems hooked together, including massively complicated feedback loops.
    If you want me to tell you HOW this is the case, I can’t. But neither have you shown that this IS NOT possible.

    Again, you misunderstand me. I am not arguing impossible. I am arguing that you have absolutely no reason to think that this is the case, and so it isn’t a very PLAUSIBLE answer to the problems I’ve raised, even if it works. The main issue here, for example, is “how complicated is complicated?”; things with far less complication seem to have them, while things far more complicated than those things — hypercubes, for example — don’t.

    I believe I can (and have). Phenomenal experience, like the other results of the information capture and processing tasks our brains carry out, is used as an input to other processes, fed back to the same processes, etc.
    If you were to take away the “phenomenal” output/representations, you would not have the same organism (and in fact, the processing capacities of the organism would be impaired, since you have removed apparently vital feedback mechanisms, representations, etc).

    It turns out that I agree that phenomenal experience is an input to other processes that produce representations (see the other paper for that). And so I agree that if you take the input away, the organism would either be impaired or have to compensate. So we agree there. But my comment about the neurological theories is that what’s relevant is the neural outputs, and not necessarily the phenomenal experience itself. Where is this being “produced”? At individual neurons? I can then replace the neuron with something that we have no reason to think actually does it and see no difference. As a module? Replace that module entirely and we’ll see the same thing. More distributed? Replace the whole thing. That would mean that the experience is NOT, in fact, an input to the process. That’s BAD, since that’s what we agree on. And I don’t see a way out of this, and your “Well, it’s the same electrical/chemical impulses … no, it’s the same complexity of information processing … no, it’s emergent” really, really doesn’t help here, as it seems to shift away from the actual problem.

    The “experience” would not be some qualitatively different thing, as I’ve tried to point out to you before. It’s simply a part of the complex information processing network(s) that make up the brain.

    And the fact that the other name for these things is “qualia” doesn’t suggest to you that the whole problem, even among people who are not dualists, is that this IS a qualitative difference, and that the qualitative part is the thing that needs to be explained [grin]?

    I’ll try to get to the rest later, maybe tomorrow.

  26. verbosestoic Says:

    Continuing on:

    I pointed out above exactly why we have very good reasons to think that it is the structure of the neural net that does the work – there do not appear to be any other candidates available (and we seem to have looked quite extensively).
    I’m willing to entertain alternatives, but not the ridiculous.

    The problem is that you are calling alternatives ridiculous based on presumptions you make and I don’t, and so what seems ridiculous to you doesn’t seem so to me. Your attacks on dualism all have a materialistic underpinning; you don’t accept substance dualism because it would require something that isn’t material. I’m not a materialist, and so don’t have any reason to consider an explanation that includes something immaterial ridiculous for that reason alone. On the other hand, I am a qualia freak, and so to me phenomenal experience and qualia are really what need to be explained. Functionalism defines qualia in a way that doesn’t align with our actual experiences of qualia, and so it’s a bad explanation. Neurological theories, to me, at best don’t explain how we get it at all and at worst are epiphenomenal, and so they’re not satisfactory. Emergentism is just as mysterian as dualistic theories, so it’s not any better. So, because qualia matters more to me than it does to you, I reject theories because they don’t provide anything like an explanation. Because materialism matters more to you than it does to me, you reject non-materialist theories.

    The thing that’s important here is that because of our difference in priorities, we have different reactions to problems. You are willing to say that yeah, right now we don’t know how to fit qualia into a materialist model, but, hey, maybe we’ll figure it out later. What’s important to you is getting a model that fits into materialistic science. For me, I’m willing to say that there are problems with causation between an immaterial and a material thing, but again that’s something we can figure out once we prove/accept dualism. What’s important is having a role for qualia and its properties. So, I accept that my view has problems, but they aren’t problems that bother me as much as they do you. And if we both accept that, then scrapping over “ridiculous” is clearly the wrong way to go.

    You experience the representation of the red car – I don’t see any other way around that (unless you posit it as a completely unconscious activity, which is beneath the levels at which the brain becomes aware of what it is doing (in which case you likely would not act as if you were seeing a red car, since you would have no conscious representation of it (and no experience of that representation, etc).

    Yes, I would. But technically, while “red car” is a representation of a red car, that isn’t a phenomenal experience OF a red car. It’s a phenomenal experience of text that DESCRIBES a red car, which allows me to build the representation of a red car. So, to me, we have this:

    Input: A phenomenal experience of a red car.
    Output: The representation of a red car, called R(red car).

    Input: A phenomenal experience of the text “red car”.
    Output: The representation of a red car, called R(red car).

    I argue that the “R(red car)” is the same in both cases, but the input that produces that representation is completely different, and the qualia is, in fact, completely different: we have two completely different phenomenal experiences. Thus, phenomenal experiences cannot be REDUCED to representations without losing that important difference. That, then, is why I resist the move to representations that I think you were making here.
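
    To put that schematically in code, as a minimal and purely illustrative sketch (Python; the representation format is invented for the example):

        # Two different inputs, one and the same representation.
        def representation_from_seeing(percept):
            # Input: a phenomenal experience of a red car.
            return ("car", "red")    # R(red car)

        def representation_from_reading(text):
            # Input: a phenomenal experience of the text "red car".
            return ("car", "red")    # the very same R(red car)

        assert representation_from_seeing("red car percept") == representation_from_reading("red car")

    The outputs are identical; everything that differs, and everything that matters to a qualia freak, is on the input side.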

    You have an experience/representation of the red car, imagining what the gray seat covers would look like in it (another experience/representation of the red car, this time with seat covers in something like that shade of gray) etc.

    Except I don’t have to imagine what it would look like, and in fact in general I normally wouldn’t. If, say, I was buying this gift I’d simply ask what colour of seat cover generally goes with red, and wouldn’t imagine it at all. And yet surely I know that the car is red, and am acting on a representation of it, despite my never having imagined it at all. We can certainly do this, and do it frequently.

    Critical components such as the actual shades of the colours likely would not be missing (since that is the target of your representation – matching colours). I don’t think the level of detail you are able to muster undermines my position, and I still see no realistic alternatives.

    Um, I miss a lot more than that, as I have said. It is vague and indistinct. This is, of course, just a personal thing for me that doesn’t happen for most other people, but of course you telling me what my experiences actually ARE is one of the reasons that I claim that the third-person science view is wrong, since you can’t know that by my actions (I use other methods to sub in for that).

    The interesting thing, BTW, is that for me visualization is indistinct, but sound is not. Imagining sounds seems perfectly clear until I analyze it. Thus, I have even more evidence that this is something that is different than normal, but because it can happen to me I KNOW that this is the case and so can use it to, at least, inform my own theories.

    Perhaps not dualism, but you are certainly still claiming that qualia/phenomenal experiences are qualitatively different than information processing tasks (which you accept neurons can and do do).

    But this is based on actual evidence, evidence that we can do the information processing without the specific qualia that I’m concerned about. And your best reply to that has been to deny that that actually happens, which as I’ve pointed out doesn’t seem plausible because there are many cases where it clearly doesn’t.

    And I think I’ve given a brief sketch of how qualia might be explained by referencing nothing more than brain activity. I’ve also explained that I think you’re mistaken in thinking that qualia must be qualitatively different to other “cognitive” activities.

    I still don’t understand how or why you accept substance dualism when it has far more problems than other positions (including, as far as I can tell, my own).

    Well, you haven’t addressed my arguments for why it can’t be, as outlined in both pages (the other page was specifically designed to explain the last sentence, and this one to explain the first one). And as I already said here, I find the problems you list with substance dualism not as problematic as you do, and that comes down to what we really want explained. As stated in this page, if you want to separate consciousness from qualia I don’t really mind, as long as you stop claiming to have explained qualia when you can’t answer questions you really need to answer.

  27. Havok Says:

    Quick response:

    Your attacks on dualism all have a materialistic underpinning; you don’t accept substance dualism because it would require something that isn’t material.

    That is a mischaracterisation. I don’t accept substance dualism because the evidence for the existence of anything other than “material” is lacking, and the concepts themselves appear absurd.

    So, because qualia matters more to me than it does to you, I reject theories because they don’t provide anything like an explanation.

    Which again seems to me to be jumping the gun. I have a sneaking suspicion that your dualistic tendencies are not due simply to your love of qualia 🙂

    Qualia matter, and need an explanation, but we don’t currently have one. Unlike you, I don’t see the same difficulties, and I’m comfortable waiting for investigation to proceed. You do not appear to be comfortable with waiting. There seems no reason to go from “Qualia haven’t been explained” to substance dualism – that is going well beyond what the evidence supports, hence my suspicion above.

    Because materialism matters more to you than it does to me, you reject non-materialist theories.

    “Materialism” doesn’t matter to me in the least. What is supportable by reasonable arguments and evidence matters to me.
    The concept of dualism appears absurd, and the arguments attempting to establish it as a viable thing are unconvincing.

  28. verbosestoic Says:

    The problem is that the only arguments I can recall seeing for why the concept is absurd ARE materialistic ones; you raise challenges on the basis of problems with the interaction of the immaterial and the material and of how the immaterial operates. The only other arguments I’ve seen are asking how it can explain the fact that if you change the brain you can change the mind, and the repeated claims above that the evidence isn’t convincing. Neither of those, though, are arguments that prove conceptual absurdity, and so I have seen nothing to demonstrate the conceptual absurdity that you think you see, and that I clearly don’t see.

    BTW, interactionist dualism is completely compatible with the psychological evidence, as I’ve said before.

    Which again seems to me to be jumping the gun. I have a sneaking suspicion that your dualistic tendencies are not due simply to your love of qualia 🙂

    Well, then, what do you think it is? Noting that I argue that substance dualism does not, in fact, necessarily mean that the mind survives the body AND that I learned my philosophy of mind from staunch materialists.

    Qualia matter, and need an explanation, but we don’t currently have one. Unlike you, I don’t see the same difficulties, and I’m comfortable waiting for investigation to proceed. You do not appear to be comfortable with waiting. There seems no reason to go from “Qualia haven’t been explained” to substance dualism – that is going well beyond what the evidence supports, hence my suspicion above.

    It’s not a matter of waiting for investigation, but of simply choosing which theory is the best one currently. Since I take qualia very seriously, the theory I will prefer is one that also takes qualia seriously and takes my actual experiences into account. Functionalism, in most of its forms, implies that I’m having experiences I’m not having, so it’s out. I think the neurological theory is epiphenomenal, as I’ve argued. But a dualistic theory:

    1) Is not epiphenomenal if it is interactionist, which is what I prefer, by definition. Thus, it takes mental causation very seriously.

    2) Covers all experiences by noting that it is the experiences that count — as they are produced by the mind and what it is doing — and doesn’t try to reduce it to third-person observable things.

    3) Can handle almost all of the psychological evidence, if not all of it (again, interactionist implies that changing mind changes brain and that changing brain changes mind).

    4) Is not refuted by our current neurological data because we have not proven it closed yet.

    It has some problems with describing the interaction between the immaterial and the material, if it must be immaterial. THAT is what I’m willing to wait on further investigation for. Again, it’s just a matter of priorities.

  29. Havok Says:

    Longer response:

    But technically, while “red car” is a representation of a red car, that isn’t a phenomenal experience OF a red car.

    But the representation could well be the phenomenal experience, could it not?
    Why separate the 2 things when they seem to always go together?

    So, to me, we have this:

    I would change your examples somewhat:

    Input: The visual sensory input of a red car.
    Output: A visual representation of the “Red Car”, and the experience of that representation.
    A symbolic representation of “Red Car”, and the experience of that representation.
    A linguistic representation of the words “red” and “car” combined, and the experience of that.

    Input: The visual sensory input of the text “red car”.
    Output: A linguistic representation of the words “red” and “car” combined, and the experience of that.
    A symbolic representation of “Red Car”, and the experience of that.
    A visual representation of a “Red Car”, and the experience of that.

    Let’s call them V(RC) for the visual rep+experience, S(RC) for the symbolic rep+experience & L(RC) for the linguistic rep+experience.
    If I look at a red Ferrari and a red Limousine, the V(RC)’s in each case are going to be different, but the S(RC) could be the same, and the L(RC) is likely to be about the same.
    If I read “Red car” and “voiture rouge”, the L(RC)’s are likely to be different, the S(RC)’s the same, and perhaps the V(RC)’s similar (perhaps I imagine a Ferrari for “Red Car” and a Fiat for “voiture rouge”. Or perhaps, like you claim above, my visualisation is poor, and I imagine a “generic” sort of car on both occasions).
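
    To sketch that scheme in code, a minimal and purely illustrative bit of Python (the stimuli and the contents of each representation are invented):

        # V = visual rep+experience, S = symbolic rep+experience, L = linguistic rep+experience.
        def reps(stimulus):
            table = {
                "see red Ferrari":      {"V": "sleek red shape", "S": "RED CAR", "L": "red car"},
                "see red limousine":    {"V": "long red shape",  "S": "RED CAR", "L": "red car"},
                "read 'red car'":       {"V": "generic red car", "S": "RED CAR", "L": "red car"},
                "read 'voiture rouge'": {"V": "generic red car", "S": "RED CAR", "L": "voiture rouge"},
            }
            return table[stimulus]

        # Different V(RC)'s for the two sightings, same S(RC):
        assert reps("see red Ferrari")["V"] != reps("see red limousine")["V"]
        assert reps("see red Ferrari")["S"] == reps("see red limousine")["S"]
        # Different L(RC)'s for the two texts, same S(RC):
        assert reps("read 'red car'")["L"] != reps("read 'voiture rouge'")["L"]
        assert reps("read 'red car'")["S"] == reps("read 'voiture rouge'")["S"]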

    Now, I don’t see how you can separate the representations and the experiences. I think the closest realistic scenario is that the phenomenal experience is our consciousness becoming aware of the representation. I don’t see the experiences as having some sort of independent existence, which you seem to be claiming. As such, I don’t see how I could be aware of a representation without experiencing that awareness (and therefore having a phenomenal experience of that).

    And yet surely I know that the car is red, and am acting on a representation of it, despite my never having imagined it at all. We can certainly do this, and do it frequently.

    I dispute this claim. If you know the car is red, and have a representation of it, then you have had, I would think, an experience of that knowledge. It may not be as vivid as when you’re standing in front of your new car, but it would surely be there?

    It is vague and indistinct.

    I don’t see that the vagueness impacts my claim, and in fact would surely be expected, since we’re condensing large quantities of information down to “representations” of that information – loss of detail would be fundamental to that, surely?

  30. Havok Says:

    The problem is that the only arguments I can recall seeing for why the concept is absurd ARE materialistic ones; you raise challenges on the basis of problems with the interaction of the immaterial and the material and of how the immaterial operates

    That’s not a “materialistic” argument, it’s an evidential and theoretical one – the fact that there is no evidence for this “non-stuff”, and no plausible mechanism for “stuff” and “non-stuff” to interact (nor evidence for said interaction).

    The only other arguments I’ve seen are asking how it can explain the fact that if you change the brain you can change the mind, and the repeated claims above that the evidence isn’t convincing.

    That’s the same argument.

    Most conceptions of substance dualism seem to be arguing for the existence of quite a “nothing” with various rather incredible properties, which strains credulity.

    BTW, interactionist dualism is completely compatible with the psychological evidence, as I’ve said before.

    But not with the physical (as in physics) evidence.

    Well, then, what do you think it is?

    The vast majority of people seem to hold some conception of substance dualism – some concept of soul, or mind being separate from the body. Many of them are unable to reasonably question that concept, due to emotional or ideological attachments to it (religion is a powerful motivating factor here, in my experience). I don’t know you well enough to say anything for certain, though you do seem to be a theist, which seems to require acceptance of the “immaterial” and of the mental being distinct from the physical (at least in the case of God).

    It’s not a matter of waiting for investigation, but of simply choosing which theory is the best one currently.

    And the evidence at present doesn’t support the concept of substance dualism. It might have (and it could have done), but it doesn’t (as far as I can tell).

    Since I take qualia very seriously, the theory I will prefer is one that also takes qualia seriously and takes my actual experiences into account.

    I don’t see how substance dualism fits this bill. It doesn’t seem to offer any actual explanation, but rather just places the “difficult” features beyond reasonable investigation. If you were right, we could figure out how qualia affect your thinking, but would seemingly have no knowledge of what they are – they would remain mysterious.

    Thus, it takes mental causation very seriously.

    But does not take the results of physics very seriously, it seems to me.

    Covers all experiences by noting that it is the experiences that count — as they are produced by the mind and what it is doing

    I don’t see how other theories of mind fail to take experiences seriously.
    Sure, they might end up showing that the experiences are “illusory” in some sense (in the same way, perhaps, as optical illusions are), but they are still taken seriously, and an explanation is sought. On the other hand, I see substance dualism as mostly giving up trying to explain, and placing the object out of reach, so to speak.

    and doesn’t try to reduce it to third-person observable things.

    I don’t see the problem with (non-naive) reductionism. Perhaps dualism is correct. Perhaps we’ll need to adopt some kind of neutral monism. But I see the only way of actually finding out as being interpersonal investigation, which means third-person, empirical “things” (unless you have some other, reasonably reliable and somewhat self-correcting methodology?)

    Can handle almost all of the psychological evidence, if not all of it (again, interactionist implies that changing mind changes brain and that changing brain changes mind).

    I haven’t seen a convincing explanation of how mind-altering drugs can be explained under interactionist dualism, let alone brain damage (which causes mind damage). Colour me unconvinced 🙂

    Is not refuted by our current neurological data because we have not proven it closed yet.

    No, but it seems to me to be a slender reed to hang your beliefs upon. A large number of ridiculous and improbable things are “not refuted” by current scientific knowledge, but that doesn’t mean that they’re reasonable beliefs to hold. The example of a teapot orbiting a distant planet comes to mind 🙂

    It has some problems with describing the interaction between the immaterial and the material, if it must be immaterial.

    It also, as far as I can tell (I could be wrong or ignorant), doesn’t actually offer an explanation. It seems to me akin to “God of the gaps” arguments for other parts of reality we lack solid explanations for. I don’t buy them, and I don’t buy substance dualism.

  31. verbosestoic Says:

    But the representation could well be the phenomenal experience, could it not?
    Why separate the 2 things when they seem to always go together?

    Because they don’t; you can, as I have pointed out, have a representation of a red car without having a phenomenal experience of a red car.

    Look, in philosophy of mind representations are, in fact, the things that do all the hard work of storing our impressions and our knowledge and, perhaps, producing the beliefs upon which we act. Some even suggest that being conscious is nothing more than having the right sorts of representations. So when I talk about a representation, I don’t mean something like how a painting of a tree is a representation of a tree — as you meant in what I replied to — but this sort of special thing that does all this work. And so it, in some sense, has to exist, just as a computer has a bitwise representation of an image so that it can work on it.

    So, if we were to combine the phenomenal experience and the representation, we’d run into the exact sort of thing you do: having to have a completely different representation for each form of “input”. But this is problematic. First, since these representations all seem to do the same work, there’s really no reason to posit them that way. Second, we know that we don’t, in fact, store visual representations in memory, but instead regenerate them from certain markers that we do store somewhere.
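
    That second point is, if it helps, a familiar pattern from computing: store a few compact markers and regenerate the full structure on demand, rather than keeping the structure itself around. A rough sketch of the pattern (the names are invented, and this claims nothing about how brains actually encode things):

    ```python
    # Store only compact markers, not the full "image"...
    stored_markers = {"kind": "car", "colour": "red", "setting": "driveway"}

    def regenerate_image(markers: dict) -> str:
        """Rebuild a crude visual description from the stored markers.
        Any detail the markers didn't capture is gone for good."""
        return f"a {markers['colour']} {markers['kind']} in a {markers['setting']}"

    # ...and rebuild something image-like only when it's needed.
    print(regenerate_image(stored_markers))  # a red car in a driveway
    ```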

    So, trying to reduce phenomenal experiences to representations doesn’t work, and more importantly for my purposes simply leaves us with representations that are divided up among what I would call the different inputs. If by reducing the experiences to the representations we still have to introduce the very same distinctions into the representations that we were trying to eliminate, it’s a pretty good indication that what we actually have are real distinctions, and things that we cannot simply eliminate. So, then, we can see that the different inputs CAUSE representations to form, but are still different inputs, and so treating them as such makes more sense.

    None of this, BTW, is necessarily dualistic.

    I dispute this claim. If you know the car is red, and have a representation of it, then you have had, I would think, an experience of that knowledge. It may not be as vivid as when you’re standing in front of your new car, but it would surely be there?

    What do you mean by “an experience of that knowledge”? Surely you must agree that I need not ever actually picture that car in any way to know that it is red, and I have never denied that an inner speech thought of “The car is red” would be an experience, just that it’s not a direct phenomenal experience of the car and its redness.

    I don’t see that the vagueness impacts my claim, and in fact it would surely be expected, since we’re condensing large quantities of information down to “representations” of that information – loss of detail would be fundamental to that, surely?

    First, we generally DON’T lose detail in the representations.

    Second, my remembered experiences lose details, which means that by retrieving that image I could not read from it things about the image that I nevertheless still know and act on.

  32. verbosestoic Says:

    That’s not a “materialistic” argument, it’s an evidential and theoretical one – the fact that there is no evidence for this “non-stuff”, and no plausible mechanism for “stuff” and “non-stuff” to interact (nor evidence for said interaction).

    But the problem is that the people you’re arguing with didn’t just invent it because they felt like it; they had reasons and arguments for why it doesn’t seem to be possible for it to be material, going all the way back to Descartes. Some of these problems are not yet addressed, and as you know I’ve raised new ones. The best you can say, then, is that you don’t see those problems as being necessarily unresolvable, but to do as you do and claim that this missing mechanism is anything more than an open problem is simply to be assuming materialism. After all, if any of these issues worked out and so it was proven that the mind had to be immaterial, objections that physics doesn’t accept immaterial things would mean nothing to the debate, and have no impact on the arguments given. So how can it be evidence, if we could only consider it meaningful if the arguments given already do not establish that the mind is immaterial? No, it is clearly the materialistic presumption that drives raising those objections, or else they aren’t in any way meaningful.

    Most conceptions of substance dualism seem to be arguing for the existence of quite a “nothing” with various rather incredible properties, which strain credulity.

    Such as? They clearly don’t posit a “nothing”, and there aren’t any qualities that it would have that are that incredible unless, again, you make a materialistic presumption and try to deny those qualities to things that are not material.

    But not with the physical (as in physics) evidence.

    All that physics can say on this matter is that it hasn’t seen it yet. It cannot give any strong reason for saying that it can’t be true, and moreover again if it was proven to be the case physics would update and move on, so it’s hard to see why we should care overmuch about being consistent with it, especially if we are consistent with the important data, which is the psychological, as that actually describes the phenomena that we are studying. And, bluntly, it is this that I am arguing that materialism does not satisfy.

    The vast majority of people seem to hold some conception of substance dualism – some concept of soul, or mind being separate from the body. Many of them are unable to reasonably question that concept, due to emotional or ideological attachments to it (religion is a powerful motivating factor here, in my experience). I don’t know you well enough to say anything for certain, though you do seem to be a theist, which seems to require acceptance of the “immaterial” and of the mental being distinct from the physical (at least in the case of God).

    So, must it be these sorts of influences, or could it just be the fact that when we go out and examine our actual experiences this is the conclusion we are led to? Yes, we might be wrong, as the appearances might be deceiving, but surely the burden of proof is on the person who wants to overturn all the evidence that our experiences are granting us, no?

    I don’t see how substance dualism fits this bill. It doesn’t seem to offer any actual explanation, but rather just places the “difficult” features beyond reasonable investigation. If you were right, we could figure out how qualia affect your thinking, but would seemingly have no knowledge of what they are – they would remain mysterious.

    Wait … are you actually saying here that it is mysterious how, say, an actual experience of pain impacts our thinking? That’s absurd. It’s OBVIOUS that the actual qualities of our experiences impact our behaviour. That’s why accusing materialist theories of being epiphenomenal is such a strong argument, as it just seems obvious that feeling pain or seeing green directly impacts our behaviour.

    As for “reasonable investigation”, if you mean third-person scientific, then, yes, it does do that, but that doesn’t mean that it can’t be examined through things like introspection. I am not, therefore, saying not to study it, but am merely saying that its nature is such that you lose the important aspects of it if you try to do it from the third-person.

    I don’t see how other theories of mind fail to take experiences seriously.
    Sure, they might end up showing that the experiences are “illusory” in some sense (in the same way, perhaps, as optical illusions are), but they are still taken seriously, and an explanation is sought. On the other hand, I see substance dualism as mostly giving up trying to explain, and placing the object out of reach, so to speak.

    How can our experiences be illusory? What I am experiencing is what I’m experiencing, and that’s that. Your argument here, then, must concede that our experiences LOOK like they’re from a separate entity from the brain and are immaterial. Which is fine, but then you’d be claiming that the actual direct evidence we have is misleading, which runs the risk of you making theories that ignore the actual evidence. Thus, you need to show EXACTLY how these are an illusion and prove that before you get to claim the default explanation slot … and materialist theories don’t do that, especially since they are all derived from the third-person view where we don’t have access to the real data.

    I don’t see the problem with (non-naive) reductionism. Perhaps dualism is correct. Perhaps we’ll need to adopt some kind of neutral monism. But I see the only way of actually finding out as being interpersonal investigation, which means third-person, empirical “things” (unless you have some other, reasonably reliable and somewhat self-correcting methodology?)

    I like what Cognitive Science is doing, as it combines the scientific approach with the philosophical, armchair approach that takes the first-person view seriously. To me, too many scientists are latching on to the first third-person thing they can get so that they don’t have to worry about the subjective, first-person view, and I am glad that philosophers are, in fact, generally keeping them in mind. After all, I’ve shown in this page why the third-person view must leave unanswered questions that we should be able to answer if science is going to keep its promises.

    I haven’t seen a convincing explanation of how mind-altering drugs can be explained under interactionist dualism, let alone brain damage (which causes mind damage). Colour me unconvinced 🙂

    That it isn’t convincing to you doesn’t mean that it doesn’t work. Again, interactionist dualism explicitly states that mind events cause brain events, and brain events cause mind events. If this is the case, then it isn’t surprising that if you change how the events in the brain are generated, it will change how the events in the mind get generated. Do that permanently, and they will change permanently. So it simply isn’t a problem. I can go into more detail if you would be willing to read and work it out.

    No, but it seems to me to be a slender reed to hang your beliefs upon. A large number of ridiculous and improbable things are “not refuted” by current scientific knowledge, but that doesn’t mean that they’re reasonable beliefs to hold. The example of a teapot orbiting a distant planet comes to mind 🙂

    Of course, that’s just one of the reasons I prefer dualism, so it isn’t like that at all, but thanks for playing.

    It also, as far as I can tell (I could be wrong or ignorant), doesn’t actually offer an explanation. It seems to me akin to “God of the gaps” arguments for other parts of reality we lack solid explanations for. I don’t buy them, and I don’t buy substance dualism.

    And so we come full circle. The response there is not, in fact, meant to establish dualism, but to defend it against charges that dualism is not a viable explanation due to certain problems. I am pointing out that those problems, well, aren’t, or that the problems we do have are, in fact, just things we’ll figure out later. The same error is applied to “God of the gaps” arguments; they are not meant to prove the existence of God, but instead to defend against charges that God cannot exist because of X. It should surprise no one that arguments meant to simply rebut a supposed disproof don’t, in fact, prove the thing purportedly disproven, and so all of these responses devolve to an attempt to shift the burden of proof, to start from “I have shown your view absurd!” and then retreat to “Well, you haven’t proven it!” when they fail to show it absurd.

    You started from, basically, a position that substance dualism is untenable, but have yet to actually demonstrate that.

  33. Havok Says:

    Apologies for the long post and the frequent fisking:

    they had reasons and arguments for why it doesn’t seem to be possible for it to be material, going all the way back to Descartes.

    Yet they have not established it as being immaterial, and there seem to be strong reasons for thinking that it is not.

    Some of these problems are not yet addressed, and as you know I’ve raised new ones.

    As I’ve pointed out a few times, I think your main problem is that you assume or expect qualia to be non-material/physical, and work from there.

    The best you can say, then, is that you don’t see those problems as being necessarily unresolvable

    The problems with immaterialism/supernaturalism? No, logically substance dualism could have been true. As far as we can tell, however, it appears not to be.
    The problems with qualia and materialism? There seem to be a number of interesting and promising hypotheses to follow there.

    but to do as you do and claim that this missing mechanism is anything more than an open problem is simply to be assuming materialism.

    No, it’s to conclude materialism (provisionally).

    After all, if any of these issues worked out and so it was proven that the mind had to be immaterial, objections that physics doesn’t accept immaterial things would mean nothing to the debate, and have no impact on the arguments given.

    Actually, it would mean that physics (or at least, the body of knowledge of which physics forms a part) has expanded to accept the immaterial as a legitimate member. This may mean a revision of physics as it stands, of course, to include the capability of interaction between material and immaterial (quantum field theory would need to be revised in order to include terms for the immaterial, Feynman diagrams would need to be drawn up to illustrate these interactions, etc).

    No, it is clearly the materialistic presumption that drives raising those objections, or else they aren’t in any way meaningful.

    I disagree. Where we have explanations, they do not point to the immaterial, and so explanations which rely upon the immaterial must surely have a lower plausibility, and therefore we ought to prefer other explanations (and hence we ought to prefer “The mind is most likely material” rather than “The mind is most likely immaterial”).

    They clearly don’t posit a “nothing”

    Then what is the “it” here, VS?
    It seems to me that the immaterial is “nothing” with properties attached (“can process information”, “can form representations”, “can interact with material things”). There are plenty of claims, but no answers – things like the above properties are seemingly asserted as “foundational”, but why think that? Especially when we know, from physics, that material things can possess basically all of the same properties. Why think the immaterial exists when there seems to be no independent reason, outside of the thing you’re trying to explain, to think that it exists? Positing an immaterial mind without demonstrating that minds cannot be material, and without showing interaction between the immaterial and the material, strikes me as somewhat ad hoc in nature.

    you make a materialistic presumption and try to deny those qualities to things that are not material.

    I don’t think I presume materialism.
    Those qualities seem to be higher-level properties of material things (information processing, for instance), and as such have lower-level explanations (electrons zipping through silicon, for instance). No such explanation appears even to be attempted for the properties of the immaterial. Perhaps I’m missing a rich vein of Dualist philosophy and science which attempts such explanations?

    All that physics can say on this matter is that it hasn’t seen it yet.

    Your claim assumes the existence of the immaterial.
    It would be better to say that the evidence from physics gives us no reason to think it exists.

    It cannot give any strong reason for saying that it can’t be true,

    It does give strong reasons to think that it is not true in this world, however (absence of evidence can be evidence of absence, after all).

    and moreover again if it was proven to be the case physics would update and move on,

    But until then we can only work with the evidence we have. Evidence that is not friendly to substance dualism.

    so it’s hard to see why we should care overmuch about being consistent with it

    Really?
    I guess YECs shouldn’t care overmuch about being consistent with the geological evidence, the paleontological evidence, the atomic evidence, etc, since, similar to your claim above regarding such evidence, “It cannot give any strong reason for saying that it can’t be true”.

    especially if we are consistent with the important data,

    Particle physics IS important data. If you aren’t consistent with that, then you have serious problems.

    which is the psychological, as that actually describes the phenomena that we are studying. And, bluntly, it is this that I am arguing that materialism does not satisfy.

    You are arguing that, but as I’ve tried to point out (rather poorly, no doubt), you seem to assume that the phenomena to be explained are not material prior to assessing explanations. And as I’ve pointed out, it is not surprising that you get from that assumption to the conclusion that the phenomena to be explained are not material.

    Yes, we might be wrong as the appearances might be deceiving, but surely the burden of proof is on the person who wants to overturn all the evidence that our experiences are granting us, no?

    Well, since we have evidence that appearances can be deceiving regarding minds and brains (optical illusions seem to me to be a simple example of this), and since your claims REQUIRE solidly attested physics to be incorrect (thermodynamics, quantum field theory, etc), I think you surely have your own burden to shoulder.
    Now, for any actual specific hypothesis to be accepted, I would expect it to be able to explain these experiences. But in the absence of any reasonably successful hypothesis to explain these phenomena (as there do not seem to be any successful material or immaterial explanations), we’re stuck with speculating as to which road is likely to be more fruitful. And it seems to me that your proposal doesn’t stack up too well in this comparison (being non-parsimonious, uneconomical, etc).

    Wait … are you actually saying here that it is mysterious how, say, an actual experience of pain impacts our thinking?

    No, I’m saying it would be completely mysterious as to what the feeling of pain was, how it was “generated” from simple nerve impulses, and how the information of “X pain” is used in our thinking. I say this because the workings of the immaterial are left as mysteries, with apparently no conceivable means of further investigation. Like other similar “God of the gaps” or “supernatural of the gaps” arguments, it seems to be something of a science stopper.

    That’s why accusing materialist theories of being epiphenomenal is such a strong argument, as it just seems obvious that feeling pain or seeing green directly impacts our behaviour.

    But this accusation assumes that “feeling pain” or “seeing green” are inherently non-material. My computer can recognise “green”, and represent it as some kind of non-reducible “symbol” of sorts, so I see no reason to assume, as you seem to, that the qualia “green” is inherently non-material.
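
    To illustrate (a toy sketch only; the colour table and names are mine, and it obviously says nothing about whether brains work this way): a program can map raw pixel values onto a discrete symbol and then act on the symbol alone.

    ```python
    # Map raw RGB input onto a discrete colour symbol, then act on the symbol alone.
    NAMED_COLOURS = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

    def classify(rgb):
        """Return the name of the nearest reference colour (squared distance)."""
        return min(NAMED_COLOURS,
                   key=lambda name: sum((a - b) ** 2
                                        for a, b in zip(rgb, NAMED_COLOURS[name])))

    symbol = classify((30, 220, 25))
    if symbol == "green":    # downstream code sees only the symbol, not the pixels
        print("proceed")     # prints "proceed"
    ```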

    I am not, therefore, saying not to study it, but am merely saying that its nature is such that you lose the important aspects of it if you try to do it from the third-person.

    But you are saying, at least in part, that it cannot be studied.
    The “important aspects” seem to me to come from our privileged access to our nervous/neurological system (being comprised of them). The nature of this privileged access could surely be studied, so that it may be possible, using some advanced neuroscience, to say that “subject A is thinking of a pink elephant” simply by imaging the brain. This will not enable the neuroscientist to know what it is like to be “subject A thinking of a pink elephant” (unless they are subject A).
    If all you are arguing ends up being “we have privileged access to ourselves”, then I agree. But you are not arguing that, you are going much further.

    What I am experiencing is what I’m experiencing, and that’s that. Your argument here, then, must concede that our experiences LOOK like they’re from a separate entity from the brain and are immaterial.

    That’s quite a leap you’ve made there, one which I don’t think I tried to make, nor would I concede.

    especially since they are all derived from the third-person view where we don’t have access to the real data.

    What is the “real data” here?
    If (human) minds are brains, then the real data is encoded in neurons, so we do have access to it (just not the sort of privileged access the subject has, being constituted by those neurons etc).

    I like what Cognitive Science is doing, as it combines the scientific approach with the philosophical, armchair approach that takes the first-person view seriously.

    I agree. I think first person experience should be taken seriously.

    After all, I’ve shown in this page why the third-person view must leave unanswered questions that we should be able to answer if science is going to keep its promises.

    What you haven’t shown is that they cannot be answered within a materialistic/naturalistic framework, nor how your own preferred view answers or could answer them, nor why your view should be preferred (which is the claim that I think got me here from Eric’s blog to begin with).
    Science is replete with (currently) unanswered questions (how does gravitation/GR fit with QM?), and the history of science is filled with answers to questions. It seems far too premature to put your money on the substance dualism horse given the evidence we have to hand.

    That it isn’t convincing to you doesn’t mean that it doesn’t work.

    True, but it does explain why I am not convinced 🙂

    Again, interactionist dualism explicitly states that mind events cause brain events, and brain events cause mind events.

    It asserts this, but it doesn’t offer an explanation for it.
    For example, I can claim that a naturalist theory of mind explicitly states the same things. You would immediately request an explanation of how that occurs, and I could point to various threads of research. I see nothing similar forthcoming regarding substance dualism – perhaps I’m simply missing it?

    Do that permanently, and they will change permanently.

    This part only makes sense to me if the things being damaged are actually responsible for the cognitive functions that are impaired.
    For instance, I could understand how, on interactionist substance dualism, damage to a part of the brain which “communicates” or “causes” certain mental events could lead to different mental events, but I see no reason to think that such damage could affect in any way the “mental processing” that the mind is carrying out.

    I can go into more detail if you would be willing to read and work it out.

    I could try to follow along 🙂

    I am pointing out that those problems, well, aren’t, or that the problems we do have are, in fact, just things we’ll figure out later.

    This would be a fine response if we had reason to think that this were a viable avenue of investigation. But it seems we do not.

    The same error is applied to “God of the gaps” arguments; they are not meant to prove the existence of God, but instead to defend against charges that God cannot exist because of X.

    Some are intended, as you point out, to try to find a space for God amongst scientific explanations. Many are, in my experience, intended to prove the existence of God.

    You started from, basically, a position that substance dualism is untenable, but have yet to actually demonstrate that.

    No, I started from the position that substance dualism seems to be conceptually difficult, and that there seems no reason to think that it is real (evidence from physics etc). So far you’ve simply claimed that the concept is not conceptually difficult, and that we can ignore the evidence which runs counter to it because it might turn out to be mistaken in the future.

    Granted we’ve both been fairly light on details, but it is a blog post 🙂

  34. verbosestoic Says:

    I’m going to try to put up a blog post in the next little while about how interactionist dualism and the psychological data are easily made compatible. I’m also going to, at the same time, write a new page outlining my “Input” view of phenomenal experience. When that’s done, then I’ll try to comment on what’s left.

    One point to make now, though, is that after looking up what “fisking” means I want to make it clear that I’m NOT objecting to line-by-line rebuttals, but to line-by-line rebuttals that don’t capture the whole point. I cannot even begin to count the number of times that someone has raised an objection to something I say that I addressed later in the paragraph, which as you might imagine is incredibly frustrating for me. This case wasn’t quite that, but a case where you seemed to be reducing my complaint to a “Well, it’s not inconsistent” argument despite my directly listing other reasons and including that one simply as one of the things that informed my decision. Taken as a whole, I felt I had a much broader argument than that comment implied, which as you might imagine can be a bit annoying.

  35. verbosestoic Says:

    I’ve been fairly busy lately, and so haven’t gotten around to the posts yet, but they are coming. I just don’t know when …

  36. Functionalism and Eliminativism About Consciousness are Incompatible | The Verbose Stoic Says:

    […] actual perceived properties of qualia itself, and what we experience when we experience qualia.  But I’m okay with that, as long as they don’t use their views to contradict what our experiences are, or to define […]

  37. Illusionism as the default theory of consciousness | The Verbose Stoic Says:

    […] the representation part isn’t.  But if Dennett wants to insist that that is consciousness, I can oblige.  And if Dennett wants to go along with that move, then any sort of illusionism goes away because […]
