So, I’m currently reading “The Illusion of Conscious Will” by Daniel M. Wegner, where he’s trying to argue that our conscious will and our experience of it is an illusion and all the real work is done at the neural level (roughly). I’ll talk more about his book later, but here I want to talk about something that follows from his examinations. In order to establish that our conscious will and experiences of volition are illusory, he tries to show cases where they come apart. He references the Libet experiments, which are not very good examples, but in the next chapter he moves away from cases where we are trying to map neural correlates with timers and on to cases where our experiences would claim that we are making a volitional choice when we really aren’t, or cases where we don’t think our will is at all involved when it actually is. One of his examples is priming, which isn’t all that big a problem for free will or conscious experience. But he also gives examples where the brain is stimulated and we sometimes think the result was willed and sometimes think it wasn’t, depending on what was stimulated and even the context of the stimulation; the case of schizophrenics who hear voices that they claim are external but that we can prove are being generated by them; and cases where a confederate can force a choice on people while leaving them convinced that they made it themselves. And so on and so forth. I don’t think these examples are necessarily good ones (although some are interesting), but I think they are good enough for us to grant that sometimes our experiences and our sense of what we ourselves are doing get fooled, reporting that we are doing things that we aren’t and that we aren’t doing things that we are. So the question is: how is this possible?
One common explanation for conscious experience is that conscious experience is just what you get when neurons fire in the right ways to produce the right functionality. But if this is the case, how could we ever get a discrepancy between what our neurons are doing and what we think our neurons are doing? They are both produced by the exact same sequence in the brain, so the experience would just be our experience of what we are doing. How could we get that wrong? If we can, then it would mean that our experiences of what the brain is doing, and even of what it is receiving from the outside world, could be completely disconnected from what’s really there. And we’d have no way to check, because every test we could make relies on our experiences being at least somewhat aligned with the outside world. Even asking others to check our experiences has to assume a) that their experiences are not disconnected in the same way as mine are and b) that I’m actually getting their reports properly. So if we argue that experiences are just what we get when neurons do those things they do, but our experiences report that the neurons are doing different things from what the neurons are actually doing, then those experiences are not accurately representing what the neurons are doing. That would make them pretty much useless and meaningless and, worst of all, not necessarily reflective of anything, even the external world. That’s a pretty bad outcome, so we probably want to avoid taking that tack unless we have no other choice.
This is where dualism can come in, because it provides a nice explanation for why our experiences can come apart from the neuron firings that are implementing the functionality: they aren’t the same thing. If we have a separate mind that interacts with the brain, then it’s going to be triggering parts of the brain and receiving feedback from various parts of the brain to determine what happened and what is going on so that it can react accordingly. But then we can see how they can come apart, because if the feedback from the brain is incomplete or misleading, or if the mind has to aggregate feedback from a number of sources and reason out what is actually happening, then it can interpret what it is receiving incorrectly. However, in order for this arrangement to be at all useful it cannot be completely disconnected. Most of the time, the feedback and the mechanisms that interpret it would do so correctly and, most importantly, in a useful way. So if we posit that the conscious experience part and the neural firing part are separate objects communicating with each other, we can explain why they aren’t always in agreement without risking having to conclude that conscious experiences might be or seem to be entirely unreliable.
Now, what I’m sure most materialists are waiting to scream about is that we don’t need a separate or immaterial mind for this to happen. We can separate the interpretive part from the action-producing part simply by introducing modules (or perhaps reintroducing them, since the idea seems to be out of favour at the moment despite there being no real neurological reason to doubt it and much neurological reason to think it’s true). One set of neural firings would produce the experiences based on the feedback it gets from the brain overall, while another set would actually produce the actions. This gives us two separate “objects” in the brain so that they can come apart at crucial times, but in theory it also allows us to trust those experiences, because they would need to be accurate for the system to have any use. All we have to do is find a way for that module to causally impact what we do (dualism would presume that a dualistic mind can causally impact even the original chains directly while they’re happening, despite them being separate objects) and we’re golden.
As you might have noticed from my little aside above, that actually isn’t all that easy. In order for these to remain separate objects, we’d have to be careful about allowing the experience module to causally impact the action-production module, because then we’d start to wonder how, if they are that intimately connected, they can come apart with respect to what is actually happening. If the module is producing those neural connections, then how come it doesn’t know what those neural connections are doing? It’s one thing if it thinks that it’s doing something and that thing doesn’t happen, or thinks that the firing is fully caused by its deliberative actions when an automatic process is what kicked it off, but how could it ever be fooled into thinking that it hadn’t done something when it actually did? It would surely have access to what it was doing. The explanation for how it could be fooled into thinking it did something when it didn’t is that it notes that it was itself activated and notes that the effect was produced, even if the actual cause of those neural firings was something else entirely. So if it can interact with the action-producing parts, it’s not likely doing the entire thing itself. But then what is it doing?
One suggestion is that what we have is some kind of self-monitoring system (I think this is pretty much Dennett’s view of what conscious experience is) that monitors and interprets what we do and what the various areas of the brain are doing, and then produces an experience that reflects this. This works pretty well at explaining why things come apart, but it’s not a very good explanation for why it gets things wrong. If this is an important module in the brain that we need for important things, it seems to be a pretty weak process if things can come apart that drastically. Yes, we can argue that it seems to work well enough most of the time, but it has only one job, and it gets it wrong a significant amount of the time. If what it’s doing is sufficiently important, then shouldn’t it do it better? And if it isn’t important and these actions don’t matter (as people who think free will an illusion would have to assert), then why do we have it? So we’d still need to find a purpose for it, and if we found one we’d have to wonder why it can be tricked relatively easily.
A perhaps better suggestion is that the main purpose of this module isn’t to monitor us, but instead to monitor things in the outside world; it only monitors us or interprets what we’re doing as a side effect of doing that. This actually seems pretty natural, since we would definitely need to filter out the things we’re doing from the things that are done externally to us. And since this would just be providing us with information that we can use to feed into our actions later, we don’t need any direct cause for actions; we just need it to file away information that impacts later decisions. As long as it does this better than something that can’t do it at all, it’s useful enough to be selected for by evolution, so some unimportant errors can be explained, and perhaps even explained by appealing to which errors are more useful than insisting on always getting things right. This has the advantage over self-monitoring that we get a set and simple purpose for what the module is doing while still being able to explain its not being perfectly accurate.
But wait. Do we really need it to have a strong causal role? Can’t it just be something that happens but has no causal impact? Consider that any experience-producing module is going to be a pretty expensive one, and that there must be something special about these modules such that they produce experiences, because very important parts of the brain don’t seem to produce experiences at all. So it can’t just be assembled from neurons and happen to produce experiences, or at least that’s not a very credible story. Thus, we a) need to have a purpose for this specific module and b) need to find a reason why it has experiences, either because having experiences is required for it to do what it does or because a module assembled that way will simply produce them. And given that much if not most of the brain won’t produce experiences, it’s the former that seems the more likely. So we’re really going to need both the module and the experiences to have a strong causal impact to explain the presence of these experience-producing modules.
Which cycles us back to dualism, because dualism has an advantage here as it can argue that the primary defining quality of a mind is having experiences, and so experiences are going to be a fundamental part of everything the mind does, including these interpretations. The dualistic mind works through experiences and has that as its fundamental nature, so everything it does — including any causal connections it has with the brain — is going to involve experiences. So it wouldn’t need to explain why it produces experiences when the brain doesn’t. The very nature of the connection is that the mind does the experiences and the brain doesn’t. Yes, it would need to explain how it can cause anything at all, but it doesn’t have to explain how one part of the brain produces experiences while another part doesn’t.
So if our experiences and our neurally-produced actions can come apart as Wegner suggests, then we have separate objects involved here. And dualism is based on the idea that there are two separate objects involved in such things. That gives it a huge advantage in explaining what those separate objects actually are.
Short Thoughts on “The Flash” (1990)
May 25, 2021

So the other short series that I decided to watch after “Birds of Prey” was “The Flash”. Again, this is a series that I had already watched and even rewatched before this, so it won’t really be a spoiler to point out that after watching it this time I’ll likely watch it again.
The series has the same sort of setting as “Batman: The Animated Series”, where it takes place in what I guess is a 50s-ish Central City, but the technology they have access to as a matter of course (both in Barry’s crime lab and at Star Labs) is way too advanced for that time period, and they even have cars and other things more suited to the 90s than to the 50s. So it really does seem like a faux 50s rather than the real 50s, which is a bit distracting, especially since they never really acknowledge it in any way. I think I would have preferred that they just make it the 90s and roll with it, but then part of what they tried to do with it (especially the noir PI Megan Lockhart) wouldn’t have worked so well; in this setting such things seem normal and expected, but in a strict 90s setting they would have seemed out of place and the characters would have seemed like throwbacks, which would have hurt the characters. So it’s six of one, half a dozen of the other.
While the series seems to start by giving Barry the traditional romantic interest of Iris West, after the first episode she leaves the series and the romantic interests are split between Girls of the Week, the scientist Tina, and the aforementioned PI Megan Lockhart. I liked Lockhart as a character, but because of how the show is structured it really seems to make Tina the preferred pairing: while Lockhart is fun and works well with him when she’s there, she keeps leaving, and Tina is always there. Tina is also the one who gets the “I have to leave but will stay for you” plots, which makes us think that if they would only admit how much they care for each other everything would be settled. And while I like the characters, I find the romance a bit tepid. I’d like them to get together, but I’m not really interested in seeing how they get together.
The super suit, in early episodes, looked absolutely terrible. While it gets better in later episodes, they still show the original one in the opening credits and it really stands out. In general, the special effects are not at all good, which can be distracting. The worst is probably the super-speed stacking or removing of objects, as it’s supposed to happen in a second before anyone can react and yet it happens so slowly, with lots of time to show the reactions of the villains, which makes us wonder why they don’t just do something. The high-speed running shown from Barry’s perspective works pretty well to give us the sense of the speed without relying on special effects that they didn’t have access to.
The plots are basically a mix of serious drama and off-the-wall humour, which mostly works. The drama is what you’d see from shows like “Airwolf” and “Knight Rider”, and the humour is … pretty much what you’d see from “Knight Rider” as well. So it makes for decent light entertainment, especially once the actors manage to get a handle on their characters and stumble less through their lines.
That more humourous attitude also carries over to an important role for Mark Hamill, that of the Trickster. Remember, this was a year or two before his breakout voice acting role as the Joker in “Batman: The Animated Series”, and yet there are a lot of similarities in the voice and the scenery-chewing between those two roles. It’s hard to imagine that there’s no connection between Hamill being the Trickster and Hamill becoming the Joker, even if I couldn’t find that link. Anyway, those episodes are some of the best because they leverage the absurdities of the setting and so don’t have to try to be at all serious, and the odder episodes seem to be the ones that work best for this series. And, of course, Hamill does a great job with the villainous role itself, which makes it fun to watch.
All-in-all, it’s good, light entertainment that is a bit too goofy in general to work as a drama so it’s fortunate that it spends most of its time embracing the goofiness. It definitely has its flaws in terms of special effects, acting at times, and writing, but overall it’s entertaining enough to watch without a constant rolling of the eyes. As noted above, I definitely could rewatch it again. Again.