Free Will, Reductionism, Materialism, Emergence, and the Transporter

In an attempt to avoid making all of my Philosophy posts about Richard Carrier — although his latest is going to demand a response at some point or another — I’m going to turn to this post by the blogger known as Coel.  He used to comment here frequently (or as frequently as anyone commented here, which is infrequently) but I guess he isn’t reading this blog anymore.  He’s also a frequent commenter on various atheistic blogs such as “Why Evolution is True”.  We disagreed most over consciousness and morality, and when it comes to free will we are a bit closer because he’s a compatibilist and I’m a libertarian, so the big difference is over whether he can come up with a compatibilist view that can encompass what’s important and necessary about decision-making even in a determined universe.  If he could pull that off, then he’d make me a compatibilist as well, but I don’t think he’s managed it.

Anyway, he hadn’t posted in a while and I’d stopped following his blog, up until he made a couple of posts around Christmastime (or a bit before and a bit after) and I happened to check in to see them.  This is one of them; it looks at someone criticizing people who seem to be Hard Determinists — which Coel is not — while Coel tries to defend reductionism in general and, oddly, their Hard Determinism as well, despite having had major dust-ups with Jerry Coyne, who seems to hold similar views.  He also uses/introduces/modifies a thought experiment about Star Trek transporters.  So I’m going to go through the post and talk about the things that might be muddled, confused or problematic.

Let’s start with the views he ends up defending.  The post is responding to an article by Bobby Azarian, so Coel is responding to what Azarian says; to be clear, anything I quote from Azarian is taken from Coel’s post.  And Coel says this about the people who would seemingly be criticizing his compatibilism as Hard Determinists:

He names Sabine Hossenfelder and Brian Greene. The “free will” that such physicists deny is “dualistic soul” free will, the idea that a decision is made by something other than the computational playing out of the material processes in the brain. And they are right, there is no place for that sort of “free will” in a scientific worldview.

I haven’t examined Greene’s work, but as it turns out I have examined Hossenfelder’s, and she doesn’t seem to be simply saying that (and so opposing dualists).  She pretty explicitly rejects the compatibilist project:

But some philosophers insist they want to have something they can call free will, and have therefore tried to redefine it.

Others have tried to argue that free will means some of your decisions are dominated by processes internal to your brain and not by external influences. But of course your decision was still determined or random, regardless of whether it was dominated by internal or external influences. I find it silly to speak of “free will” in these cases.

The first part is pretty much a summary statement of Hard Determinist opposition to compatibilists, and the last part is pretty much Coel’s own view of compatibilism, and she thinks it’s ridiculous to speak of anything like “free will” given that.  So she, at least, clearly opposes compatibilism, which means that she is denying more than just dualistic “soul” free will (she actually explicitly says that free will is a useless term and that people who use it don’t understand science).  So her view is strong enough to clash sharply with Coel’s and to pretty much fit the characterization that Coel is opposing here.

He names, among others, David Deutsch and the philosopher Dan Dennett. But the conception of “free will” that they espouse is indeed just the computational playing out of the material brain. Such brain activity generates a desire to do something (a “will”) and one can reasonably talk about a person’s “freedom” to act on their will. Philosophers call this a “compatibilist” account of free will.

Importantly, and contrary to Azarian’s statement, this position is not the opposite to Greene’s and Hossenfelder’s. They are not disagreeing on what the brain is doing nor about how the brain’s “choices” are arrived at. Rather, the main difference is in whether they describe such processes with the label “free will”, and that is largely an issue of semantics.

I find this odd, since again one of the objections that Jerry Coyne — and other Hard Determinists — make to compatibilists is that their position is a mere semantic difference with no really meaningful distinction, which means that all they are doing is trying to maintain the phrasing to appeal to the masses and potentially avoid people acting badly because they believe that they don’t have “free will”.  I don’t recall Coel simply accepting that there were no significant differences between the positions, even as he noted that the behavioural differences that supposedly followed from Coyne’s view could fit under his view as well (and under Libertarianism, I’d like to add).  The positions are indeed significantly different, in that Hard Determinists, in general, align with Libertarians in arguing that you can’t make any kind of meaningfully “free” decision if determinism is true, and thus tend to have to conclude that we don’t make any kind of meaningful decision.  For example, it’s a very common Hard Determinist position to say that our actual choices are no different in kind from cases of kleptomania or brain damage in terms of how “free” they are.  Compatibilists like Coel reject that idea and think that those cases can be meaningfully distinguished by appealing to the proper definition of “free”, which actually aligns them closer with Libertarians, who think it just clear and obvious that those cases are meaningfully different and that no account of human behaviour can be correct if it doesn’t treat them as different.  So those Hard Determinist views aren’t a simple semantic difference away from compatibilism, and it puzzles me to see Coel argue for that here.

Coel objects to this characterization from Azarian:

Origins-of-life researcher Sara Walker, who is also a physicist, explains why mainstream physicists in the reductionist camp often take what most people would consider a nonsensical position: “It is often argued the idea of information with causal power is in conflict with our current understanding of physical reality, which is described in terms of fixed laws of motions and initial conditions.”

He takes this as a criticism of reductionism that completely misses the mark and mischaracterizes reductionism, but that doesn’t really seem to be the case.  The idea is that information and meaning have causal power independent of the underlying structure, and there doesn’t seem to be room for that in reductionism.  I’ll talk more about that later, but Coel here introduces his thought experiment to clarify what reductionism means:

Imagine a Star-Trek-style transporter device that knows only about low-level entities, atoms and molecules, and about how they are arranged relative to their neighbouring atoms. This device knows nothing at all about high-level concepts such as “thoughts” and “intentions”.

If such a device made a complete and accurate replica of an animal — with every molecular-level and cellular-level aspect being identical — would the replica then manifest the same behaviour? And in manifesting the same behaviour, would it manifest the same high-level “thoughts” and “intentions”? [At least for a short-time; this qualification being necessary because, owing to deterministic chaos, two such systems could then diverge in behaviour.]

If you reply “yes” then you’re a reductionist. [Whereas someone who believed that human decisions are made by a dualistic soul would likely answer “no”.]

Now, when I first saw this, I thought it was just a simple transporter example.  Some might think that a dualist would insist that the “soul” doesn’t come along with the transport — and might be lost — but that isn’t actually the case.  What’s important about Cartesian dualism, at least, is that the mind/soul doesn’t have material properties, and one of those material properties is indeed “being in space”.  The mind is not in space and so doesn’t exist in any particular place.  So while it’s common to imagine that we have a soul sitting in the head that could be “left behind” in that case, that’s not what happens.  So in a transporter case, the answer for dualists is “It depends”.  If the transporter breaks down the physical side in a way that causes the mental side to “lose” the body, then the mind wouldn’t be there after the transport, but there’s no reason the mechanism has to work that way, and by dualism it is possible for the mind to come apart from the body and then return to it later, so it all depends on the details of the mechanism.  For dualists, we’d have to try it and find out, really.

As noted, though, this thought experiment is of a replica, and so essentially a clone, and not a simple transport.  This is actually a problem for dualists not because there’s a clear answer but because there isn’t one.  What would it mean to have an actual identical replica made of a physical body?  The obvious answer is that the replica wouldn’t get a mind, but as noted above it’s possible that the original mind runs both, or that a new mind is created/co-opted for the new replica.  It’s a tricky question that really does rely on how the mind is connected to the body, both originally and when things are changed.

But we aren’t talking about dualism.  We’re talking about reductionism, and Coel thinks that this thought experiment captures what the reductionist position really entails, and it doesn’t.  What Coel describes here is a commitment common to any materialist position about mind.  If the mental is the result of the physical, then if you duplicate all of the physical properties of a brain (or whatever physical things are needed, which in general is pretty much the brain) you will reproduce the mind as well.  For some reason, Coel treats the above objection as saying that reductionism means you don’t have to care about positions/places, but I don’t really see that (again, I haven’t read the article myself).  Any materialist position will say that if you reproduce the physical state you reproduce the relevant phenomena.  That’s true for reductionism, eliminativism, and even emergentism.

So what actually is the reductionist position?  Reductionism, as differentiated from the other two positions I just mentioned, says that the concepts at the higher levels may be valid and meaningful, but you can use “bridge laws” to “reduce” those concepts to concepts at the lower level.  In short, you can use those laws to find out — in principle, at least, if potentially not in practice — how those higher-level concepts are reflected in the lower levels, and you can do that all the way down.  Eliminativists differ in that they say the concepts at the higher level are meaningless and add no value whatsoever, while emergentists argue that you can’t find the correlates to those concepts at the lower level (at least for strong emergence).  So what reductionism really means, in terms of implications, is that the higher levels are identical to the lower levels in a really serious and important way.  Emergentism says they aren’t, and eliminativism says the higher levels don’t really exist.  But if those theories are all materialist, they will all say that if you duplicate the physical level you have duplicated the “mind”, because what else could you use to say that the two things are not the same?  There has to be some kind of physical difference to point to if the things aren’t the same, or else materialism is false.

Additionally, you can have non-material reductionisms that would fail Coel’s thought experiment.  Someone could indeed posit a view of non-material mental processes where higher-level mental processes reduce to lower-level mental processes by bridge laws.  This would be a reductionist view by definition, but it would have to argue that if you duplicated the physical properties you wouldn’t necessarily have duplicated the mental ones, and so they might not come along in the replication.  So this thought experiment doesn’t say much about reductionism.  Even by comparison to emergence:

Well, no. Provided you agree that the Star-Trek-style replication would work, then this counts as weak emergence. “Strong” emergence has to be something more than that, entailing the replication not working.

I looked the terms up, and my understanding of them is that weak emergence is closer to reductionism:  you can derive the higher level properties from the lower level properties in some way.  Strong emergence simply says that you can’t do that:  there is no way to determine that those properties would result from that underlying structure by analyzing the underlying structure.  What it doesn’t do is deny that the properties are ultimately the result of that underlying structure.  It just claims that they aren’t predictable from that level.  The common example of strong emergence is the feeling of wetness from water (a bad one because feelings are actually properties of consciousness, but let’s let that slide for now).  You can’t explain even in principle how to get from the properties of H2O molecules to those properties of “wetness”, but no one denies that “wetness” is the result of those properties, somehow.  And so we’d all agree that if you ever stuck hydrogen and oxygen together in that form you’d get wetness.  Strong emergence just says you aren’t going to be able to discover that without actually sticking them together and making it work.

Azarian talks about top-down causation as a clarification of the above quote, and here’s where I think Coel doesn’t really get the idea:

Top-down causation is another phrase that means different things to different people. As I see it, “top-down causation” is the assertion that the above Star-Trek-style replication of an animal would not work. I’ve never seen an explanation of why it wouldn’t work, or what would happen instead, but surely “top-down causation” has to mean more than “the pattern, the arrangement of components is important in how the system behaves”, because of course it is!

Again, I have no idea where he gets the notion that the objection is merely that the arrangement is what matters here.  What seems to be going on is an objection similar to one I made when talking to im-skeptical, which is that in order to get proper agency we need the information processing itself to actually have causal power.  What I’d note in particular with a reductionist model is that it really looks like all of the levels have the same causal power, and so there is no additional or differing causation happening at the various levels.  We might talk about or describe the causation differently and in different terms, but it can’t be the case that a completely different type of direct causation comes into play at a different level (to forestall objections about a more Aristotelian differentiation of causation coming into play).  So what I think we could do is explain all of the relevant causation by tracing the causal chains and paths at the lowest level, which is generally the level of physics.  It may be extremely difficult to do and may be somewhat meaningless, but it can be done.  And this seems to work for chemistry, where we can indeed describe all chemical reactions by reducing them to the causal events that joined the individual atoms together.  I would suggest that if you can’t do that then you don’t have reductionism.

However, this doesn’t seem to work for information and meaning.  It seems that in order for us to have proper agency we need causation based precisely on the concepts of information and meaning that only appear at the higher level.  Information and meaning don’t appear at the biological, chemical or physical levels of the brain.  So if we look there, we won’t see anything at all that maps to that information, at least not as far as we can tell.  A specific neural firing doesn’t in any way represent any kind of meaning, as we see with connectionist systems (as I’ve noted in the past, I can hook up any connectionist system to a different external system and have it work even if the “meaning” of the inputs and the outputs is completely different).  So where is causation based on “meaning” happening?  It’s not as though we have chains of neurons that resolve themselves into things we can identify as a piece of information or something that has meaning, the way in chemistry we have chains of atoms that we can take as a whole and call “molecules”.  So if we want our behaviour to be caused by what things mean, we need to find a way to represent meaning at the level where the “real” causation is happening — the physical level — in a way that lets “meaning” play a direct role in the causation at that level.  And it’s hard to see how that can happen, since meaning has no place in the underlying physical level.
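To illustrate that point about connectionist systems, here is a minimal sketch (the network, its weights, and the two “interpretations” are made up purely for illustration, and aren’t drawn from Coel’s or Azarian’s posts).  The computation operates only on numbers, so the very same system runs identically no matter what we take its inputs and outputs to mean:

```python
import numpy as np

# A tiny fixed-weight "connectionist system": it just maps a vector of
# numbers to a number.  Nothing in the computation refers to what the
# numbers are supposed to mean.  (Weights are arbitrary, for illustration.)
weights_hidden = np.array([[0.5, -0.3],
                           [0.8,  0.1],
                           [-0.6, 0.9]])
weights_out = np.array([1.0, -1.0, 0.5])

def network(inputs):
    hidden = np.tanh(weights_hidden @ inputs)  # purely numeric operations
    return float(np.tanh(weights_out @ hidden))

# Interpretation 1: the inputs are chess features (say, material balance
# and king safety) and the output is "how good this move is".
chess_features = np.array([0.7, 0.2])

# Interpretation 2: the very same numbers are read off a thermostat and
# the output is "how far to open a valve".
thermostat_readings = np.array([0.7, 0.2])

# Identical output either way: the "meaning" of the inputs and outputs
# lives entirely in how we hook the system up and label it, not in the
# causal steps the system itself performs.
print(network(chess_features))
print(network(thermostat_readings))
```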

He thinks we can get that sort of meaning-based agency, and illustrates it with a comparison to a chess computer:

If we want to talk about agency — which we do — let’s talk about the agency of a fully deterministic system, such as that of a chess-playing computer program that can easily outplay any human.

What else “chooses” the move if not the computer program? Yes, we can go on to explain how the computer and its program came to be, just as we can explain how an elephant came to be, but if we want to ascribe agency and “will” to an elephant (and, yes, we indeed do), then we can just as well ascribe the “choice” of move to the “intelligent agent” that is the chess-playing computer program. What else do you think an elephant’s brain is doing, if not something akin to the chess-playing computer, that is, assessing input information and computing a choice?

Basic chess-playing computers are essentially the guy in Searle’s Chinese Room:  they take in a basic input, look it up in some kind of look-up table, and then output the “right move”.  While we can argue over whether the room itself understands, the guy doesn’t, and that intuition is what drives people to think the room doesn’t understand.  So it doesn’t seem to be assessing the information for meaning.  And the Deep Learning ones are connectionist systems, and again they aren’t assessing the information for meaning, because again I can hook that computer up to something else that, say, tries to solve differential equations and it will gamely try to do that — and might even succeed — instead of replying that it has no idea what those inputs mean and so can’t do anything.  So that’s not an example of where the meaning of the information itself is causing the outcomes, and that’s what Azarian seems to be saying he needs.  Top-down causation, it seems to me, is the claim that there is causation happening at the higher level that changes the lower levels in ways that wouldn’t have happened if that causation hadn’t happened at the higher level.  Bottom-up causation seems to me to be what I described:  the higher-level causations and differences happen because of the causations and differences at the lower and ultimately the lowest level.  So when the lower level changes the higher level also changes, but it’s not the case that there is any meaningful change at the higher level that wasn’t already reflected at the lower level.  So changes at the lower level determine the state of, and therefore the changes at, the higher level.  The argument, though, is that the higher level is the one that has the concept of meaning and so is the only one that can change things on the basis of meaning, but it can’t actually do that, because all the causation is happening at the lower level.
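To make the look-up-table picture above concrete, here is a toy sketch (the “positions” and “moves” are made-up placeholders, and this is only meant to illustrate the Chinese Room-style point, not how any real chess engine is built).  Nothing in the procedure involves what the symbols mean, and it never refuses on the grounds that it doesn’t understand its input:

```python
# A toy "player" in the look-up-table style: it maps an input symbol to an
# output symbol.  The entries are made-up placeholders, not real chess
# analysis.
MOVE_TABLE = {
    "position-A": "e2e4",
    "position-B": "g1f3",
    "position-C": "d2d4",
}

def respond(symbol: str) -> str:
    # The same procedure "answers" whether the keys encode chess positions,
    # Chinese characters, or differential equations; the meaning of the
    # symbols plays no role in how the answer is produced.
    return MOVE_TABLE.get(symbol, "no entry")

print(respond("position-A"))   # "e2e4"
print(respond("position-X"))   # "no entry": it doesn't know that it doesn't understand
```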

So Coel would need to show how the lowest levels can change on the basis of the meaning of the information we have, and chess computers aren’t doing that now.

Let me summarize with this:

The macrostate cannot contain more information (and cannot “do more causal work”) than the underlying microstate, since one can reconstruct the macrostate from the microstate. Again, that is the point of the Star Trek thought experiment, and if that is wrong then we’ll need to overturn a lot of science. (Though no-one has ever given a coherent account of why it would be wrong, or of how things would work instead.)

If we can get causation based on meaning in the microstate, then this will all work.  The question is whether we can actually do that from the microstate.  The thought experiment doesn’t address this in any way, and so the objection is not vulnerable to it.  The question is about materialism and about the specific posited mechanisms of the mind, not about reductionism or emergentism.
