Morality Without Free Will

Last time, while discussing “Living Without Free Will” by Derk Pereboom, I criticized the focus that he and others place on the problems for morality if we don’t have free will, as if solving the issues around morality would solve the problems we seem to face if we decide that there is no such thing as free will.  This chapter doubles down on that, focusing on whether notions of morality can be preserved even if we don’t have free will.  But as I’ve already argued, the problems for morality follow from the problems that a lack of free will would introduce in general, and thus even if you manage to preserve some notion of morality given determinism, that wouldn’t mean you’ve shown that the lack of free will itself isn’t problematic.  And I don’t think you can actually do that anyway.

The key issue here is one that Pereboom raises:  the idea that ought implies can.  He somewhat dismisses this as maybe not really being a problem or a real defining trait of morality, but when we unpack what it really means in general, and not just for morality, we can see why it is necessary.  One of the key things we use morality for is to regulate behaviour, and it does so by telling us what we ought to do in order to be moral.  From that, it seems obvious why ought implies can if the ought is going to have any meaning whatsoever:  you cannot say that someone ought to have done something other than what they did if they couldn’t have done that thing.  So, for example, if we want to say that someone ought to have dived into the water to save that drowning child, that would be hollow if they couldn’t swim and so would have been unable to save the child anyway.  We cannot morally judge someone for not doing something that they couldn’t reasonably do.  If determinism removes free will and so removes choice from us, then we can’t do anything other than what we do.  So there can be no meaningful “ought” statement that prescribes anything other than what we were determined to do, because we couldn’t reasonably do anything else.  Someone may, then, say “You ought to have done X instead of Y!”, but if determinism is true then that is a statement we need not care about, since there was no way for us to actually have done X instead of Y.
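To make the structure of the argument explicit, it is essentially a modus tollens.  Here is a minimal sketch in standard deontic-logic notation (my own formalization, not Pereboom’s), where $O(A)$ reads “the agent ought to do A” and $C(A)$ reads “the agent can do A”:

$$
\begin{aligned}
&1.\ O(A) \rightarrow C(A) && \text{(ought implies can)} \\
&2.\ \neg C(A) && \text{(under determinism, the agent cannot do otherwise)} \\
&3.\ \therefore\ \neg O(A) && \text{(modus tollens: no meaningful ought to do otherwise)}
\end{aligned}
$$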

To be fair, a lot of Pereboom’s defenses here are more along the lines of the idea that we might be able to come up with some meaningful notion of morality even if we don’t have free will.  So even if we couldn’t really hold people morally responsible for their actions, and couldn’t hold them morally praise- or blameworthy for them, maybe we could still have some notion of morality that makes talking about it meaningful and even potentially useful.  The problem with this approach is that the only way I can see to do this is to talk about morality independent of moral agents, which would undermine morality as we see it.  What we could talk about is broad principles, such as saying that murder is wrong, and perhaps we could judge specific actions by saying that what that person did is murder, murder is immoral, so what that person did was immoral.  But we couldn’t assign that immorality to the agent without violating ought implies can or without being able to blame or praise them morally for their action.  So what we’d have to understand is that saying “What that person did was immoral” is not using “person” in the sense of a moral agent, as we would expect it to be used there, but is instead using “person” more similarly to how we’d talk about a rock or a computer:  as an entity that takes the action but isn’t in any strong way responsible for it.  They obviously wouldn’t have chosen to take that action, and so we aren’t talking about their choice or their reasons or their deliberations.  We are literally saying that the person there is the object that did that, and not the agent that did that.  The problem with that, though, is that it makes no sense for us to say that something an object did is immoral.  If a rock is picked up by the wind and breaks a window, it makes no sense for us to claim that the rock did something immoral, or that the wind threw a rock at a window and broke it and so did something immoral.  They aren’t agents, have no concept of morality, and don’t make choices.  So what they do cannot be called immoral.  And yet this seems to be pretty much the sort of thing that hard determinism makes us into.  So if they cannot be considered immoral for doing things like that, then it doesn’t seem like we can be considered immoral for that either.

Compatibilists, or hard determinists inspired by compatibilists, can argue that those things don’t have any kind of consciousness at all, but we do, and so we can be held to higher standards than simple objects can, just as we might argue that an advanced AI would still be deterministic and yet could still be held responsible for its decisions in a stronger or different way than a calculator can be.  This doesn’t actually help that much, because we already distinguish between things that have consciousness and things that don’t.  A lion, for example, may kill a human, but we don’t consider that murder, nor do we consider the lion immoral for doing so, even though we’d consider it murder and the person immoral if a human did it.  The reason we don’t consider the lion immoral is that we don’t consider the lion a moral agent, despite its having at least some agency.  It’s incapable of understanding morality and so incapable of acting for moral reasons, and so cannot be judged immoral for what it does; we, on the other hand, are seen as able to understand morality and act for moral reasons, and so we can be.  So adding consciousness doesn’t get them out of the problem.  We need a special kind of consciousness to get what we consider to be a meaningful morality.

This opens up a potential way out, as they can argue that agency isn’t what’s important, but rather the ability to understand morality:  what the entity needs is the ability to comprehend the somewhat abstract notion of morality and reason on the basis of it, and so it doesn’t have to have agency per se.  The issue is that we also have examples where we don’t consider an action immoral even when the entity seems to understand morality but doesn’t seem to have the proper agency.  Take the classic kleptomaniac example.  What they do really is stealing, and they clearly seem to understand morality since they often feel morally guilty for doing it, but we don’t consider their stealing to be immoral because they don’t seem to have the proper agency.  They don’t really choose to steal, but instead have an irresistible compulsion to steal.  Which brings us back to “ought implies can”:  they can’t do otherwise, and so it can’t be said that they ought to do otherwise, so it is meaningless to say that what they did was immoral.  Not only does this fail to fit our existing concept of morality, but haranguing and punishing someone for not doing what they couldn’t do makes no sense, nor should they feel guilty for not doing what they, again, couldn’t actually do.  What purpose does morality have in that case, then?

The key thing is that morality is critically driven by moral agency, which is a combination of two things.  The first is the ability to understand what morality is, assign value to it, and classify statements, reasons, and actions as moral or immoral.  This is what the lion lacks.  The second is the agency part:  the ability to make choices and take actions according to those moral considerations and reasons.  What hard determinism at least risks costing us is the ability to make choices or act for reasons at all.  But that’s not unique to morality, nor is there some special type of agency required there that we might be able to work around.  No, the same concerns about moral agency apply to, say, rational agency.  If I can’t act for reasons that I recognize as moral and shift my behaviour based on my classification of them, then it looks like I can’t act for reasons that I recognize as rational and shift my behaviour based on my classification of them either.  The problem is with being able to act on my mental and conscious classifications at all, not with any specific classifications that I might have.  So if I can’t be said to act rationally, then I can’t be said to act morally either, and if we could preserve a notion of agency that would allow for rational action, then it should allow for moral action as well, at least from the perspective of agency.  The agency problem hits pretty much all of our actions equally, and so losing moral agency is a side effect of losing agency in general, not of losing some specific sort of agency needed for morality.

This is where the compatibilists do have an advantage:  their model tries to preserve some meaningful notion of agency that can save all of these functionalities, and so they aren’t locked into arguing over different notions of morality that might save things.  They can try to save agency in general and so sidestep all of these issues.  And the problem with this chapter is that it spends too much time trying to come up with different notions of morality while missing that agency in general is the overarching issue here, and that agency in general is crucial enough to morality (and to other things) that unless you save it, you will never come up with a notion of morality that looks at all like the one we have and can be used the way we use it.  No notion of morality that aligns with the strict hard determinist notion of agency retains any meaning.  Thus, we need to massage agency, not morality.  And if you do that, then you pretty much are a compatibilist and not a hard determinist anymore.
