So, recently Richard Carrier put out a post about things to consider while doing moral reasoning. A number of issues with his moral system and moral reasoning come up there, and I want to go through some of them. The whole post is at the link if you want to look at the things I'm going to skip. Since his post is a bit disorganized itself, my post here is probably going to jump around a bit, both in terms of what I reference from the post and in terms of the actual content I talk about, so be prepared.
Anyway, Carrier starts by talking about the Golden Rule, and then says this:
For example, you would not want your neighbor to neglect you if you were starving, therefore “do not neglect a starving neighbor.” This is not doing what you would not want done to you, yet amounts to positively doing what you would want done for you. Every positive action can be reframed as a negative being avoided. “Be generous” and “do not fail to be generous” are identical statements. That one is positive and the other negative is wholly irrelevant.
The problem is that in doing this he ignores a critical distinction: the difference between an action that is morally desirable and one that is morally obligatory. A maximally moral person will always want to help a starving neighbour, true. But does that mean that if they fail to help a starving neighbour they are acting immorally? Are they morally obliged to help a starving neighbour? It's pretty easy to come up with cases, like the neighbour case, where it would be morally desirable to do something, and more moral people will almost certainly do it, but where we can also see that it isn't morally obligatory. Risking your life by running into a burning building to save someone's life, for example. Or donating an organ to save someone else's life. While we think that better people will do such things, we don't really hold it against someone if they don't. So there is, at least in theory, a potential difference here, and the difference is that the negative statement always implies a moral obligation, while the positive one doesn't. So you can't simply take the positive statements, negate them, and come up with statements that have the same moral force. They might, but in general the negative statement is going to have a stronger moral force than the positive one. In fact, a way to test for moral obligation is to take the positive statement, negate it, and see if it remains true. If it does, then it's a moral obligation. If it doesn't, then it's not. So taking Carrier's example, we can see that "Help a starving neighbour" seems true morally, but "do not fail to help a starving neighbour" is far less certain, because it's difficult to see how someone would necessarily be acting immorally if they didn't.
Now, Carrier has long held the view that the moral classes — deontological, consequentialist, and Virtue Theory — are really the same, and can be reduced to each other. He continues that here, and unfortunately continues to start from a misinterpretation of Kant, assuming that Kant's categorical imperative applies to what we would want to be the case rather than to what can be universalized without logical contradiction. It is from the first reading that he derives the compatibility of Kant's deontological view, at least, with consequentialism:
The notion that circumstances and consequences must be disregarded is contrary to the Kantian principle itself, since we would never will to be a universal law that circumstances and consequences be disregarded.
But could we do that without logical contradiction? This is also a bit confusing, because it's certainly the case that any practical morality is going to have to consider individual situations, but deontological views will deny that an action is right simply because it produces a specific consequence or outcome. So while consequences and circumstances aren't going to be irrelevant — and so can't be disregarded — it's never going to be the case that we can point to the consequence itself and say that that is what makes the action moral or immoral. Only by appealing to general, set principles will we be able to determine what is morally right and what is morally wrong.
To be fair, most consequentialist theories have a set principle that is used to determine what the right or acceptable consequences are. But in general the distinction between these loose classes of theories is that deontologists define the rules and act accordingly, and those rules aren't adjusted if they produce a seemingly "bad" outcome, whereas for consequentialist views, if the rule seems to be producing bad consequences, it's adjusted accordingly. In short, on deontological views we are supposed to accept the consequences if we have justified the rule, while on consequentialist views, if the consequences don't seem proper, it's far more reasonable for us to change the rule.
Carrier makes this comment about Kantian views in particular:
So-called “deontological” ethics (morals that derive from the nature of the act itself rather than its consequences) were first formally defended by Immanuel Kant, who declared the true moral rule to be that you ought to “act only in accordance with a rule that you can at the same time desire that it become a universal law.” Like the Golden Rule, this also leads to error when applied superficially. For example, some Kantians imagined this ruled out killing in self-defense, but in fact we “can at the same time desire that it become a universal law” that everyone be allowed to kill in self-defense. In fact, we probably would all will that to be the case.
The problem here is that the admonition against self-defense does not directly follow from the first formulation of the Categorical Imperative, but from the second: Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end. If someone is trying to kill me, and I then kill them in self-defense, aren't I really just using their life as a means to an end, the end of preserving my own life? That would surely be treating them as merely a means, and that would therefore be immoral. While most Kantians — and most moral codes, even that of the Stoics — can probably find a way to justify self-defense when someone is deliberately attempting to kill you, the interesting question arises when we talk about "innocent" self-defense (note, I read about this on a set of questions and answers after doing a search, but am not going to bother referencing it; just note that this isn't entirely my own idea). If someone is innocently threatening my life — i.e. they don't intend to, but the consequence of their actions will be my death — am I allowed to kill them to save my own life if that's the only way to do so? That seems to more directly contradict the idea of treating them as ends in themselves, but then again, in most moral systems and even in our moral intuitions this is a case where most people will at least wonder whether doing so is morally correct, so it wouldn't be an error. Asking what we'd want to be the case here isn't going to settle the problem, and in fact shows the futility of interpreting the first formulation the way Carrier does, because we want moral codes to tell us what we should do regardless of what we intuitively want to be the case. Yes, we all might agree that we'd like to be able to kill people, even innocent people, in self-defense, but does that, in and of itself, mean that we should? And if we shouldn't want that, then we should change our wants.
(To be fair to Carrier, he doesn’t apply the rule that simplistically, noting that our wants can indeed be in error. But this just leads to an inherent tension in his view where he wants to appeal to our satisfaction to justify moral precepts while having to introduce all sorts of empirical claims to insist that we are getting our satisfaction wrong.)
And recognizing the second formulation also demonstrates the big incompatibility between Utilitarian views and Kantian ones, because Utilitarianism inherently reduces all people to means to produce the end of maximizing happiness. We are all expected to sacrifice our own ends and even personal happiness to produce the most happiness overall. So while Kantians insist that no one be treated as a means to an end — including ourselves — Utilitarians insist that everyone should be treated as a means to an end, including ourselves. You can try to reconcile them by appealing to the idea that Utilitarians would say that all people should have "maximizing global happiness" as their end, but this would be a shallow reconciliation, one that doesn't actually resolve any of the issues and conflicts between the two systems.
Which, of course, is the precise level at which Carrier reconciles them:
And yet all the merits and problems of Mill’s system are analyzable with Kant’s rule: contrary to Kant, who actually was trying to overturn teleological ethics, we “can at the same time desire that it become a universal law” that we “act so as to produce the greatest happiness for the greatest number.”
So, here, the issue is that you can't universalize the idea that you should always treat people as a means and never as an end, if for no other reason than that you need some end to appeal to. But Utilitarianism doesn't insist on that; it only insists that at times people are treated as means to an end, and that probably is universalizable. So I think Carrier is right here that the first formulation doesn't rule out Utilitarianism. (Of course, Kant derived the second formulation from the idea of our free will, so there still might be a conflict there.) But that you might be able to universalize that principle doesn't make the views interestingly compatible if Kantians reject it, and Carrier's second attempt to reconcile them doesn't work any better:
Similarly, when we look at problems with Mill’s rule (e.g. it can lead to “the ends justifies the means” thinking which can result in causing widespread harm in the name of a “greater good,” an outcome we don’t like and don’t want to be on the harm-receiving end of), the same follows: we do not will that it be a universal law that the ends always justifies the means, therefore we do not in fact will that it be a universal law that we “act so as to produce the greatest happiness for the greatest number,” but that we act in respect of all individuals so far as we are able, because as individuals ourselves that is how we would want to be treated. Sometimes that requires causing harm as the lesser of two evils, but only when there is no third option (and even then it is not desirable but a forced necessity we have no reason to like).
If we don't actually will that it be a universal law that we "act so as to produce the greatest happiness for the greatest number", then we are rejecting the basic Utilitarian principle, at which point we'd be trying to reconcile something that isn't and can't be Utilitarianism with Kantianism, defeating the entire point. Carrier can try to argue that sacrificing one person's interests — or those of a minority — for the happiness of the majority would violate that rule — perhaps because most people won't be happy in a society that advocates for that — but that's, well, pretty much been the focus of most attempts to repair Utilitarianism, and it doesn't solve the problem: Kantians insist that you have to respect the wishes of each individual person, while Utilitarianism's big selling point is that the wishes of each individual person must be subordinated to the happiness of the most people. Even if Carrier's move worked, there'd still be a huge fundamental difference that would have to be resolved. So, no, they don't reduce to each other.
Even at the level of agent-becoming this is the case: Kant might say that in doing a certain act (e.g. murder), we become a certain sort of person (e.g. a murderer), which is a consequence in and of itself that we would not like (were we to honestly admit the fact). But this is teleological thinking, and thus in accordance with Mill: part of the consequences to human happiness are indeed the consequences to the individual of becoming a certain sort of person in their actions (e.g. a mass murderer for “the greater good”), and the consequences to the society of endorsing it (e.g. a society that allows mass murder for “the greater good”), which may in and of themselves damage human happiness and thus run afoul of Mill’s own rule. Thus, no matter how you turn it, Mill and Kant were really just saying the same thing, and really just trying to explore the implications of the Golden Rule from different perspectives, neither in themselves complete.
Except that Kant would certainly not draw the conclusion that committing murder is wrong because it would make us into something we don't like being (a murderer), which would, presumably, make us less happy. Kant criticized the Stoics for being too focused on happiness (in his objection to Eudaimonic theories). And Mill wouldn't argue that if you had to sacrifice the minority to save the lives of the majority you would therefore be a "mass murderer", because at least morally you wouldn't be committing murder. So Carrier's argument that they are saying the same thing relies on contorting the views to say things that they aren't saying and would explicitly deny. If you're willing to ignore what the views actually say, you can make them say anything, but that doesn't mean that you've discovered something meaningful about them.
Carrier next turns to Virtue Theory:
The latter derives from the first logic-and-science-based moral philosopher whose work substantially survives for us to read it in its entirety: Aristotle. Aristotle would say that we ought to act in accord with those virtues of character the pursuit of which will lead to a life of personal satisfaction and fulfillment, and not in accord with those vices that will lead to a life of discontentment and dissatisfaction. Doing so will in turn produce a harmonious society, and we will enjoy the company and community of those behaving the same way.
This is again the same thing, only now we are looking not at rules of behavior but at their motivating dispositions (which David Hume would likewise focus on). This is therefore one more level deeper in analysis.
Putting aside the fact that Aristotle himself likely wouldn't justify the virtues simply on the basis of a harmonious society, there is reason to think that Virtue Theories and deontological/consequentialist theories don't have to be mutually exclusive. The big issue Virtue Theories have to address is defining the virtues and vices, and it's relatively easy to use another moral theory to define those and then use the Virtue Theory to determine what kind of person a virtuous person has to be (I take this approach in combining Kant and the Stoics). So this one isn't necessarily ridiculous. However, there are definitely certain deontological and consequentialist theories that are incompatible with Virtue Theory in general and with certain specific Virtue Theories, so they can't all be reduced to each other, and this doesn't seem to get us very far.
Carrier then returns to his pet theory, hypothetical imperatives espoused by Philippa Foot. This is an issue for his attempts to reduce all moral theories to each other since Kant explicitly rejects hypothetical imperatives as being capable of producing morality, but that’s not important here. What is important is the ultimate end that Foot aims at:
Foot concluded that we ought to act in accordance with true facts of the world in such a way as to maximize our ability to love our life and get along with other people. Morality is therefore a system of hypothetical imperatives, aiming at the most efficient achievement of an over-arching goal, which is a fulfilling life within a well-functioning social system.
The thing is that at least Kant and the Stoics would deny that morality is determined by what will satisfy us, instead insisting that we must be satisfied by what is moral. Thus, if we find ourselves not being satisfied by acting morally, we need to change ourselves so that we are indeed satisfied by that. Foot's views — and certainly Carrier's derivations of them — have this the opposite way around: if we find ourselves not being satisfied by morality, then there's something wrong with our view of morality. Oh, sure, Carrier also appeals to us not simply being aware of the proper "facts", but those definitely include moral facts. Thus, it is possible on their view that we need to change our view of morality if it isn't making us happy, which both Kant and the Stoics deny. That's a pretty fundamental disagreement to resolve, and Carrier doesn't, in fact, resolve it.
He also says this about the view:
Foot, in my opinion, is the only philosopher who saw the forrest for the trees, and produced the most correct and usable analysis of moral reasoning and its proper roots and motivations.
Of course he'd say that, because it allows him to pursue his own satisfaction and never really have to choose between making himself happy and acting morally. But that's not necessarily a good thing, as Enlightened Egoism not only allows for the same thing, but is explicitly the same thing, and I suspect that Carrier is not, at least openly, as sympathetic to that view as he is to the others. However, his derivation of Foot seems, to me, to pretty much be Enlightened Egoism. And this follows on in his discussion of "risk":
We might now say this in terms of risk theory: the probability of that outcome is greater on that behavior than on any alternative behavior, such that even if the outcome is not guaranteed, it is still only rational to engage the behavior that will have the greatest likelihood of the desired outcome. By analogy with vaccines that have an adverse reaction rate: when the probability of an adverse reaction is thousands of times less than the probability of contracting the disease being vaccinated against, it is not rational to complain that, when you suffer an adverse reaction from that vaccine, being vaccinated was the incorrect decision. To the contrary, it remained the best decision at the time, because the probability of a worse outcome was greater at the time for a decision not to be vaccinated. Analogously, that some evil people prosper is not a valid argument for following their approach, since for every such person attempting that, thousands will be ground under in misery, and only scant few will roll the lucky dice. It is not rational to gamble on an outcome thousands to one against, when failure entails misery, and by an easy difference in behavioral disposition you can ensure a sufficiently satisfying outcome with odds thousands to one in favor—as then misery is thousands to one against rather than thousands to one in favor. This is also why pointing to good people ending in misery is not a valid argument against being good.
The problem here is that Carrier has no grounds for talking about "evil" people, or even "good" people, outside of his moral system. So if someone is pursuing their own satisfaction, determines that an act Carrier would call "evil" will definitely increase that satisfaction, and is right about that, then Carrier has no grounds on which to claim that what they did was actually evil. He talks about risk here, but it is certainly possible to think of cases where the risk that being "good" will result in less satisfaction is higher than the risk that being "evil" will. In fact, almost all moral dilemmas are built around such cases, where I will be objectively worse off if I act morally, but the action is clearly the "good" thing to do. And I return to the example of Russell from the first episode of "Angel", where he says that he follows the rules and pays his taxes, and in return he gets to do whatever he wants. Surely the risk of Angel coming along and being willing to murder him in broad daylight was lower than the probability that he'd be able to keep on being "evil" and thus pursue his interests indefinitely. How does Carrier refute that case? Even society isn't inordinately harmed by that, and an "evil" society can obviously benefit the people who are in a position to exploit it, which means that they have the power to impose it on others. As long as the society is beneficial enough to most people to forestall revolution, on what grounds can Carrier insist that these "evil" people are assessing their own interests incorrectly?
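To make the risk framing concrete, here is a minimal sketch of the kind of expected-outcome comparison Carrier is gesturing at with the vaccine analogy. Every probability and payoff below is invented purely for illustration (including the numbers I attach to the Russell scenario); the point is only that the very same calculation delivers the opposite verdict once you change the assumed numbers, which is exactly what's at issue.

```python
# Toy expected-value comparison in the spirit of Carrier's risk argument.
# All probabilities and payoffs are made up for illustration only.

def expected_satisfaction(p_success, payoff_success, payoff_failure):
    """Expected payoff of a strategy that pays off with probability p_success."""
    return p_success * payoff_success + (1 - p_success) * payoff_failure

# Carrier's picture: "evil" almost never pays off, so being "good" wins.
evil_on_carriers_numbers = expected_satisfaction(0.001, 100, -50)   # about -49.9
good_on_carriers_numbers = expected_satisfaction(0.999, 60, -10)    # about 59.9

# A Russell-style picture: a well-placed exploiter who rarely faces
# retaliation. Same formula, different assumptions, opposite verdict.
evil_on_russells_numbers = expected_satisfaction(0.95, 100, -50)    # about 92.5
good_on_russells_numbers = expected_satisfaction(0.999, 60, -10)    # about 59.9

print(good_on_carriers_numbers > evil_on_carriers_numbers)  # True
print(good_on_russells_numbers > evil_on_russells_numbers)  # False
```

The arithmetic itself does no moral work: whether "good" or "evil" comes out ahead depends entirely on the probabilities and payoffs you feed in, and it is precisely those assumptions that the Russell case challenges.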
Carrier talks a bit about it later, but the general refutation of his "better society" idea is that, on this kind of view, the ideal is for everyone to hold those pro-social attitudes and act on them except when they can get away with not doing so. Even if in practice this produces the same society — almost everyone acts pro-socially almost all the time because they don't want to take the risk — there's a difference between acting pro-socially when you perceive that it will benefit you and being a pro-social person. The former is Enlightened Egoism, and is the view Carrier holds. The latter is, at least, what Virtue Theories would insist on.
Carrier then insists that tit-for-tat and the Golden Rule are not incompatible:
Notably, contrary to some analyses, tit-for-tat does not actually contradict the Golden Rule, but actually correctly realizes it, in the same ways I noted above. To satisfy the Golden Rule, as much and what kind of mercy and forgiveness we would want, must correspond to the amount and kind we give to others. And when we think that through, we'll realize we would not really want everyone to let us exploit them without retaliation. Because that means letting them do the same to us. And the world that would result would be disastrous for us. We would not will that to be a universal law.
The problem here, again, is that they start from two completely different principles. The Golden Rule is a guide for behaviour, and is optimistic in that its application actually presumes that everyone ought to — and so, hopefully, will — follow it. Tit-for-tat assumes that people will, in fact, treat others badly if given the chance, and so enacts harsh penalties on people who do so. We can see this by looking at the Prisoner's Dilemma. The Golden Rule would engender trust, and so we wouldn't betray our partner because we wouldn't want them to betray us. Thus, there would never be any reason for anyone to betray, and in an ideal Golden Rule world betrayal in those circumstances would be inconceivable. However, with the tit-for-tat approach, all that would stop someone from betraying would be knowing that the other person will retaliate when they can. We would always be willing to assume that they would betray if they felt the retaliation wasn't going to be a strong enough deterrent. And, in line with his comment above, we'd always want it to be the case that others follow the rules while we don't, but we'd know that much of the time we couldn't get away with it. But if we could get away with it, due to imbalances of power or secrecy, we would. And yes, we'd know or expect others to do that, too. At the end of it all, we might end up with precisely the same sorts of societies — as no one would feel that they can get away with those sorts of actions — but the Golden Rule world is a world of trust, and the tit-for-tat world is a world that necessarily runs on constant vigilance and distrust. So, no, tit-for-tat does not "correctly" realize what the Golden Rule asks of us.
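Since the Prisoner's Dilemma is doing the work here, a small simulation may make the point sharper. This is a minimal sketch using the conventional payoff values (which I am simply assuming for illustration): against cooperators, a tit-for-tat agent and an unconditional "Golden Rule" cooperator are outwardly indistinguishable, but against a defector the two dispositions immediately come apart.

```python
# Minimal iterated Prisoner's Dilemma sketch. The payoff matrix uses the
# conventional values (T=5, R=3, P=1, S=0), chosen only for illustration.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def golden_rule(my_moves, their_moves):
    # Always cooperates: treats the other as it wants to be treated.
    return "C"

def tit_for_tat(my_moves, their_moves):
    # Cooperates first, then simply mirrors the other player's last move.
    return "C" if not their_moves else their_moves[-1]

def always_defect(my_moves, their_moves):
    # Exploits whenever possible.
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(moves_a, moves_b)
        b = strategy_b(moves_b, moves_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        moves_a.append(a); moves_b.append(b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

# Against a cooperator, the two "worlds" look identical from the outside...
print(play(golden_rule, golden_rule))    # (30, 30)
print(play(tit_for_tat, golden_rule))    # (30, 30)

# ...but against a defector the dispositions come apart.
print(play(golden_rule, always_defect))  # (0, 50): exploited without retaliation
print(play(tit_for_tat, always_defect))  # (9, 14): punishes from round two onward
```

Same outward behaviour in the purely cooperative case, entirely different reasons for it: one world runs on trust, the other on the credible threat of retaliation, which is the distinction the identical-looking societies would mask.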
And, finally, let’s talk about villainy:
In the end, everyone sane enough to understand the matter wants to be the hero and not the villain in the world. Are you the sort of person you like, or the sort of person that in fact you loathe? The truth, once honestly realized rather than delusionally hiding from, may lead you to loathe rather than like yourself. And no life satisfaction can then be possible. That can only be solved in either of two ways: changing the sort of person you are (which requires a lot of continuous work, contemplation, habituation, and practice); or lying to yourself about the sort of person you actually are. But can you be comfortable knowing that maybe you are the one living a lie? That really, you are an awful person, whom even you would hate?
But what, to Carrier, counts as a villain here? How do I determine what the right sort of person to be is, if the only basis Carrier gives for determining that is my own satisfaction, and that's exactly what I'm questioning here? If I'm happy doing "evil" things and someone calls me evil for doing them, am I necessarily wrong? Yes, if I'm unhappy then I have reason to ask whether I should change, but not if I'm happy.
And this ignores the big thing about heroes: heroes are heroes not because they are happy, but because they are happy being good. A hero can sacrifice everything that generally gives our lives satisfaction and still, in the end, at least be content because they did it in the service of doing good. Carrier's view defines good as being happy, and so at best is circular and at worst would treat the "hero" as someone misguided for sacrificing what gives our lives satisfaction. On that view, the person who gives up their greatest dream for someone else is not a good person, not the hero, but someone who has done the wrong thing. Maybe.
Carrier can't escape this by appealing to empirical data, because he has to settle what criterion is used for good before he can determine what the empirical data means. If he sticks to the simple idea of satisfaction — as he usually does when citing empirical data — then these problems arise. And if he tries to expand it — as he usually does when talking about "evil" and "good" people — then he needs a criterion for what is really satisfying, even if most people don't consider it such. Either way, villainy and heroism are far more complicated for Carrier than he will admit.
And, ultimately, that’s my comment on Carrier and morality: morality is far more complicated than Carrier will admit or understands.
Persona 3 Replay …
April 25, 2018
So, after finishing Blue Reflection and getting a bit burned out on Dragon Age: Origins (I was in the Deep Roads and facing all the traps to get to the Anvil), I decided to try replaying Persona 3 and Persona 4 again, which I've wanted to do for ages now. I started trying to get my PSP version going so that I could play with the female protagonist again, but the battery seemed to have blistered a bit, meaning that I couldn't play it, so I dug up my PS2 version of FES and started playing it again. And I was glad I did.
It's amazing how well Persona 3 holds up. While the dungeons are still a bit boring and grindy — especially playing an NG+ with a level 99 protagonist — the social links are incredible. One of the things that Persona 3 does that other games don't seem to manage is to set up new S-links directly from interactions inside existing S-links. While Blue Reflection had you find people to recruit as allies from missions given to you by your other allies, what it didn't really do was just have you meet them that way, which Persona 3 does constantly. You meet Yuko after joining the Kendo team — she's the manager — as part of Kaz's S-link. Maya in the MMO S-link complains about a man at the mall who is the Devil S-link. Bebe returns the wallet of one of the bookstore owners, which is how you meet him. Later, Mako actually has you meet the dying person S-link. And so on. This really makes the world seem interconnected, and there are numerous cases where you know your S-links but others know them as well, and not just through you.
And the S-links are both numerous and deep. Everyone has issues to address that carry on throughout the entire link, interspersed with lighter material and moments. While the problems are important and things you need to resolve, they aren't all there is to the characters either. Despite the interactions being a bit primitive — but no more so than in any other game — the S-links are fun and you really do feel like you're getting to know the other person. And even Persona 3 had the system that they perfected in Persona 5, where your S-links will let you know when they are available to be talked to and to have their links advance, with your school links approaching you at lunch time to ask if you want to hang out after school. In Persona 5 this is done through texts and applies to all of your S-links, but the Persona 3 system is still a better start than pretty much anything else I've seen.
At first, the graphics turned me off, but that wasn't because of the graphics style, but because the image seemed blurry and even a bit distorted at times, far more so than I remember. I think this is because I was used to playing it on an Amiga monitor, which is a lot sharper than the composite input of my inexpensive HD TV. Putting that aside, the graphics style is more than good enough to make the game enjoyable, and so graphics at that level on an HD system would probably be sufficient if someone wanted to put together a good Persona-style game.
Overall, it's still a pretty good game, and better than the Persona-style games out there that I've played. And it's a twelve-year-old game. It holds up really well.