So as we’ve already seen, Ophelia Benson is taking on the idea that we should take a rational and not an emotional approach towards our decisions, and particularly towards moral decisions. Unfortunately, most of the posts from her and from others don’t really seem to have a central thesis to them; they seem to be based on a strawman view of logic that says “No emotions at all” and so essentially say “Well, we don’t want to get rid of emotions completely” for various reasons like “We can’t”, “We need them”, or “Pure reason will make you a monster”.
Benson semi-summarizes her thoughts on the issue in an article on “The Freethinker” entitled “Working Together”, which presumably puts forward the thesis that emotion and reason need to, well, work together. Unfortunately, as with most of the posts on this subject, just how they’re supposed to work together and what role each is to take isn’t well-defined and certainly isn’t well-argued. As I pointed out in my posts, there aren’t too many “Vulcanites” (i.e. people who insist on dispassionate reason) who would deny that emotional states can be relevant to an argument. However, they’d insist that when and whether emotional states are relevant to an argument is itself something to be rationally determined. So, no, just because people feel things doesn’t mean that their feelings are relevant, but for certain arguments their feelings may indeed be relevant as facts about the world that have to be considered, in much the same way as the acceleration due to gravity needs to be considered. The acceleration due to gravity isn’t relevant to a discussion about what I want to eat for dinner tonight (for almost all people), but it would be relevant to my deciding whether I can jump across to that rooftop to escape an oncoming fire, or whether I should wait here for rescue.
So when Vulcanites oppose emotion, it isn’t opposing emotions as states in the world. Instead, it’s opposing emotions as a combination of two things: a) a judgement about the world and about what the appropriate action to take in response to it is, and b) a motivation to make and commit to that judgement and that action. If emotions are going to both judge the world and urge us to take immediate actions on the basis of that judgement, well, they’d better be right … which means that they’d better be in accordance with what a fully rational and unbiased assessment with all the available facts would judge to be the case, and with what it would judge to be the appropriate action. And the fact is that most of the time, especially for very strong emotions, they aren’t. When they’re right, they’re only right because a rational assessment would have come to the same conclusion, and when they’re wrong we know that because a rational assessment based on what we knew at the time says so. Add in that we’ll likely have acted without thinking if we rely on them, and that emotional commitments can last longer than the initial feeling, and there are lots of good reasons to distrust emotion and work to minimize its role in our decision-making.
So one counter to this is the idea that you can’t have any kind of reasonable reasoning without emotion, which Benson brings up early in her article:
For one thing, at the most basic level, it’s now understood that damage to parts of the brain responsible for emotion doesn’t result in a hyper-rational person but a dithering useless mess. Cognitive science is demonstrating that emotion is not the antithesis of rationality but a necessary part of it.
Now, I’m not totally up-to-date on the very latest work on emotion and reason (I’ve been out of coursework for a year or two due to work and life pressures), but as someone who is Stoic-leaning I’ve certainly paid attention to a lot of it, and it hasn’t actually demonstrated that yet. The most commonly cited evidence I’ve come across is Antonio Damasio’s work with people who have the right sort of damage, but his examples aren’t convincing. One of the major ones is a card game that he set up with those who had this damage and those who didn’t. Essentially, there are a number of decks that give out various positive and negative amounts of money, and the goal is to have the most money when the game ends. One deck in particular has very large rewards, but also very large and frequent penalties. Pretty much everyone, at the start, sampled all the decks and learned what they had. The people with undamaged emotional centres tended to avoid the high-risk deck, while those with the damage tended to go back to it frequently. This led to the people with the damage having a bad time of it in the game, often having to borrow money just to stay in it, while those who didn’t have the damage fared much better. On top of all of this, Damasio measured a skin conductance reaction when those without the damage considered the high-risk deck, a reaction that was missing in those with the damage. This indicates an emotional reaction, likely an aversive one, that steered those without the damage away from the irrational and unsuccessful high-risk deck, while those with the damage, lacking that signal, continued to go to it.
The other example is simpler, and is likely what Benson is thinking of when she mentions a “dithering useless mess” above. Damasio asked a patient with this kind of damage which of two days would work better for his next appointment. The patient spent a lot of time thinking and reasoning about it, and Damasio, curious, let him work it out. He spent a long, long time dithering between the two options, until Damasio finally interrupted him and decided for him … at which point the patient seemed completely satisfied and went on with his life.
So why aren’t these good examples of how we need emotion? Well, for one thing, in the card game example it isn’t clear that those with the damage were actually acting irrationally. The game was not set up as a game where you are given a certain amount of time or a certain number of turns in which to maximize your winnings, but instead as a game that could end at any time. Sure, over the long term choosing the lower-risk decks will obviously leave you further ahead, but if the game is going to end right this very minute and you need your total to be as high as possible, and not just positive, then you really ought to take the high-risk deck and hope it works out. Think of it like pulling the goalie at the end of a hockey game when you’re down by a goal: you greatly increase the risk that you’ll be scored on, but you’re going to lose if you don’t, so you might as well. Now put yourself in a situation where the referee is going to end the game totally at random and, well, you can see that being rational might indeed make you pull the goalie as soon as someone scores a goal on you. The rational move is not the one that always works out, but the best one given the situation you find yourself in.
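This point can be made concrete with a quick simulation. Everything below is a made-up illustration, not Damasio’s actual experimental setup: the payoffs, the 40% penalty rate, and the random-ending rule are all assumptions. The sketch shows that a “safe” deck with a better expected value per draw can still be the worse choice once what matters is hitting a high target at the moment the game randomly ends.

```python
import random

def play_once(strategy, target, p_end=0.2):
    """Play one game: after every draw, the game ends with probability p_end.
    The 'safe' deck pays a steady +25 per draw; the 'risky' deck pays +100
    but carries a 40% chance of a -250 penalty, so its expected value per
    draw is 0, strictly worse than safe. (Illustrative numbers only, not
    Damasio's actual figures.) Returns True if the final balance reaches
    `target` when the game ends."""
    balance = 0
    while True:
        if strategy == "safe":
            balance += 25
        else:  # "risky"
            balance += 100
            if random.random() < 0.4:
                balance -= 250
        if random.random() < p_end:
            return balance >= target

def success_rate(strategy, target, trials=20000):
    """Estimate the probability of hitting the target under a strategy."""
    return sum(play_once(strategy, target) for _ in range(trials)) / trials

random.seed(42)
for target in (50, 600):
    print(target,
          "safe:", success_rate("safe", target),
          "risky:", success_rate("risky", target))
```

For a modest target the safe deck wins comfortably, but for an ambitious one the risky deck is the better bet, just as pulling the goalie is: which choice is rational depends on the stakes and the stopping rule, not on the long-run average alone.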
Another thing is that these were people who were not trained to act rationally, and who instead learned to act based on the mix of reason and emotion that our society runs on. There are things that reason has a harder time doing than emotion does, and we all have built-in emotional ways to get around them. In the appointment example, reason is indeed going to have a hard time deciding between two choices that seem equal overall but have different benefits and detriments. Think of the story of the donkey standing an equal distance from two bales of feed, where one is hay and one is oats; the hay is more filling, but the oats taste better. How does reason decide when both are equally desirable? Well, reason should ultimately decide that either choice works out equally well, and that you should just pick one. And reason can do this, by, say, having the person note the time spent deliberating, decide whether that much time is efficient, and if not simply pick one at random. However, most of us don’t need to do this, because what we have is an emotional state, likely embarrassment at taking up so much of someone’s time, that kicks in and makes us pick one. Which works out really, really badly if the decision actually matters, since we’d then have to overcome the emotion to keep thinking about it, even at the cost of inconveniencing the other person. So the only reason those patients are a mess there is that we never taught them how to replace their emotional coping mechanism with a rational one, and the emotional coping mechanism might actually, well, screw things up. So, no, this is not reason to think that we really need emotion after all, as the emotional component might itself be acting irrationally, and we ought to be able to replace what the emotion gives us with reason if we try hard enough.
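The rational tie-breaking rule described above can be sketched in a few lines. Everything here is a made-up illustration (the option names, the scoring function, and the deliberation budget are all assumptions, not anyone’s actual procedure):

```python
import random

def decide(options, score, time_spent, time_budget):
    """Pick the best-scoring option outright; when options tie, keep
    deliberating only while it is still efficient to do so, then commit
    to one at random, since either works out equally well.
    A toy sketch of the rational replacement for the emotional
    'just pick one' nudge; all names and the budget test are illustrative."""
    best = max(score(o) for o in options)
    tied = [o for o in options if score(o) == best]
    if len(tied) == 1:
        return tied[0]                # a clear winner: no dithering needed
    if time_spent >= time_budget:
        return random.choice(tied)    # tie, budget spent: just pick one
    return None                       # tie, budget left: think some more
```

For example, `decide(["Tuesday", "Thursday"], lambda o: 1, time_spent=10, time_budget=5)` commits to one of the two equally good days at random instead of dithering forever, which is exactly what Damasio’s patient couldn’t do.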
Benson goes on:
But more than that, for the purposes of thinking about human-related subjects – moral, political, social – it’s not rational to exclude emotion from the discussion, because humans are emotional. If you try to talk about human affairs in the terms suitable for talking about machines or blueprints or chemistry, you will get a train wreck.
I don’t mean that people arguing or writing articles about moral or social issues should be in a heightened emotional state themselves; I mean they should not pretend the subject is a matter of pure logic or number-crunching or engineering.
I … I really don’t know what she means here. Does she just mean the “emotional states are useful facts” point I made above? If so, then I agree, but then we might not get a train wreck, and might actually be right. If she’s taking this further, as it seems, and arguing that the conclusions are going to be something other than what logic or number-crunching produces … well, then, there’s a problem here. Even the moral system that’s most likely to both be right and take emotional states into consideration, Utilitarianism, uses emotional states only as inputs to calculating the utility number that determines what’s right. And more reason-based views like Stoicism and Kantianism wouldn’t go even that far. About the only views that go further are emotivist ones like that of, say, Jesse Prinz … which also tend towards subjectivism about morality, which isn’t all that great either. So much more argumentation would be needed to support the stronger view, and the weaker view is one that even Vulcanites can hold, because Vulcanites can indeed be Utilitarians: the needs of the many outweighing the needs of the few, or the one.
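The “only as inputs to a calculation” point can be made concrete with a toy sketch. The names and numbers below are made up, and this is not any particular formal utilitarian system: the emotional states enter only as quantities to be summed, and the verdict is pure arithmetic.

```python
def total_utility(outcome, people):
    """Sum each person's reported satisfaction with an outcome.
    The feelings are the inputs; the verdict is the arithmetic.
    A toy sketch, not any particular formal utilitarian calculus."""
    return sum(satisfaction[outcome] for satisfaction in people)

# Made-up numbers: three people mildly prefer A, one strongly prefers B.
people = [
    {"A": 2, "B": 1},
    {"A": 2, "B": 1},
    {"A": 2, "B": 1},
    {"A": 1, "B": 3},
]
best = max(("A", "B"), key=lambda o: total_utility(o, people))
```

Here the needs of the many win by addition: A totals 7 against B’s 6, even though one person feels much more strongly about B.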
Above all, what we should not do is claim that our argument is Pure Reason while that of our opponent is nothing but emotion. It won’t work, for a start, and it’s not likely to be true, and it’s toe-curlingly arrogant. It helps to remember that we all have enormous built-in cognitive flaws, and that it’s never safe to assume we’ve managed to correct or avoid all of them at any given time.
Fair enough; take the log out of your own eye before removing the splinter from your neighbour’s, and all that. Sure, we agree on that. That being said, of course, pointing out (with facts, evidence, and reasoning) that an argument is merely emotional is still okay; we don’t want to leave people immune to the charge that they’re making an appeal to emotion when they should be making an appeal to reason, right?
It’s here that Benson starts to make a claim about where feelings come into morality:
Morality is rooted in feelings – we want some things and want to avoid other things.
Except that “wants” aren’t “feelings”; wants are desires, and desires can be rationally assessed. In fact, we all should strive to have rational desires, and not irrational ones. While we might not be able to criticize all of someone else’s desires, it’s certainly true that we can call some people’s desires “irrational”, if for no other reason than that they’re inconsistent: wanting world peace and the ability to conquer other countries through war, for example. When we can’t criticize other people’s desires, it’s not because we can’t apply reason to them, but because they are subjective; there are desires I have just because I have them, and as they don’t lead to contradiction they’re mine and mine alone. For example, that I might want to watch wrestling and not want to watch a documentary on the Etruscans doesn’t make my desire irrational or something you can criticize just because you want the opposite. As long as I have certain desires, and those desires don’t contradict more basic desires, you can’t say that it’s wrong of me to want something that you don’t want. In fact, one of the problems with Mill’s repair work on Utilitarianism is that by introducing the concept of quality of pleasure he starts ranking desires … even those that are, indeed, just subjective.
So where does morality come in here? Well, to me, one of the basic desires that all moral agents have is the desire to act morally, and what makes moral agents moral agents is the ability to put that desire ahead of all others. This is why I’d argue that animals can’t be moral, no matter how moral they act, because they have never demonstrated the ability to take an action because it is the moral one as opposed to being the action that they want to take that happens to be moral by our assessment of morality. So, no, morality is not rooted in feelings, as the only relevant desire is the desire to act morally, a desire that trumps all other considerations.
The goal can’t be to strip emotion out of our thinking on these subjects, but only to channel it in the right ways. That requires both reason and feeling – and as Hume pointed out, feeling has priority.
We speak not strictly and philosophically when we talk of the combat of passion and of reason. Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them – A Treatise of Human Nature, 2.3.3.4
So, here we get to something that looks like a position on how reason and emotion should interact, with reason the slave of passion or emotion or, tying back to the previous quote, desire. Except that I’d argue that the opposite is true, and that all of these things not only should, but must, be the slave of reason. Channeling emotion and desire in the right ways just means making our emotion and desire, above all, rational. Not necessarily objective, but at the very least rational. When I get angry, it had better only be when it is rational to be angry. When I feel depressed, it had better only be when it is rational to be depressed. When I am happy, it had better only be when it is rational to be happy. When I fall in love, it had better only be when it is rational to be in love. When I act on any of these emotions, it had better be because the action is rational, and not just because the emotion says so. And all of my desires had better be rational, and all of my actions taken on the basis of my desires had better be the rational ones given all of my beliefs about the world and all of the desires I hold. And I had better always put the desire to be moral ahead of every other desire I hold. To do anything else is to give up the goal of taking the actions that are right and are based on the way the world really is, and to give up being a moral person … and I’m sure that Benson wants to take right actions based on the way the world really is, and to be a moral person.
Sure, what I just said is hard. Very hard. In fact, as far as we know, no one in recorded history has actually managed to do it. But it is what we should strive for, and Benson’s view of emotion and reason working together seems to be striving for the opposite … and I cannot see what good that can possibly do.