The (New) Atheist Morality

I was pondering commenting on the abortion debate since that’s come up again, but I won’t have time this week to do that, so I decided to talk about something else that occurred to me during my morning walk instead.  I was thinking about the discussions of morality that I see coming from at least the most public atheists, and noted that their views tend to fall into two categories.  The first is a rough Utilitarianism that also relies on a bit of an Egoistic approach, like that of Adam Lee or Richard Carrier.  The second is a moral relativism that denies that morality is objective and has objective meaning and yet still wants to be able to criticize people for their immoral acts, desires and thoughts, like that of Bob Seidensticker, Jonathan MS Pearce, and definitely Coel.  The first interesting thing about how they all pretty much end up in one of these two views is that there are a number of secular views that don’t end up in either category, like Kant’s, or the Stoics’, or, well, most of the views studied in philosophy.  The other interesting thing is that both views end up with a fatal flaw that I’ve talked about before when discussing their specific views but never really highlighted as a general problem (because I didn’t realize that the views were as general as they seem to be).

Let me start with the rough Utilitarian view.  Most of them — and, to be fair, many people who study moral philosophy — find the Utilitarian idea pretty reasonable and convincing (I think that even I found it interesting when I first encountered it, although I quickly found flaws in it), and so think that the idea of maximizing global well-being pretty much captures what it means to be moral.  However, they quickly run into the problem that faces most forms of morality: how to handle the person who says that they don’t care about that and in fact only care about maximizing their own well-being.  This is where the Egoism comes in.  They argue — not unreasonably — that for the most part acting in ways that increase everyone’s well-being will also increase that person’s well-being, usually by appealing to a modified Golden Rule: even if you could get away with it, you wouldn’t want a world where everyone did that and it was expected.  They usually add an argument from Hobbes that if everyone did it, people would start to fear that it would happen to them, and so would take protective measures that would cause society to break down and make co-operation too difficult.  So ultimately the idea is that everyone wants to co-operate with each other because of the benefits of that co-operation and because they need to co-operate with others to maximize their own well-being.  Thus, we are justified in co-operating with others because in the long run doing so works out better for us.

But this merging with Egoism ultimately undermines their entire project, because while they want to justify co-operation and so some sort of altruism, as we’ve just seen the underlying justification is entirely Egoistic.  We are only justified in being “altruistic” because in the long run it benefits us the most to do that, which quite strains the notion of altruism.  Worse than that, though, is that it cannot survive the reasoning of people like Tarquin from “The Order of the Stick” or Russell from “Angel”, who argue that in their specific and particular cases they can at least bend the rules of co-operation because they can get away with it and what satisfies them most justifies the risk.  Tarquin runs evil kingdoms behind the scenes with a blunt ruthlessness, but in a way that allows him to survive the inevitable falls of those kingdoms.  Thus, he gets to be the power behind the throne and enjoy the fruits of being that — and of being evil, since he is — without taking on the risks of being the most visible evil out there.  He can do that because he co-operates with other evil people who want similar things, and they all work together to manipulate events toward that end, providing them all with the life they want at minimal risk.  He admits that ultimately, in that universe, some heroic party is going to come along and kill him, but he gets a lifetime of the things he wants, and it’s only the very end that really sucks, which he can live with.  He couldn’t achieve those things in a good kingdom, and as it turns out the kingdoms in that area are so unstable that a good kingdom couldn’t survive anyway.  So under Egoistic Utilitarian reasoning, what he’s doing isn’t preventing the creation of a society that could provide him greater security, and it gets him what he wants, and so it really seems that he is justified in saying that these evil actions actually end up providing him with the life he really wants.

Russell, for his part, is wealthy and powerful, can hire a lot of people to ensure he gets what he wants, and can cover up his actual crimes, and so, as he notes, as long as he follows the big rules — like paying his taxes — he can do whatever he wants.  Arguably, if everyone did what he does it would be a fearful and chaotic society, but most people can’t do what he does, and he spends effort hiding it to avoid any societal consequences.  So he gets to do the evil things he wants to do with little risk, as long as he is careful not to do them too often or to the wrong people; and in this model, merely having to think about and plan out your actions can’t be used as an argument against doing them.  So it does seem like this model allows him to do some of the “immoral” things the system was designed to prevent people from doing, while not facing any major consequences for doing them.

Now, people like Richard Carrier could insist that those things they really want to do are inferior things to want, and might appeal to the fact that they have to work around the potential downsides to justify that:  they shouldn’t have to do that much work to avoid bothering people and bringing down society for something that’s really worth wanting.  However, this would force them to come up with an objective justification that isn’t just what someone wants, which is the very problem they were hoping this system would avoid in the first place.  More damning is that there is, in fact, no way for them to avoid this consequence, because Tarquin and Russell are using the correct reasoning; the only quibbles could be over whether they are right about their specific cases.  Once “altruism” is justified on the basis of personal benefit, then if someone could ever achieve a real benefit from screwing over other people, they are not only allowed to do that under this model but would likely be obligated to do it.  Otherwise, they would be sacrificing their personal interests, which is irrational under this model.  That its defenders can argue that in most cases everyone’s personal interests really are aligned with altruism and co-operation does not save it from the philosophical objection that whenever the two come apart, their system says we should abandon altruism and co-operation in favour of our personal interests, while our basic idea of morality says it should be the other way around.  So we can ask whether, at the end of the day, in trying to find a morality they could use to appeal to people concerned with their own personal interest above all else, they ended up producing something that isn’t a morality at all.

For the relativist case, they walk themselves into a contradiction when they insist that there is no objective morality but still want to make statements about what is or isn’t moral that they expect others to take seriously.  I have often commented that if morality is relative then they can’t criticize others for being immoral, only to have them reply that they just did so, so of course it’s possible.  The issue, though, is that they can express their views on morality all they want, but if morality is relative no one has any reason to care about their moral pronouncements.  In general, the reply to this has been to treat it as a challenge about motivation or authority, and so to appeal to consequences or laws to impose that on them, but the real objection is that they can pronounce what is or isn’t moral all they want, yet if we realize that morality is relative and subjective, their saying so can in no way be convincing if we don’t agree with them.  If someone insists, say, that AC/DC is a terrible band, then because music appreciation is subjective I don’t have to agree with them, and of course I have no reason to adopt their opinion myself.  While there are some objective considerations one can make with subjective things like that — you can talk about the technical skill of the playing, for example — that’s not enough to get us to the blanket condemnations that moral pronouncements tend to be.  And if morality is relative and subjective, they cut themselves off from all forms of actual moral reasoning, because that would only apply if there were an objective answer to moral questions, which is precisely what moral relativism denies.  So they can express their moral views all they want, but no one who doesn’t already agree with them has any reason to be at all concerned about that, or to wonder if they need to update their own moral opinions in response.
Such pronouncements, then, should be met with a shrug, not with the introspective reaction they expect from those they level them against.  The complaint is not that we need some kind of motivation; it’s that they make morality a mere matter of opinion, and outside of totally invalid impositions of opinion on others, no one really needs to care about their opinion.

So those are the two main categories of atheist moral thought that I have encountered, and the two fatal problems that I see with them.  Given the vast array of moral theories that we have access to, one would think that they could have seen these problems coming … and likely picked better theories to attach themselves to.

2 Responses to “The (New) Atheist Morality”

  1. Hector Muñoz Huerta Says:

    Yes, of course, most relativist postures are only ways to hide from actual argumentation. The utilitarian view, it seems to me, is the basic moral stance of nature (accepting that individuals will always do what is convenient for them).

    As we are the most complex beings in the known universe, we just can’t expect to be able to measure ourselves only by that basic natural moral stance.

    • verbosestoic Says:

      Utilitarianism itself doesn’t accept that individuals will do what is convenient for them, as its basic principle is that everyone should always do what’s best for everyone, even if it means sacrificing their own interests to do so. EGOISM is the view that accepts that we will do what is convenient for us. It’s their use of Egoism to justify their loosely Utilitarian view that turns it into an Egoism that ultimately means you should indeed always do whatever benefits you the most, noting that most of the time not shafting others will, at least in the long run, achieve that.
