A long time ago, Richard Carrier made a post defending Sam Harris and a scientific notion of morality. There is a debate between him and Coel in the comments, and I tried to leave a comment there, but the whole “possible imposter” thing means that I can’t post from my WordPress account, and I don’t feel like trying to figure out what that one-time FTB account is right now, so I’m going to post it here instead:
That’s not arbitrary if you define “moral” as “true imperative.” The only other option is to define moral as “false imperative.” So why do you want to define morality as a system of false imperatives?
I think that’s the problem here: most people would define morality, in your terms, as a system of MORAL imperatives, not just of imperatives in general. Coel seems to be looking for you to demonstrate why your starting point of “the thing you most value” is, in fact, specifically moral and leads to moral imperatives as opposed to, say, practical ones. So answering that we only want true imperatives doesn’t solve the issue, since he wants true MORAL imperatives, not just true imperatives.
This example, I think, demonstrates the problem with your formulation: let’s say that, for me, my highest value is, in fact, to be moral. I want to be moral above all else, whatever it means to be moral. Substituting that into your definition, that would mean that my highest value is to be moral, and what being moral means is to act in order to achieve the thing I value most … which is to be moral. And so we get a vicious circle: I start from wanting to be moral, and when I go to figure out what being moral means, I discover that it means, for me at least, to want to be moral above all else and to act to achieve that … but that was what I wanted to find out by looking at your definition. Oops.
Now, you can try to appeal to non-moral values to break the circle, by arguing that my highest value, the thing I think will bring me the most satisfaction, isn’t really being moral, but is instead something else: material goods, living a good life in a proper society, whatever. Except that I can insist that that isn’t the case, and insist that even if being moral led me to a painful death or even eternal torment, I’d STILL think my life more satisfying for having been moral. At that point, the argument fails unless you can attack my highest value itself.
Which you’d do in one of three ways. You could try arguing that the highest value of “Be moral above all else” isn’t appropriate or a good value. The problem is that it seems that a moral agent really SHOULD value being moral over anything else; how can it be a BAD highest value for an agent that’s actually trying to be moral to value being moral above all else? So that move is rather suspicious. You could also argue that it’s a failure of reason or information to actually have that as a highest value, but that runs into the exact same problem: how can it be irrational or ill-informed to hold that the highest value we can have is to value being moral? What should a moral person value MORE than being moral? Finally, you could argue that the circularity itself is the problem … but the counter to that is that the circularity only arises because of your definition of morality. Other definitions don’t have that problem, so it seems to be an issue with your definition and not with having that as the highest value.
So, to return to this, the question is whether we can have imperatives that are not moral. The above example suggests that we can, and that being moral means privileging the moral imperatives over, say, the practical or professional ones. And we can clearly have practical imperatives that don’t have any real connection to any moral imperatives. For example, if it is snowing and I am alone at home, I can decide whether to go out and shovel now or this afternoon, and the only thing to consider seems to be my own practical concerns … what will make me the most personally “happy”, without any real moral considerations at all.
This, I think, gets us right back to the main problem here, which is the old is/ought problem: when we want to know what it means to be moral, we don’t want to know what we DO value the most, but instead what we OUGHT to value the most if we are going to prioritize being moral over anything else. And it’s at least debatable that that can be done scientifically. You can pick a moral value or a definition of “moral” and from THERE argue that science can shake it all out, but that won’t let you escape from having to justify that definition of “moral” … and what Coel is asking for is precisely what justifies the definition of “moral” that you are using.