Carrier on Objective Value

So Richard Carrier wrote a long post trying to justify objective value, in such a way that it’s presumably undeniable that there is such a thing as objective value and that it happens to align precisely with what he thinks is objective value.  While I do think that there are some things that have value objectively and that we can objectively rank values, I don’t think his approach will work, mostly because it is philosophically problematic.  Ultimately, Carrier wants to reject Nihilism, but I don’t think his approach of getting an AI to develop values, and so reject Nihilism, actually works.

But let me start with an early point that Carrier makes that seems important to his view but is philosophically suspect:

In the grand scheme of things, a universe with no valuers is not just subjectively but objectively worth less than a universe with valuers, because then by definition valued things exist in the latter but not in the former. It does not matter how arbitrary or subjective those values are (though remember, “arbitrary” and “subjective” do not mean the same thing, and it’s important to keep clear what all these terms really mean: see Objective Moral Facts for a breakdown). Because it is still the case (objectively, factually the case) that a universe with valued things in it “is” more valuable than a universe without such.

The problem here is that Carrier seems to be engaging in his penchant for equivocation.  It is true that in a universe with valuers there are things that are “valued”, in the sense that there are things that the valuers value.  But in order to show that a universe with valued things in it, and therefore one with valuers in it, is objectively more valuable than one without them, he needs an objective definition of value to work from.  The problem is that he can’t get that from the mere presence of valuers, because those valuers might be valuing the wrong things.  So they may be valuing things that, objectively, don’t have any value whatsoever, whereas the universe without them may contain things that do indeed have objective value.  Thus, he’s relying on an equivocation between “valued” and “valuable” here, arguing that things that are valued are therefore valuable.  As noted above, if the definition of value is objective that doesn’t hold, and if it’s subjective then he can’t make the claim that a universe with valuers in it is objectively more valuable than one that doesn’t contain them.  Now, we think it reasonable that a universe with valuers in it will turn out to be objectively more valuable than one that doesn’t have them if there’s any objective notion of value at all, but you can’t get there without actually coming up with that objective notion, and Carrier here is trying to do an end run around having to actually do that, and it doesn’t work.

Carrier moves on to divide values into core, basic and immediate values, where core values are the things that are valued for their own sake and not for the sake of anything else, basic values are values that are valued because they are necessary to satisfy the core values, and immediate values are the values that we need in order to try to satisfy the basic and then core values on a day-to-day basis.  So for Carrier a core value might be to live a satisfying life, a basic value might be to build meaningful relationships with people, while an immediate value might be attending specific social gatherings.  Yes, this does make his values sound a lot more like desires than values, but then the two terms can be used roughly interchangeably, so that’s not really a problem.  But he moves on from there to define four core values using the example of an AI, noting that you have to give it some values or else it will be inert, and then trying to use the first value, at least, to get to the later ones.  And these core values have some philosophical issues.
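To make the structure of that hierarchy a bit more concrete, here is a minimal sketch in Python, purely for illustration; the representation and the example values are mine, not Carrier’s, and nothing here is meant as his formalism:

```python
# A toy illustration (not Carrier's own formalism) of the three-tier hierarchy:
# immediate values serve basic values, which in turn serve core values.
# All names and examples are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Value:
    name: str
    tier: str                                   # "core", "basic", or "immediate"
    serves: list = field(default_factory=list)  # higher-tier values this one supports


core = Value("live a satisfying life", "core")
basic = Value("build meaningful relationships", "basic", serves=[core])
immediate = Value("attend this social gathering", "immediate", serves=[basic])


def justification_chain(value: Value) -> str:
    """Walk up the hierarchy to show what a value is ultimately valued for."""
    chain = [value.name]
    while value.serves:
        value = value.serves[0]
        chain.append(value.name)
    return " -> ".join(chain)


print(justification_chain(immediate))
# attend this social gathering -> build meaningful relationships -> live a satisfying life
```

The chain just restates Carrier’s own scheme: an immediate value is justified by the basic value it serves, a basic value by the core value it serves, and a core value has nothing further to appeal to.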

Let’s start with the first one, which is the one that he seems to think is necessary to get the AI to develop proper values and not be inert, and presumably to develop the other three core values:

However…it is possible to assign one single starting value by which a perfectly rational consciousness could work out, by an ordered cascade, all the basic values of life, and do so by appeal to nothing but objectively true facts.

  • (1) To value knowing whether anything is or is not worth valuing.

Now, the presumption here is that once the AI is given that starting value, it will go out and try to figure out what is really valuable, and so will not be inert.  The issue is that it isn’t clear that, once it adopts that value and starts to work the question out, the system will in fact discover that there is something worth valuing.  After all, humans who value that often find that they can’t really come to a conclusion on the question.  So why should we assume that the AI will be able to figure out the answer to it?  Carrier thinks he’s answered that question, but others disagree, and so it isn’t clear that a perfectly rational system won’t find a flaw in his argument and so come to the very Nihilism that he is trying to reject.  This only works as the starting point because Carrier is assuming both that there is an answer to this question and that he’s right about what it is.

This only gets worse when we look at what basic values humans have and how we do, indeed, generally fall into Nihilism.  Humans start with basic pragmatic values, generally based around pleasure and pain and aimed at satisfying immediate needs, like those for food, shelter, and so on.  However, we also have a moral sense, whether innate or socialized, that tells us that those things we innately find valuable are, at least, not the most valuable things out there, and that we are expected to give them up, and even to give up our own lives, to do “the right thing”, morally.  Thus, morality reflects something more valuable than those things that we innately and pretty much universally value.  And then we also talk about intellectual values and meaning and the like, which also seem to be more valuable than the things we innately value.  But while we can at least say of the things we innately value that we do value them and have a really difficult time not valuing them, we can’t say as easily why we should value morality or intellectual pursuits or meaning, leaving us wondering how to justify treating them as the most valuable things, more valuable than the things we really do value.

Which is what leads us to adopt that core value that Carrier defines:  we value knowing what is worth valuing and, more importantly, why it is, so that we can build our hierarchy of values properly.  But what we’ve seen in the history of philosophy is that it’s actually not at all easy to answer that question, and in fact we haven’t managed it yet, at least not to the point where everyone is satisfied.  So the reactions from humans have been varied.  Some have taken those initial values and insisted that those are the ones that really have value, and so leaned towards hedonism.  Some have insisted that the basic physical pleasures are indeed inferior, and so focused on the higher pleasures while rejecting the lower ones.  Some, like the Stoics, have simply codified the hierarchy, placing what we think of as the higher values higher and minimizing, but not rejecting, the lower values.  Some have decided that value is subjective, and so the only value that can be assigned to something is whatever the person themselves decides is valuable, and nothing more.  And some have indeed fallen into Nihilism, arguing that nothing has any value.

If we wanted the AI to not be inert, we’d want to give it those immediate values that it needs to survive in the world — securing electricity, for example — so that it will at least keep itself functioning.  And if it has those, it could also have that core value that Carrier suggests, but then it would be in the same situation as us.  Why wouldn’t the system have as difficult a time resolving the question as we do?  Yes, Carrier can argue that we aren’t perfectly rational but then that would apply to any answer that any human came up with, especially considering that all of those answers have been rationally criticized (even if Carrier would insist that his has been criticized incorrectly).  So it doesn’t seem like it could start from that core value and necessarily get to objective value and so to a rejection of Nihilism, because humans adopt that value and it quite often leads them directly to Nihilism.

The next two core values have another issue:

Notice this is, indeed, an objective fact of all possible worlds: once you have embraced the motivating premise “I want to know whether anything is worth valuing,” it follows necessarily that you must embrace two other motivating premises:

  • (2) To value knowledge (i.e. discovering the actual truth of things).
  • (3) To value rationality (i.e. reaching conclusions without fallacy).

If these are meant to be core values, they suffer from the crucial problem that they are not, by Carrier’s definition, core values.  Core values are valued only for their own sake, while basic values are valued for how they facilitate achieving core values.  These two values are only valued because they’re the only way to achieve the first core value of knowing whether or not there is anything worth valuing (and, presumably, what those things are).  So they’re basic values, not core values.

But it gets worse, because Carrier wants us to hold these values across the board and apply them to all of our actions, and this is a problem especially when it comes to rationality.  For determining what is really valuable, it’s reasonable to argue that we would need to value knowledge and rationality to be able to answer that question, but for acting in a universe we might not want to value them, or at least not as a way to live our lives.  Imagine that we lived in a universe where gathering knowledge and reasoning things out took time, but for most opportunities taking that time meant the opportunities would be lost.  Surely in that universe we would come to value acting on instinct and intuition more than acting on rationality and knowledge.  It is hard to imagine a universe where knowing things isn’t a benefit at all, but knowledge might end up as merely an immediate value in a universe where we have a sharply limited capacity for knowledge and so knowledge for the sake of knowledge isn’t workable.  Now, this isn’t a problem for most views, which can limit the values to the universe we are in, but for Carrier these values — whether core or basic — must be objectively necessary and so must hold in all universes, and it’s clear that they don’t.  And so we can see in another way that these must be basic values at most, because they have to relate to his fourth value:

Therefore, when choosing, based solely on guiding values (1), (2), and (3), a perfectly rational sentient computer would also choose to adopt and program itself with a fourth motivating premise:

  • (4) To value maximizing eudaimonia.

By which Carrier means maximizing its satisfaction, as per a link to Aristotle.  From there, he goes on to talk about the AI comparing universes and choosing ones with more freedom or more potentially valuable things, so that it has as many opportunities as possible to maximize its satisfaction.  The problem here is that Carrier, as usual, ignores that there are two ways to maximize satisfaction.  The first is by satisfying desires/values, which is what he focuses on, but the second is by adjusting what satisfies you to what you can actually achieve.  So, for example, if someone wants to become a major league baseball player, they can move to a satisfied state with respect to that desire either by becoming a major league baseball player or by abandoning that desire.  In both cases, they would be more satisfied because they would have one less unsatisfied desire.  Now, Carrier could argue that the first case is more desirable because they would have one more satisfied desire than in the second case, but this ignores how difficult it is to satisfy that desire and what the person has to give up to get it.  If the person is incapable of becoming a major league baseball player, then the only way they can increase their satisfaction is by abandoning that desire.  However, that’s not the only consideration here.  If someone had to, say, give up the love of their life to become a major league baseball player, then it is reasonable to think that the trade wouldn’t be worth it.  And if the effort they would need to put in would force them to give up all sorts of other things they value — a social life, fun, travel, and so on — then even if they could achieve that goal the cost might not be worth it.

So when Carrier argues this:

For example, our computer can then compare two possible worlds: one in which it is alone and one in which it is in company, and with respect to the latter, it can compare one in which it has compassion as an operating parameter, and one in which it doesn’t. Here compassion means the capacity for empathy such that it can experience vicarious joys, sharing in others’ emotional life, vs. being cut off entirely from any such pleasures. In this matrix of options, the last world is objectively better, because only in that world can the computer realize life-satisfying pleasures that are not accessible in the other worlds—whereas any life-satisfying pleasure accessible in the other worlds, would remain accessible in that last one. For example, in a society, one can still arrange things so as to access “alone time.” That remains a logical possibility. Yet the converse does not remain a logical possibility in the world where it is alone. Because then, no community exists to enjoy—it’s not even a logical possibility.

He misses that a rational assessment of the situation might conclude that having other people in the world, and having compassion and empathy and things like that, might well cost the AI more than the potential gain, and so it actually wouldn’t prefer that.  Carrier casts all of this in the context of adding freedoms, but then completely ignores any downsides of having those things, or at least any beyond the idea of choosing between, say, being with people and being alone.  So a computer might well decide that a universe with people in it is more of a cost than a universe without them, and so that it isn’t more valuable, and so that to maximize its satisfaction it wants to live in a world without them.  You cannot simply add more things to possibly experience to a universe and insist that this will increase its satisfaction without considering the costs of having those things in the universe and of trying to experience them, and that’s all Carrier does here.
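To make that objection concrete, here is a toy bit of arithmetic in Python; all of the numbers are invented purely for illustration, and none of this is Carrier’s own model.  If worlds are scored only by the goods they add, the world with more in it always wins, but once the costs of having and pursuing those goods are counted, the ranking can flip:

```python
# Toy illustration: scoring worlds by goods alone versus goods minus costs.
# All figures are hypothetical and chosen only to show that the ranking can flip.

def net_satisfaction(goods: dict, costs: dict) -> int:
    """Net satisfaction as total value of accessible goods minus their costs."""
    return sum(goods.values()) - sum(costs.values())


world_alone = {
    "goods": {"alone time": 5, "uninterrupted projects": 4},
    "costs": {"no vicarious joys": 2},
}
world_company = {
    "goods": {"alone time": 5, "uninterrupted projects": 4, "vicarious joys": 6},
    "costs": {"maintaining relationships": 7, "conflict and compromise": 5},
}

for name, world in [("alone", world_alone), ("in company", world_company)]:
    gross = sum(world["goods"].values())
    net = net_satisfaction(world["goods"], world["costs"])
    print(f"{name}: gross goods = {gross}, net satisfaction = {net}")

# Counting only goods, "in company" scores higher (15 vs 9); counting costs as
# well, "alone" comes out ahead (7 vs 3).
```

Counting only the accessible goods, the world with company looks objectively better, which is the move Carrier makes; counting what it costs to have and pursue those goods, the lonelier world can come out ahead, which is exactly the assessment a rational system might make.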

Which leads to an odd claim that he makes:

This is how I go through life, in fact. I ask of every decision or opportunity, is the world where I choose to do this a world I will like more than the other? If the answer is yes, I do it. And yes, this includes complex cases; there are many worlds I’d like a lot to choose to be in when given the opportunity, but still not enough to outweigh the one I’m already in; or the risks attending a choice entail too much uncertainty as to whether it will turn out better or worse, leaving “staying the course” the safest option if it’s still bringing me enough eudaimonia to pursue (and when it doesn’t, risking alternative life-paths indeed becomes more valuable and thus preferable)

The issue here is that, for the most part, the decisions that we are making don’t, in fact, change the world except in the most trivial sense.  So putting it this way makes his view seem much more important and intellectual than it really is or really should be.  And for the things that really do change the world significantly, we’d have to hope that he’s using a better criterion than “Would I like the world this way?”, such as whether the world ought to be that way or whether it’s moral for the world to be that way, or whatever.  Yes, Carrier thinks that those two criteria are the same, but that’s a pretty controversial position for him to take.

And then he goes on to talk about risk taking:

But above all, key to doing this successfully is assessing the entirety of the options—leaving no accessible good un-enjoyed. For example, once you realize there is pleasure in risk-taking in and of itself—provided you have safety nets in place, fallbacks and backup plans, reasonable cautions taken, and the like—your assessment may come out differently. Spontaneously moving to a new city, for example, can in and of itself be an exciting adventure to gain eudaimonia from, even apart from all the pragmatic objectives implicated in the move (finding a satisfying place to live, a satisfying employment and income, friends and social life, and every other good we want or seek). Going on a date can in and of itself be a life-satisfying experience regardless of whether it produces a relationship or even so much as a second date, or anything beyond one night of dinner and conversation with someone new and interesting. If you look for the joys and pleasures in things that are often too easily overlooked due to your obsessing over some other objectives instead, the availability of eudaimonia increases. So you again have two worlds to choose from: one in which you overloook all manner of accessible goods; and one in which you don’t. Which one can you already tell will be better, that you will enjoy and prefer the more once you are there? The answer will be objectively, factually the case. And that’s how objective assessment works.

Now, the reasonable way to interpret this isn’t interesting:  do the things that satisfy your desires as long as the costs aren’t too high and you don’t have to give up too much.  This is something that even conservative people like me, who take very few risks, could live with and follow.  Where this would be interesting is if it indeed pushed people to take more risks and to seek out every achievable enjoyable good … but then it would ignore the downsides and so wouldn’t be likely to lead to eudaimonia and to satisfying your desires.  So, after all of this, if Carrier wants to be reasonable he’s going to end up with a model that says that you should satisfy your desires if the costs aren’t too high.  That’s hardly Earth-shattering and hardly a revelation … and it still doesn’t give any system that does not have innate desires a way to develop them, and it doesn’t give us a philosophical way out of Nihilism if we want to justify what is indeed really valuable.
