Extra Credits on the Prisoner’s Dilemma

Since I’ve been a bit busy lately, and since I’d decided to check out some of the Extra Credits videos that I hadn’t been watching, I decided to comment on their video on the Prisoner’s Dilemma. Now, I’ve talked about the Prisoner’s Dilemma in video games before wrt “Virtue’s Last Reward”, a discussion that reveals a common error made in considering the Prisoner’s Dilemma: the idea that, somehow, the only rational choice is to betray the other person, and that this is what reason demands. That framing always at least indirectly implies that we need something other than reason to settle these sorts of questions, which is a position that I find … dubious, to say the least.

In the video, though, they take an interesting, if seemingly somewhat confusing, tack in trying to explain it. They explain defecting as the only choice that you won’t regret making no matter what the other person chooses. But since, if the other person defects and you also defect, you end up with a worse outcome than if you had both co-operated, that framing doesn’t seem to make sense. I think what they mean is this: no matter which option the other person chooses, the choice to defect leaves you better off than the choice to co-operate. If they defect, then defecting gets you the medium-length sentence while co-operating gets you the longest possible sentence. And if they co-operate, then co-operating gets you a short sentence while defecting lets you go completely free. So, looking at it strictly from the perspective of “What happens to me if they choose X?”, defecting always works out better for you.
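To make that concrete, here’s a minimal sketch in Python. The specific sentence lengths are illustrative assumptions on my part, not numbers from the video; the point is just that defecting is the better response to either choice the other prisoner might make, even though mutual defection is worse for both than mutual co-operation:

    # Prisoner's Dilemma payoffs as years in prison (lower is better).
    # These particular sentence lengths are illustrative assumptions.
    # SENTENCES[(my_choice, their_choice)] = years I serve.
    SENTENCES = {
        ("cooperate", "cooperate"): 1,   # both stay quiet: short sentence each
        ("cooperate", "defect"): 10,     # I stay quiet, they betray me: longest sentence
        ("defect", "cooperate"): 0,      # I betray them, they stay quiet: I go free
        ("defect", "defect"): 5,         # we betray each other: medium sentence each
    }

    # Whichever choice the other prisoner makes, defecting leaves me better off.
    for their_choice in ("cooperate", "defect"):
        if_i_cooperate = SENTENCES[("cooperate", their_choice)]
        if_i_defect = SENTENCES[("defect", their_choice)]
        assert if_i_defect < if_i_cooperate
        print(f"They {their_choice}: {if_i_defect} years if I defect, "
              f"{if_i_cooperate} if I co-operate")

    # Yet if we both reason that way, we both defect, which is worse for each
    # of us than if we had both co-operated.
    print("Both defect:", SENTENCES[("defect", "defect")], "years each")
    print("Both co-operate:", SENTENCES[("cooperate", "cooperate")], "years each")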

The reason why this isn’t necessarily the only rational decision to make is that it isn’t rational to ignore readily available facts in making your decision, and here the relevant fact is that the other person is, presumably, as rational as you are, possesses all of the same facts, and so is going through the same thought process as you are. As soon as this is understood, it becomes obvious that the other person is going to make the choice to defect as well. Therefore, the best possible case for you — going free — isn’t going to be attainable. So what you’d want is for both of you to choose co-operate instead of both choosing defect, and the reasoning above should, in general, achieve that, as that’s the best remaining option for both of you. The only thing that could trump it is the fear that the other person is either not going to go through that reasoning, or might try to take advantage of you having reasoned it out and defect anyway. But since, again, the same reasoning applies to both of you, all that will do is lead to the defect/defect outcome.

I made this argument when arguing with people over issues with the Tragedy of the Commons wrt Objectivism. Everyone would, of course, like to cheat, but they know that if everyone cheats they’ll end up with an undesirable outcome. And if they conclude that what’s best for them is to cheat, then they have no reason to think that the same reasoning won’t apply to everyone else, and so everyone else will also cheat, which leads to that undesirable outcome. Thus, pretty much everyone is going to want to sacrifice their ability to cheat to ensure that they don’t end up in that situation. That’s the rational thing to do. So it’s only irrationality that causes people to rush to take as much as possible — i.e. profit-take — from the Commons rather than looking for ways to enforce non-cheating.
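The same structure can be sketched with made-up numbers (these payoffs are my own illustrative assumptions, not anything from those discussions): cheating pays a little more for any one person whatever the others do, yet if everyone follows that reasoning, everyone ends up worse off than if everyone had shown restraint.

    # Toy Tragedy of the Commons with made-up payoffs (illustrative assumptions only).
    # Each of N herders either "restrains" or "cheats" (grazes extra animals).
    # The commons degrades with the total number of cheaters, hurting everyone.
    N = 10

    def payoff(my_choice: str, num_other_cheaters: int) -> float:
        """My payoff given my choice and how many of the other N-1 herders cheat."""
        total_cheaters = num_other_cheaters + (1 if my_choice == "cheat" else 0)
        commons_health = 100 - 10 * total_cheaters
        private_bonus = 5 if my_choice == "cheat" else 0
        return commons_health / N + private_bonus

    # Whatever the others do, cheating pays a bit more for me individually...
    for others in (0, N - 1):
        print(f"{others} other cheaters: cheat -> {payoff('cheat', others)}, "
              f"restrain -> {payoff('restrain', others)}")

    # ...but if everyone reasons that way, everyone is worse off than if
    # everyone had restrained.
    print("Everyone cheats:", payoff("cheat", N - 1))
    print("Everyone restrains:", payoff("restrain", 0))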

Of course, the one issue with this is when you run into someone who doesn’t care about that negative outcome. Let’s imagine a group of people looking to go out to dinner, but who all have different preferences on where they want to go. In general, a compromise is reached because everyone understands that if they can’t find a place that’s at least moderately acceptable to everyone, they aren’t going to go out to dinner at all, and so they don’t want to be too stubborn about their top choice, because that will scuttle the entire event. But imagine that there’s someone in the group who doesn’t care that much about going out to dinner. If they can’t go to their preferred place, they’d rather not go. This gives them incredible power in the discussion, because due to their circumstances they don’t care about the negative consequences. Unless the rest decide to go without them, either the group will go to that person’s desired restaurant or they won’t go at all. So alternative forms of persuasion are needed. In my experience, social pressure/guilt and the will of the majority are commonly mustered — pretty much reflexively in any situation that even looks like it might turn into that sort of case — to push that person to compromise. In the discussions of Objectivism, I argued that you could also add incentives. In the dinner example, you could use the promise of a free dessert, or appeal to the fact that a compromise restaurant has superior desserts, to get them to go along with the compromise. But the big issue you run into with the rational reasoning outlined above is someone for whom the rational choice really is to risk or accept the negatives.

Which leads into the examples they used from games: someone in an online team game who racks up kills at the expense of the others, or a DPS character who demands heals when the tank needs them far more. The problem with these examples is that there is an additional factor here that isn’t present in the Prisoner’s Dilemma, one that makes those actions more clearly irrational: a shared goal. In both examples, the base presumption is that those players won’t win unless the team wins, and so all of their actions should be directed towards that. At that base level, all that should matter to the players is that all the opponents get killed or that the boss ends up defeated. If sacrificing kills will better achieve that shared goal, then the rational move is to sacrifice those kills, and racking up kills at the expense of that goal would clearly be irrational behaviour.

Unless, of course, there’s an external reward for racking up kills.

And in a lot of games, there is. There are rewards and trophies for kills. Many games rank players in the match itself or award points on the basis of kills. Since these can impact rankings and the like, there’s often an incentive for players to act selfishly instead of co-operating with their teammates. And, in fact, in many cases the individual rewards can be so great that they trump the team winning: a player does better racking up kills even if their team loses. In those cases, it’s clear that acting selfishly is the rational move.
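As a rough sketch with made-up point values (these numbers are hypothetical, not any particular game’s scoring system), here is the kind of arithmetic that makes selfish play rational once kill rewards outweigh the reward for winning:

    # Hypothetical scoring values -- illustrative assumptions, not any real game's.
    POINTS_PER_KILL = 100
    POINTS_FOR_TEAM_WIN = 300

    def player_score(kills: int, team_won: bool) -> int:
        """Points a player takes away from one match."""
        return kills * POINTS_PER_KILL + (POINTS_FOR_TEAM_WIN if team_won else 0)

    # A selfish player who farms kills at the team's expense on a losing team...
    selfish = player_score(kills=8, team_won=False)      # 800 points
    # ...versus a team player whose restraint helps secure the win.
    team_player = player_score(kills=3, team_won=True)   # 600 points

    print("Selfish player on the losing team:", selfish)
    print("Co-operative player on the winning team:", team_player)
    # Once kill rewards dominate the win bonus like this, chasing kills is the
    # individually rational move, even though it hurts the shared goal.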

We can see this in co-operative and semi-co-operative board games. Arkham Horror is a fully co-operative game. No points are given out to the best investigator; there isn’t even an MVP award. The points are awarded to the team itself, and no points are awarded if the team doesn’t win the game. So there’s no reason for one player to rack up monster kills or gate closings, or to hog the best items, for their own sake. If they do, it would only be for one of two reasons (other than irrational competitiveness). The first is that they believe their character having those things will be best for the team, either by ensuring that their powerful character survives longer in the Final Battle or by better equipping them for their function (monster hunting, gate running, and so on). The other is that they’re worried about being eliminated in the Final Battle and left sitting around bored, although the Final Battle is quick enough that this isn’t a reasonable worry. Any other reason is irrational, and will only hurt the team and thus make them less likely to achieve their shared goal.

Battlestar Galactica is a hidden traitor game. There are two teams, one human and one Cylon. There’s roughly a 60% chance of staying human throughout the game and a 40% chance of being a Cylon. Whether a player is human or Cylon is hidden until the player decides to reveal (or it is revealed through various mechanisms), and a player can start the game as a Cylon or might “pick up Cylonness” at roughly the midpoint of the game. I was involved in a long debate on boardgamegeek with someone who said that a player who had a human card at the start should play selfishly, hoarding resources and titles and positioning themselves as strongly as possible. This was objected to on the grounds that it looked suspicious: while he wanted to do that to put himself in a strong position once he knew he was going to stay human, it was pointed out that those were precisely the same moves a Cylon would make. Which was a fair point, given that one of his motivations for the strategy was, in fact, to be ready in case he turned Cylon at the midpoint. But since the Cylons are hidden, all he was going to do was engender mistrust in all the other players. And that would at least make things less efficient — as they couldn’t trust that player to do anything until they were sure the player was human — and so hurt the overall team game. So to the extent that it hurt the team, it was seen as a poor strategy. And the alternative of playing selfishly before the midpoint and excessively generously afterwards would be an easy tell for Cylonness, and so wasn’t good for them anyway.

So they have a point when they say that, in general, the way to trump the Prisoner’s Dilemma is to provide some sort of punishment for defecting. However, if everyone were rational, then simply removing external incentives for selfish behaviour would also work. We can indeed all see that co-operating will lead to a better outcome for us … when it actually does. The problem is that in too many cases it really is possible for us to cheat and win.

5 Responses to “Extra Credits on the Prisoner’s Dilemma”

  1. natewinchester Says:

    That’s not even getting into the debate about what is rational, either, since by definition rationality is merely a process; it can’t be a goal or motivation. What no one wants to admit is that everyone is rational – but the goals are all different, and you can’t rationally prove that one goal should be preferred over another.

    Let’s take one example: say the prisoner’s dilemma involves a father and son. Let us also say the father believes he will die tomorrow. In that case, the father will cooperate while the son will defect, because the father’s goal is to get his son free as soon as possible, and that is the rational step for him to take towards it.

    Hence why I try to explain to people that most political disagreements are price arguments: What people are willing to pay for things they want.

    • verbosestoic Says:

      I won’t go quite as far as you, but I do argue that you can’t evaluate the rationality of an action without referencing the goals and beliefs of the person making the decision. We can evaluate what would be the optimally rational thing to do from the third-person omniscient perspective, but we can’t call a decision irrational if the person making it didn’t know or care about those considerations and had no reasonable way of knowing or caring about them in the timeframe where they had to make the decision. Your examples fall into that, it seems to me.
