Someone else might have already thought of this, although if so, I’ve never seen it.
I was reading an essay on morality by Adam Lee that relies heavily on the Prisoner’s Dilemma, and in the section on Objectivism the fact that Objectivism would fail to settle the Prisoner’s Dilemma counted as a major strike against it. My usual reply is that Objectivism adopts the idea of Enlightened Egoism precisely to avoid this problem, as the most rational decision is not the worst-case scenario where both prisoners betray each other, but the one where they both co-operate. In thinking about that reply, something clicked for me.
Let me quickly outline the scenario. You have two prisoners, A and B, being interrogated by the police for a crime. Each can rat out the other or remain silent. If A and B both stay silent, they each serve 1 year. If A stays silent and B confesses, A serves 3 years and B goes free. If A confesses and B stays silent, A goes free and B serves 3 years. If both confess, they each serve 2 years. The argument is that, given their positions considered independently, each is best served by choosing to confess. If A confesses and B stays silent, A goes free, and if A confesses and B confesses, they each serve 2 years. If A stays silent and B stays silent, each serves 1 year, which is better than the case where both confess, but if A stays silent and B confesses, A serves 3 years, which is the worst option. Prudence, then, would lead both of them to confess, even though a better option was available to them if only they had taken it.
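The payoff structure above can be sketched in a few lines of code. This is just an illustration using the sentence lengths given in the text; the check at the end shows why confessing looks "dominant" when each prisoner reasons independently:

```python
# Payoff matrix for the Prisoner's Dilemma as described above.
# Each entry maps (A's choice, B's choice) -> (years A serves, years B serves).
SILENT, CONFESS = "silent", "confess"

payoffs = {
    (SILENT, SILENT): (1, 1),    # both stay silent: 1 year each
    (SILENT, CONFESS): (3, 0),   # A silent, B confesses: A serves 3, B goes free
    (CONFESS, SILENT): (0, 3),   # A confesses, B silent: A goes free, B serves 3
    (CONFESS, CONFESS): (2, 2),  # both confess: 2 years each
}

# From A's perspective, considered independently, confessing dominates:
# whatever B does, A serves less time by confessing than by staying silent.
for b_choice in (SILENT, CONFESS):
    years_if_silent = payoffs[(SILENT, b_choice)][0]
    years_if_confess = payoffs[(CONFESS, b_choice)][0]
    assert years_if_confess < years_if_silent
```

This is exactly the "prudent" reasoning the thought experiment attributes to each prisoner in isolation.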
Again, in pondering this I realized something. The Dilemma presents an account of what the rational thing to do, all things considered, would be, then notes that the resulting outcome is not the best one possible, and claims that no rational argument without some sort of iteration or externalities could break the deadlock. But there’s one thing missing that each side should know: at least in the artificial context of the game, each person has to assume that the other person is equally informed and equally rational, and so will follow precisely the same reasoning. This implies something remarkable: there is no case where two equally informed, equally rational people in the same situation will come up with different answers to the same question. So it will never be the case that A chooses something different than B does. Both will always either choose to confess or choose to stay silent; it will never happen that one confesses while the other remains silent or vice versa. Thus, we can eliminate those mixed options from consideration, as they will never actually occur in any actual scenario.
So, now we note that the purportedly most rational solution in fact relies on the other person choosing something different than you chose. The only reason for anyone to confess is the hope that the other person will stay silent, so that the one who confessed goes free. But that will never happen: if you choose to confess, so will they, by the same reasoning, and vice versa.
What this means, importantly, is that when considering which outcome is best we can eliminate any cases where the choices differ and consider only the cases where the choices are the same. That leaves the choice between both staying silent and both confessing. In the former case, both serve 1 year; in the latter, both serve 2 years. Given this, staying silent is clearly the better choice and is, indeed, the one that Game Theory says is the best possible outcome given all the facts. Therefore, the Prisoner’s Dilemma is not one at all: properly reasoning, fully informed and fully rational people should follow the above reasoning and conclude that they should stay silent, and both parties will, invariably, stay silent.
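The elimination argument above can be sketched the same way. Assuming the payoff numbers from the scenario, restricting to the outcomes where both prisoners choose alike and then picking the one with the shorter sentence yields the "both stay silent" result:

```python
# Symmetric-outcomes argument: if both prisoners are equally informed and
# equally rational, they must make the same choice, so only the two
# symmetric outcomes can actually occur. (Years are from the scenario.)
payoffs = {
    ("silent", "silent"): (1, 1),
    ("silent", "confess"): (3, 0),
    ("confess", "silent"): (0, 3),
    ("confess", "confess"): (2, 2),
}

# Keep only the outcomes where both choices match.
symmetric = {a: years for (a, b), years in payoffs.items() if a == b}

# Among those, pick the choice that minimizes each prisoner's sentence.
best_choice = min(symmetric, key=lambda choice: symmetric[choice][0])
print(best_choice)  # -> silent (1 year each beats 2 years each)
```

The dominance reasoning and this reasoning disagree only because dominance still counts the mixed outcomes that, on this argument, can never occur.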
(The fact that it even took me this long to figure this out suggests that we aren’t actually fully rational people [grin]).
November 30, 2019 at 12:53 pm
There is a flaw here: You don’t know the other person’s value system.
I’ll explain. Prisoner B simply wants to limit his time in prison. Not so for Prisoner A, though. He says, “I absolutely will not survive in prison. Even one year will kill me. Therefore it doesn’t matter how long the sentence is; it is in my best interest to go free, whatever it takes.” Think of the crossdresser in “And Justice for All” as an example of this sort of guy: three years or one year doesn’t matter, he’ll never make it in prison. He’ll kill himself immediately, if it comes to that.
Prisoner B? He simply wants the lowest sentence. He’ll take that, if escaping is impossible. So he stays silent. But prisoner A squeals, because it’s his only chance of getting out. And prisoner B is screwed.
There are too many variables to solve it.
November 30, 2019 at 1:54 pm
Well, the original thought experiment was meant to show that the purportedly “rational” solution would always lead to both betraying, even though co-operating and staying silent was actually the better option. That has to assume that both of them are equally rational and in the same circumstances; otherwise the Dilemma couldn’t argue that the outcome was going to be that both betray. This reasoning shows that if that’s the case, all appropriately rational and informed people will quickly note that both of them will make the same decision, so the only live options are the ones where both make the same decision, and from there the obvious best choice is for both to stay silent.
Your objection here is similar to what I’ve tended to use against people applying this reasoning to, say, Tragedy of the Commons scenarios (for example, most of the criticisms of Objectivism on that score made by Adam Lee and his commenters): everyone should expect that the others will make the same decision they do, which leads to the terrible outcome, unless someone has a special or different circumstance where they don’t care about the downsides. For that person, reason would dictate that they betray, and so other methods would be required to get them to co-operate. In your case, it’s hard to think of a way to do that that doesn’t involve punishing them for betrayal. But for things like fishing quotas, the critics always insisted that regulation with punishments was the only way, while I replied (consistent with Rand’s view, since I was defending it against invalid criticisms at the time) that you could also put in a system of monitoring and PAY the people who don’t care a bit more to follow it.

I disagree with Objectivism for a number of reasons, mostly that being so self-interested makes it seem like it isn’t a morality at all. But what it does seem to get right (and Hobbes gets right better) is the idea that you can indeed get people to not be stupidly selfish, to accept all sorts of constraints, and even to behave co-operatively, if those selfish people realize they’re better off co-operating than competing.
But to summarize my reply here: you’re right, but if the original thought experiment allowed for that case it would have already been defeated, as the rational response would be “betray or stay silent based on your best assessment of what the other person is likely to do given their values and reasoning ability”. Since it ignores that possibility in order to make its case against reason, I can take the assumption it relies on, that both parties are equally informed and equally rational, and turn it against it.
November 30, 2019 at 4:26 pm
Yes, my real objection is “Like most philosophy experiments it falls apart when actually applied to the real world.”
December 1, 2019 at 6:07 am
True, but also like most philosophy experiments it works better for exploring concepts and ideas than for being directly applied to real-world situations.
November 30, 2019 at 4:28 pm
As you point out, saying that both sides are being “perfectly rational” doesn’t really matter if their premises are different; in my scenario person A IS being just as rational as person B.
December 1, 2019 at 3:04 am
Yeah, that’s absolutely true. But the original thought experiment has to assume that, or else it can’t say that both sides will betray and so the expected result will be worse than what they could get by staying silent; it argued that reason, in those cases, couldn’t get you to the “both stay silent” result. This solution shows that it can.