Probabilistic Reasoning?

So, Richard Carrier has put up a post giving advice on reasoning in general, focusing on his preferred probabilistic reasoning. I have long been suspicious of both the descriptive and the normative claims made for probabilistic reasoning, which means that I not only don’t think that our reasoning actually is probabilistic, I also don’t think that our reasoning should be probabilistic. And in going over his advice, I’m only more convinced on both counts.

Let’s start with the initial advice:

1. Of any claim or belief you think is true or that you feel committed to or need or want to be true, ask yourself: “How do I know if I’m wrong?” Being able to answer that question; knowing you need to answer it; and the answer itself—all are invaluable. You cannot know if you are right, if you don’t even know how you’d know you were wrong.

2. The most reliable way to prove a claim or belief true is by trying to prove it false. Failing to prove it false increases the probability of it being true, and does so the more unlikely it is that those attempts to prove it false would fail—unless the belief or claim were true. The scientific method was a crucial discovery in human history because it is almost entirely based on exactly this fact.

3. A good way to simplify the task of trying to falsify a claim is to find the best opponent of that belief or claim and investigate the truth or validity of their case. Because they will likely have done a great deal of the work already, so all you have to do is logic-test and fact-check their work.

So, this all shakes out, really, to the last statement: go out and find a good opponent and deal with them, and this will be a good way to test your own theory. I’ll talk a little bit more about that in a minute, but I really have to address this part first:

For that last point, a “good” opponent is one who is informed, reasonable, and honest. So as soon as you confirm someone hasn’t been honest, or often relies on fallacious reasoning, or often demonstrates a fundamental ignorance in the subjects pertaining, you should stop listening to them. Go find someone better.

I do realize this advice can’t help the genuinely delusional; they will falsely believe any opponent is dishonest, fallacious, or ignorant regardless of all evidence otherwise, so as to avoid ever confronting what they say. Reason and evidence will have no effect on them, so advice to follow reason and evidence won’t either. The advice in this article can only help the sane.

Once you’ve found a good critic, so-defined, it can be most helpful to you to build a personal relationship with them or otherwise cultivate a charitable, sympathetic, patient dialogue with them, if either is available (it often won’t be; we all have limited time), and then make it your point to learn as much as possible, rather than reducing your interaction to combative debate. The best way to do this: instead of “refuting” them, aim to understand why they believe as they do. Then you can test for yourself the merits of their reasons, which then you will more clearly and correctly understand. This produces a good falsification test, rather than combative debate which tends toward rationalization and rhetoric, which is a bad falsification test. And you can’t verify your beliefs with a bad test.

I’ve been reading a lot of Carrier’s posts over the past while, where he addresses critics of his position (this is what he does most often in his posts). I have rarely seen him actually do this. He rarely discusses a post by anyone whom he doesn’t end up insisting is delusional, fallacious or ignorant. Even when he tries to pick out the most rational opponent, he ends up pretty much dismissing them without actually understanding or addressing their arguments. About the only time he was at all charitable recently was in a series that was a debate and not a conversation. So either Carrier fits his own definition of “genuinely delusional”, or on his blog he isn’t demonstrating the best procedure for testing his own views. Either way, we’d have to take his advice with a grain of salt, as at best it seems to be “Do as I say, not as I do”.

But that’s an aside, and one that isn’t as important. What’s more important is the idea that a good way to simplify your testing is to go out and find a good opponent and deal with what they say. The thing is that this only works if the opponent is aware of your specific hypothesis and takes it seriously. In that case, you can look at their objections to your theory as a quick way to find the problems with it that you have to address. But if your hypothesis is novel or one that most experts don’t take seriously, you can’t do that. The best opponents won’t take it seriously enough to take the time to make really good objections to it. I wonder if some of Carrier’s frustration is because the few people who address him directly don’t consider his view important enough to attack with more than an aside, and so they use more standard rebuttals rather than deep-diving into his specific view. (There’s also the known issue where Carrier tends to miss what their argument actually is.)

And there’s an additional risk with this approach. What I learned quite early in doing philosophy is that it’s much easier to make negative arguments than positive ones, meaning that it’s a lot easier to attack someone else’s argument than to support your own. If your main strategy is to go out and find an opponent, then you might fall victim to the temptation of thinking that if you can refute their case and their hypothesis, then that demonstrates that your hypothesis is correct. But you still need to make a case for your own hypothesis, because just because their view is wrong doesn’t mean that yours is right. This even holds when dealing with specific objections, because again just because their objections don’t work doesn’t mean that there aren’t objections that do work. So you can’t let yourself be tempted into thinking that finding the best opponent, dealing with their objections, and having conversations with them establishes that your hypothesis is correct. You still need to build a positive case for your hypothesis.

And it seems to me that probabilistic reasoning also falls prey to this temptation. Because probabilities range from 0 to 1 and the probabilities of all the possible hypotheses must sum to 1, as Carrier himself argues, adding a hypothesis that could be true lowers the probability of at least one of the existing hypotheses. It has to. So someone might be tempted to add hypotheses that might be true in order to weaken the existing hypotheses, and if they can lower the probabilities enough they can drive a hypothesis from a case where we’d say it’s likely true to a case where it’s likely false, even if it is still by far the most likely hypothesis. This is of course not a good way to go about demonstrating that your preferred hypothesis is true. What a person could do is come up with hypotheses that are loosely connected but distinct enough to count as separate. Then they can use their individual probabilities to weaken their opponent, but then add them all up to claim that the most likely answer is one of those in that general category.
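To make that dilution worry concrete, here is a minimal sketch with invented numbers (nothing here is Carrier’s own math): adding merely “possible” alternatives pulls the leading hypothesis below 50% even while it remains the single most likely option, and lumping those alternatives into one loose category then lets the category “win”.

```python
# A minimal sketch with invented numbers illustrating the dilution worry:
# probabilities over an exhaustive set of hypotheses must sum to 1, so every
# alternative added has to take its probability mass from somewhere.

# Start with a leading hypothesis H and a single rival R.
before = {"H": 0.70, "R": 0.30}

# Add three loosely related alternatives A1..A3 that are merely "possible".
# (All numbers are made up for illustration.)
after = {"H": 0.40, "R": 0.12, "A1": 0.18, "A2": 0.16, "A3": 0.14}
assert abs(sum(after.values()) - 1.0) < 1e-9

# H is still the single most likely hypothesis...
print(max(after, key=after.get), after["H"])            # H 0.4 -- now below 0.5

# ...but the loose category {A1, A2, A3}, added up, now beats H.
print(round(after["A1"] + after["A2"] + after["A3"], 2))  # 0.48
```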

Carrier seems to do just that when he promotes his mythicism hypothesis. A number of times, he invents possibilities that have a low probability, but that all count as supporting mythicism. This happens even if they are potentially mutually exclusive, like arguing at times that the original Christians didn’t talk about certain things and then arguing that the original Christians were lying about certain things. It is a common thread in Carrier to come up with alternative explanations for the evidence that support mythicism and then try to use that to cast doubt on historicism in and of itself. But he often does not get past the “possibility” part, for example suggesting that they could have been lying without providing a motive for the lie, which is crucial in a claim that something is a lie. Now, I am not going to provide detailed examples here, because to really do so would require a detailed analysis that I don’t want to go into right now, and so it’s perfectly fine if people have a different view of those arguments. But the main reason to raise it is that Carrier is the biggest example I have of probabilistic reasoning in action, and even on topics that I have no real interest in, it looks to me like he might be making exactly the sorts of errors that, I argue, probabilistic reasoning and seeking out opponents leave one prone to.

So, the summary of this part is this: always build a positive case first, to the best of your ability, including trying to anticipate any objections to it or problems with it. Then address competing theories and additional objections.

I’m going to skip a lot of the discussion of probability to get at what I’m interested in, which is epistemology:

Probability as Degree of Belief: This is also, I conclude, just another frequency measurement, and thus reduces again to the same one definition of probability as a measure of frequency. Only now, we are talking about the frequency of being right, given a certain amount of information. For example, if you predict a 30% chance of rain, when, given the information you have, on 30% of days you make that same prediction it rains (actually, or in a hypothetical extension of the same conditions), then the frequency with which you are “right” to say “it will rain” is 30% (or 3 times out of 10), and you are simply stating that plainly (and thus admitting to the uncertainty). So it is again just a frequency.

The thing is that I don’t think this is how degrees of belief actually work. We do not assess the strength of our beliefs in this way at all. We don’t assign them a probability, or at least not one that we can state as a number. Even the “30% chance of rain” probably isn’t really interpreted that way, and isn’t a degree of belief besides (it’s either explicitly a probability or else it’s a measurement of things like humidity). So if Carrier is arguing this as descriptive, it doesn’t really seem to align with how we talk about degrees of belief. And if he’s arguing that this is supposed to be normative, then he needs a better example than the weather (which is something we already consider unreliable).
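For clarity on what Carrier’s frequency reading amounts to, here is a minimal sketch with simulated data (all numbers are invented): on his account, the forecaster’s “30%” just is the frequency with which it rains on the days they say that.

```python
# A minimal sketch of the frequency reading of "degree of belief", using
# simulated data (all numbers invented): among the days on which the
# forecaster says "30% chance of rain", count how often it actually rained.
import random

random.seed(0)
days = [random.random() < 0.3 for _ in range(10_000)]  # the world rains 30% of the time
hit_rate = sum(days) / len(days)
print(round(hit_rate, 2))  # ~0.3: the stated "30%" cashed out as a frequency of being right
```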

Science has found a large number of ways people fail at probability reasoning and what it takes to make them better at it. It’s always helpful to know how you might commonly err in making probability judgments, and how others might be, too, in their attempts to convince you of some fact or other (even experts). See, for example, my article Critical Thinking as a Function of Math Literacy. A lot of cognitive biases innate to all human beings are forms of probability error (from “frequency illusion,” “regression bias,” “optimism bias,” and the “gambler’s fallacy” and “Berkson’s paradox,” to conjunction fallacies, including the subadditivity effect, and various ambiguity effects and outright probability neglect).

Now, to tie this into our beliefs, these findings are often used to show that while our reasoning is probability-based, we aren’t that great at probability and so make a lot of mistakes. But we could also ask if the real answer here is that we aren’t actually doing probability at all when we form beliefs. After all, catching a ball is described by calculus, but it’s clear that we aren’t actually doing calculus when we try to catch a ball, if for no other reason than that calculus ability and the ability to catch a ball aren’t correlated. People who are better at calculus aren’t better at catching balls, and the best people at catching balls are often terrible at calculus. The same thing could be said for reasoning. There is no reason to think that those who are most educated in probability are better at reasoning or that those who are the best at reasoning are also the best at probability.

The key thing to look for is actually those purported “probability errors”. Do we differ from what probabilistic reasoning would suggest due to failures to calculate the proper probability, or instead in ways that suggest that we are actually doing a different calculation? I have long held that the classic example of the farmer, the librarian and the champagne (or wine) is a good example of these “errors” revealing a different method or meaning, rather than just showing that we are in error. The example basically asks whether a person is more likely to be a farmer or a librarian if we are told that they drink champagne. Many people answer the librarian because it is more consistent with what we know of farmers and librarians for librarians to drink champagne (yes, it’s an egregious stereotype, but not an inaccurate one). But the answer from probability is that there are just so many more farmers than librarians that even with that added piece of knowledge it’s still far more likely that the person is a farmer than a librarian. This is then cited as an example of how we mess up probability calculations. But I submit that we aren’t translating “likely” there as a probability assessment, but rather as a degree of belief. And if we follow my preferred model of the “Web of Belief”, what we are really saying is that we should prefer to believe what is most consistent with the world. So if it is the case that it is more consistent with a librarian that they drink champagne than it is for a farmer, that’s what we should prefer to believe. Consistency, then, matters more than strict probability to us.
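For concreteness, here is the standard base-rate calculation behind that example, with invented numbers: even if champagne-drinking fits librarians far better, the sheer number of farmers swamps that evidence, which is exactly the verdict I’m suggesting we don’t naturally track.

```python
# A minimal Bayes-rule sketch of the farmer/librarian example (all numbers invented).
p_farmer, p_librarian = 0.95, 0.05               # base rates: far more farmers than librarians
p_champagne_given_farmer = 0.05                  # stereotype: few farmers drink champagne
p_champagne_given_librarian = 0.40               # stereotype: many librarians do

farmer_score = p_farmer * p_champagne_given_farmer            # 0.0475
librarian_score = p_librarian * p_champagne_given_librarian   # 0.0200
total = farmer_score + librarian_score

print(round(farmer_score / total, 2))     # ~0.7: probability still says "farmer"
print(round(librarian_score / total, 2))  # ~0.3: despite champagne "fitting" librarians better
```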

This also applies to Carrier’s next example:

For example, what does “normal” actually mean? Think about it. What do you mean when you use the word? How frequent must something be (hence what must its probability be) to count as “normal” in your use of the term? And does the answer vary by subject? For example, do you mean something different by “normal” in different contexts? And do other people who use the word “normal” mean something different than you do? Might that cause confusion? Almost certainly, given that we aren’t programmed at the factory, so each of us won’t be calibrating a word like “normal” to exactly the same frequency—some people would count as “normal” a thing that occurs 9 out of 10 times, while others would require it to be more frequent than that to count as “normal.” You yourself might count as “normal” a thing that occurs 9 out of 10 times in one context, but require it to be more frequent than that to count as “normal” in another context. And you might hedge from time to time on how low the frequency can be and still count as “normal.” Is 8 out of 10 times enough? What about 6 out of 10? And yet there is an enormous difference between 6 out of 10 and 9 out of 10, or even 99 out of 100 for that matter—yet you or others might at one time or another use the word “normal” for all of those frequencies. That can lead to all manner of logical and communication errors. Especially if you start to assume something that happens 6 out of 10 times is happening 99 out of 100 times because both frequencies are referred to as “normal” (or “usual” or “expected” or “typical” or “common” etc.).

The fact that we are inconsistent about what probabilities or frequencies make something count as normal or weird should suggest that maybe we aren’t actually using probabilities for that in the first place. I argue that we instead tend to use the idea of consistency again: what is normal is what is expected in those cases, and what is weird is what isn’t. As an example, someone may hear a bit of a squeal from their fan belt when starting a car when it’s damp. This may only happen 1 out of every 10 times, but once they know that it happens sometimes they will call it “normal”. And this can be applied to human behaviour as well, with weird behaviour either being things that are unlikely for that specific person even if they are common for other people (my going to a party, for example), or things that are just odd in general for most people and remain “weird” even if we know that the person does them frequently. So it seems like normal and weird in these cases aren’t about frequency but are instead about consistency.

People like Carrier may protest that they can shake out all of those cases with frequencies as well. This is probably true. But in line with the comment about errors above, it really starts to look like that’s the same sort of situation as we had with calculus and catching balls: you can model our assessments here with frequencies, but it’s clear that that isn’t what we’re actually doing. The differences between a strict probabilistic reasoning and the reasoning we actually do look a lot more like a different view or meaning than they do like us simply doing probabilistic reasoning incorrectly. This, then, is a huge blow against the idea of probabilistic reasoning: we don’t seem to do it, a case thus needs to be made to show that we should do it, and since we don’t do it there’s no evidence that we are actually capable of doing it.

Finally, let’s talk about logic:

This is where trying to model your reasoning with deductive syllogisms can be educational in more ways than one. Most especially by revealing why this almost never can be done. Because most arguments are not deductive but inductive, and as such are arguments about probability, which requires a probability calculus. Standard deductive syllogisms represent reality as binary, True or False. It can function only with two probabilities, 0 and 1. But almost nothing ever is known to those probabilities, or ever could be. And you can’t just “fix” this in some seemingly obvious way, like assigning a probability to every premise and multiplying them together to get the probability of the conclusion, as that creates the paradox of “dwindling probabilities”: the more premises you add (which means, the more evidence you add), the less probable the conclusion becomes. And our intuition that there must be something invalid about that, can itself be proved valid.

This completely misunderstands how deductive syllogisms work.

For a deductive syllogism, all we’re really saying is that if the premises are true, the conclusion is also true. What it doesn’t do is tell us how to determine whether the premises are true. We don’t need to multiply the probability of each premise being true to get some kind of overall probability of the final conclusion. If we know that all the premises are true, then we know the conclusion is true, whatever the criterion for knowledge is. So, no, it isn’t the case that the more premises we add, the less probable the conclusion becomes. If we know that they are true, then we still know that the conclusion is true, and if we don’t know that they are true, then we don’t know that the conclusion is true. It’s also odd to say that adding new premises is adding new evidence, because a lot of the premises in a syllogism are of the form “If X, then Y”. Evidence, in that model, would fit as an X, and we would be adding it only because we know that it happened, so it’s only the “If X, then Y” premises that would make us doubt the conclusion, since we’d have to validate that they actually hold. It will also be difficult to do what Carrier suggests, which is to prefer probabilistic reasoning to deductive reasoning, since if Carrier can’t make a logical argument saying that if his “If X, then Y” premises are true then his preferred conclusion is true, it’s hard to imagine what kind of argument he could possibly make.
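To be clear about what the “dwindling probabilities” point is doing, here is a minimal sketch with invented premise probabilities, set against the deductive picture where premises known to be true simply transmit truth to the conclusion.

```python
# A minimal sketch of "dwindling probabilities" with invented numbers: if each
# premise is assigned a probability and the probabilities are multiplied, every
# added premise can only lower the product.
from math import prod

premise_probs = [0.9, 0.9, 0.9, 0.9, 0.9]
for n in range(1, len(premise_probs) + 1):
    print(n, round(prod(premise_probs[:n]), 3))
# 1 0.9
# 2 0.81
# 3 0.729
# 4 0.656
# 5 0.59

# On the deductive reading there is no such product: if every premise is known
# to be true, the conclusion is known to be true, however many premises there are.
print(prod([1.0] * 5))  # 1.0
```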

Also, almost anything that’s a fact is going to be either true or false, either 1 or 0 as he notes above. So the fact that deductive syllogisms can only deal in those values seems to suggest that they’re actually dealing with reality. Yes, fuzzy logic was invented to deal with cases that don’t seem to fit that model, but how many cases actually require it is debatable.

The more people like Carrier try to defend probabilistic reasoning, the more suspicious I am of it and the more convinced I am that the “Web of Belief” is the right approach. It seems to capture what we do, to be more useful in more cases, and not to demand that we learn advanced probability just to be able to function in our everyday lives. Probabilistic reasoning seems like ball-catching calculus: we can model what we do that way, but it isn’t what we do and we aren’t capable of using it in that way.

2 Responses to “Probabilistic Reasoning?”

  1. Tom Says:

    I have nothing to really say when it comes to the topic here. Just wanted to bring up how Carrier really is one of the more unpleasant atheist commentators I have ever seen. Those short, clipped and smug sentences. I used to find him interesting and even kind of liked him until he got into a debate with the Catholic sci-fi writer Michael Flynn (The Ofloinn). Flynn is unfailingly polite with Carrier and Carrier’s response is to become increasingly nasty, calling Flynn things like ‘retarded douchebag’, etc. This was around the point where I lost all respect for the man.

    Sorry. Had to vent.

    • verbosestoic Says:

      Yeah, that only makes his comments in the post about finding the best opponent and engaging in polite conversation with them all the more ironic, since he rarely does anything like that.
