It’s time to take up another one of Richard Carrier’s blog posts from this year that I’ve wanted to talk about but haven’t had the time for yet. This one is a bit more recent — it’s from back in August — and is a rewrite of a post he did on his previous blog about the Epistemological Endgame. So let’s start by looking at his opening paragraph describing the issue of the infinite regress in epistemology:
One of the big issues in epistemology is the problem of infinite regress. “I believe the sun will rise.” “How do you know that?” “Because it always has.” “How do you know that?” “Because my memory and human records confirm it has.” “How do you know that?” “Because I’ve examined those memories and records.” “How do you know that?” And so on. It looks like this could go on forever. It seems like any answer you give can be doubted. We can always keep asking “How do you know that?” And that isn’t the only line of regress. “I believe the sun will rise.” “How do you know that?” “Because it always has.” “How do you know something that’s always happened will continue to happen?” And so on.
The thing is — and this is what actually pushed me to examine this post — the problem of infinite regress isn’t actually all that big a problem in epistemology anymore. The questions are still relevant, but it turns out that pretty much any modern epistemology — which includes Carrier’s own Bayesian one — doesn’t have the problem anymore, or at least not to the point where the questions listed above are actual issues. I’ll talk more about why that’s the case, but first let me talk about where this problem comes from, even in Carrier:
There is clearly only one sound solution to epistemological regress: the end game is always something self-evident. That is, all beliefs rest ultimately on a bunch of things you believe because they are sufficient evidence for themselves and thus no further evidence is required, and thus no further regress. This is called Cartesian Knowledge: raw, uninterpreted, present experiences, which alone have a zero probability of not existing when they exist for an observer because “they exist for an observer” is what a raw, uninterpreted, present experience is. They are thus self-evident.
I had never heard those things referred to as “Cartesian Knowledge” before, so I did a search for the term. The only thing that gave me any kind of summary of that specific term was the AI summary, which said this:
Cartesian knowledge, stemming from René Descartes, is a philosophy of certainty built on radical doubt, seeking indubitable truths through reason (rationalism) rather than senses, famously starting with “I think, therefore I am” (Cogito, ergo sum), and progressing to establish foundations for science and the external world through innate ideas and deductive logic, contrasting with empiricism.
Pretty much every other reference was just to Cartesianism itself, except for one reference to Cartesian Doubt, which says this:
Knowledge in the Cartesian sense means to know something beyond not merely all reasonable doubt, but all possible doubt
Which isn’t really what Carrier means here, but will be important later.
But ultimately, whether or not he is using any kind of recognizable term, he’s correct that the infinite regress problem is at least at its most … problematic in Cartesianism, which I normally refer to as Cartesian Epistemology, though I will refer to it here most of the time as the Cartesian Project. What Descartes was trying to do was indeed the process of Cartesian Doubt linked above, where he tunneled down to the statements that we could not doubt and then attempted to use them to justify the things we could doubt — like our sense experiences — by making moves that we also could not doubt to justify the next step up in the chain. As people who have studied Cartesian Epistemology will know, he succeeded in following the regress down to the foundational, undoubtable principle of “I think, therefore I am”, which establishes that a thinking subject exists. He didn’t justify experiences in quite the same way, but Carrier’s analysis of “I am experiencing X at this moment” — in terms of that being what a person is indeed experiencing — would fit as well, as long as one does what Carrier indeed does and notes that the content of that experience is still doubtable. Macbeth cannot deny that he is experiencing a visual sense experience of a dagger in front of him, but he can doubt whether there’s actually a dagger there.
So we can solve the downward progression of the infinite regress. But the issue for the Cartesian Project is the second part, which is then building upwards from there to justify things like, indeed, the content of sense experiences. And what is, I think, generally accepted in epistemology is that Descartes ultimately failed to do that. Even introducing an all-good God who would not deceive us in that way doesn’t solve the problem because, of course, the existence of that sort of God is doubtable, as opposed to us having a Cartesian Demon in charge of our sense experiences and deliberately deceiving us. Carrier spends a lot of time trying to justify his endpoints on the downward progression, but in my view doesn’t manage to build things back up again.
But at any rate, the main philosophical conclusion from the Cartesian Project is that insisting on certainty for knowledge claims is a non-starter. There’s just no way to get that given our flawed mechanisms and all the doubts that they and we can come up with. So we have to abandon the desire for certainty in our justifications of belief, which means that we have to accept, on the “justified true belief” model of knowledge, that it is possible for me to believe that p is true, to be justified in believing that p is true, and yet for p to ultimately end up being false. Descartes didn’t think that was the case, and early on in my philosophical career I also leaned that way — asking how we could claim to know that p is true if p ended up being false — but was corrected in my thinking by David Matheson, who commented that I was mixing up first order knowledge — I claim to know that p — with second order knowledge — I claim to know that I know that p — and so it is possible for me to validly claim to know that p while being wrong about knowing that I know that p. And ultimately, we need to allow for someone to be justified in believing that p and yet for p to be false, because otherwise we could only justify a knowledge claim by certainty, which is impossible, and so we could never really know anything.
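To put Matheson’s distinction a bit more formally (this is my own gloss in standard epistemic-logic notation, not anything Matheson himself used), the point is that fallibilism rejects the so-called KK principle, on which knowing automatically brings knowing that you know:

```latex
% First-order vs. second-order knowledge: my own formalization,
% purely for illustration.
%   Kp   reads "I know that p"
%   KKp  reads "I know that I know that p"
% The KK principle (positive introspection) asserts Kp -> KKp.
% Fallibilist epistemologies reject that entailment:
\[
  Kp \quad\not\Rightarrow\quad KKp
\]
% So a claim of Kp can be valid even when the further claim KKp is wrong,
% which is exactly the first-order/second-order confusion described above.
```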
And so if we return to the questions above, the answer to “How do you know that?” follows from the epistemology. For Carrier, all he would need to do is present the evidence and the probability calculation and demonstrate that the probability is high enough for him to claim knowledge. For myself, all I need to show is that my belief was formed by a reliable, truth-forming faculty, that it fits into my Web of Belief, and that it doesn’t end up being contradicted by my acting in reality all that often. And then if someone notes that our conclusions rely on sense data and asks how we know there isn’t a Cartesian Demon mucking around with them, or how we know that we aren’t in the Matrix, again we have answers for that. Carrier will answer that the probability of those things is so low that he doesn’t need to consider them until that person can provide some real evidence or argumentation for them. For me, all I have to do is note that accepting those propositions would mean wiping out a large portion of my Web of Belief without building anything back up and without being supported by anything else in it, and so again I can refuse to consider them until they can provide some support, or use them to resolve strong contradictions in the Web in a less destructive way. As we can see, this changes the nature of the questions from simply being skeptical doubts to a question about epistemology. If someone doesn’t accept our epistemology, our answers won’t satisfy them, but it is clear that, by the standards of our own epistemologies, we need not consider those doubts, and they don’t undermine those epistemologies.
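To make the Bayesian half of that concrete, here is a sketch of what such a probability calculation might look like for the sunrise example. The numbers are entirely illustrative (mine, not Carrier’s); the point is only the shape of the calculation:

```latex
% Bayes' theorem applied to h = "the sun will rise tomorrow",
% given e = the unbroken record of past sunrises.
\[
  P(h \mid e) \;=\; \frac{P(e \mid h)\,P(h)}
                         {P(e \mid h)\,P(h) + P(e \mid \lnot h)\,P(\lnot h)}
\]
% With a deliberately neutral prior P(h) = 0.5 and illustrative likelihoods
% P(e | h) = 0.99 and P(e | ~h) = 0.0001 (a world where the sun stops rising
% is very unlikely to have produced such a record), this gives
%   P(h | e) = (0.99)(0.5) / [(0.99)(0.5) + (0.0001)(0.5)] ~ 0.9999,
% a posterior high enough, on Carrier's approach, to license the knowledge claim.
```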
And that’s the key here: the questions that drive the infinite regress problems are indeed simply skeptical doubts, questions that ultimately force us to consider that we could be wrong. But most modern epistemologies start from the premise that we could be wrong. We do not claim certainty. And because of that, all questions of the form “Have you considered that you might be wrong?” are answered with “Of course! But obviously I don’t think I am”. Merely raising skeptical doubts was an issue for the Cartesian Project, but is not an issue for modern epistemologies. Yes, even Carrier’s epistemology.
So this pretty much eliminates his entire project of resolving the infinite regress, which is good because, as noted above, I don’t think he managed it, and most of his approach is rather muddled. I think he ultimately tries to solve it with a normative claim, which doesn’t work for other reasons, but I’ll get to that in a minute, because it’s time to look at his discussion of properly basic beliefs and why theism can’t be one and why Plantinga is just wrong. His conclusion is this:
That means Cartesian knowledge is the only properly basic belief. To say something is “properly basic” is to declare that it’s something we get to assume without needing a reason to believe it—other than itself. We need another reason, at least some reason, to believe anything else. In fact anything that could be false requires a reason to believe it other than itself. Therefore only things that cannot be false can be properly basic. And that means, quite simply, Cartesian knowledge.
So, like the notion of “Cartesian knowledge”, this puzzled me, but for another reason. My impression of basic beliefs and properly basic beliefs as per Plantinga is that they, in fact, have no need to be certain. In fact, my impression of them was that they are beliefs that we hold for the most part just because we hold them, and that we are justified in continuing to believe them as long as we don’t have a defeater for them — a strong reason to think that they are false that cannot be a simple “Well, they could be wrong”. If we are talking about mere beliefs, it was always the case that we considered that they could be wrong, because that’s what differentiates a mere belief claim from a knowledge claim. Since they can be defeated, that implies that they are not considered to be certain. So Carrier cannot simply claim that properly basic beliefs a la Plantinga are Cartesian knowledge claims that are certain, since Plantinga clearly does not consider them to be certain. He would instead have to demonstrate that, effectively by Plantinga’s own definition, properly basic beliefs have to be certain. Otherwise, he’d be equivocating by using the same name for a completely different concept.
So, then, that caused me to do another quick Google search for what Plantinga’s idea of a properly basic belief actually was. And I found a good description on this Philosophy Stack Exchange question:
According to Plantinga, roughly again, a belief is warranted if it is produced by cognitive faculties designed to produce true beliefs given certain kinds of inputs in particular cognitive environments. So, for example, beliefs produced by our visual faculty in an environment with good lighting and looking at an objects near to us, are properly basic because they are warranted but not warranted inferentially from more basic propositional beliefs. Rather they are warranted by our visual faculty producing a belief about an apple on the desk and it was designed to do so in that kind of environment. (Presumably it is the mental image of the apple on a desk that is caused by our visual faculty which produces true beliefs in that environment). Other cognitive faculties involve memory, reason, etc.
So, a basic belief is a belief that we just have, and a properly basic belief is a belief that we just have but that was produced by a faculty designed to produce true beliefs, under the conditions where that faculty produces true beliefs. To tie back into the Evolutionary Argument Against Naturalism — which Carrier references in the post as well — the issue with evolution was that it selected for utility, not truth, and so we had a defeater for our beliefs in that they were not selected for truth, and so we could reasonably doubt that they actually were producing a truthful belief as opposed to simply a useful one. Noting that truth-producing faculties would be useful and so could be selected for on that basis didn’t work, because we were still selecting on utility, and so the faculties were useful but not necessarily ones that produced truth. However, under this revised version of properly basic beliefs we can actually sidestep the EAAN. See, in order for us to claim to have any properly basic beliefs at all, we need to have a way to establish that our faculties are designed to produce true beliefs, or at least that they do so reliably. Their producing consistent beliefs that we can use to navigate the world is a good way to establish that. But, regardless, if we note that faculties can be “built” to produce true beliefs, and that faculties built that way would be selected by evolution for their utility — as producing true beliefs has utility — then we no longer have the defeater that our rational faculties and their products might only be producing useful beliefs and not true ones. We can establish that they seem to produce true beliefs, note that if they did evolution would select for them, and even argue that evolution is more likely to select for faculties that produce true beliefs than for faculties that produce useful but false beliefs, because it is easier to produce useful true beliefs than useful false ones. Plantinga could only argue that those faculties aren’t “designed”, but that’s a weak argument that seems to be there only to allow him to maintain his idea of a designer.
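For reference, Plantinga’s argument is standardly put in probabilistic terms, which makes the sidestep just described easier to state. This is the standard notation from the literature on the EAAN, not anything specific to Carrier’s post:

```latex
% Plantinga's EAAN in its usual probabilistic form, where
%   R = "our cognitive faculties are reliable",
%   N = naturalism, and E = our faculties arose via evolution.
% Plantinga's claim is that
\[
  P(R \mid N \land E) \text{ is low or inscrutable,}
\]
% which is supposed to hand the naturalist a defeater for R (and thereby
% for every belief those faculties produce). The reply sketched above
% amounts to denying the low-probability claim: if true beliefs are
% generally more useful than false ones, then P(R | N & E) comes out
% high, and the defeater never gets going.
```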
But that’s neither here nor there. The key is that a faculty that is designed to produce true beliefs doesn’t always have to produce true beliefs. Sometimes it can produce a belief in conditions where it does not reliably produce true beliefs. And since it only needs to be reliable, it can produce false beliefs on occasion and still be a reliable truth-forming faculty. And thus, for Plantinga, properly basic beliefs are beliefs that we just have, that were produced by a faculty designed to reliably produce true beliefs, and that we don’t have a defeater for. If that is the case, then for Plantinga we don’t really need to provide specific reasons to maintain such a belief, and we are justified in holding it unless someone can provide a really good reason why we shouldn’t. And as above, it’s not sufficient to say “You could be wrong” or to provide some skeptical argument against the belief, our reasoning, or that faculty. They need a specific reason to think that the faculty got it wrong in this case.
For the theism debate, what this means is that it switches the burden of proof around. Since for most people theism is a basic belief — we learned it from our parents or culture or society — and since the faculties that produced it tend to be ones that aim at producing true beliefs — even if they don’t always manage that — we do not need to provide proof sufficient to satisfy the atheist in order to maintain our belief in God, and in fact if they want us to abandon it they need to provide proof sufficient to challenge it specifically. That doesn’t mean that atheists can’t rationally abandon the belief in God, but it does mean that they cannot claim that theists are irrational for maintaining it without providing that really good reason to think God doesn’t exist. And that is something that atheists tend to avoid doing.
But, at any rate, Carrier’s notion of a properly basic belief does not match Plantinga’s, and so he cannot challenge Plantinga’s conclusion using it. And we still have no need of his “Cartesian knowledge”.
Ultimately, Carrier’s attempt to rebuild knowledge from that foundation seems to rely on the normative:
Thus, there is no regress. But the underlying normative nature of this end game must not be overlooked. In effect, my entire epistemology rests on a conjunction of just three premises, which I will greatly oversimplify for the point here:
- A: “Following certain principles will probably make things go better for me than not following them will”
- B: “If I want things to go better for me, I ought to follow the principles that will probably make things go better for me than not following them will”
- C: “I want things to go better for me.”
So, ultimately, as he said earlier in the post, he is relying on the idea that an epistemology exists to make things go better for us, and so we can justify the epistemology we use on the basis of whether it allows us to be successful in the domains that we apply it to. Now, earlier he makes a rather … bold claim:
This does get us to a realization, though: all epistemologies are fundamentally built on axioms that are, in fact, imperative propositions. In other words, every epistemology is constructed on top of a set of “I ought to believe x when y” propositions, and therefore, if it is true that any epistemology ought to be adopted by everyone, then epistemology as such is a subset of morality—and it would therefore be immoral to knowingly violate the axioms of a true epistemology. There is an ethics of belief.
Thus, it is immoral to not act according to the one true epistemology.
This is a confusion over normative claims, as just because something is normative doesn’t mean that it is moral. Just because he can toss an “I ought” in there doesn’t mean that he’s making a moral claim, especially when he’s talking about hypothetical imperatives. The statements really are “In order to be moral, I ought to do X” and “In order to be a proper knower, I ought to do X”, in the same sense as “In order to be a proper deck, it ought to have X”. But only the first one is a moral claim, made clear by its referring to morality in the statement. For epistemology, it is entirely possible that being a proper knower would involve doing things that are immoral, such as having an epistemological imperative to discover all knowledge, which would result in generating knowledge that harms people. We don’t really think that there are any serious conflicts between morality and epistemology, but since conceptually there could be, the idea that epistemology is a subset of morality doesn’t work.
Carrier could use the statement I showed above to argue that, on his morality, we are morally obligated to use only a proper epistemology, because we are morally obligated to act in a way that accords with achieving our true self-interest, and using the right epistemology is required to do that. But this would then highlight how odd his morality is, as questions of the right epistemology don’t seem like moral ones. And he doesn’t argue this anyway.
But there is an issue with basing his epistemology on this statement, because it looks to me like I can use it to justify things that he would not like me to justify. Let’s take his statements of a proper epistemology:
So our bottom basement of circularity arrives here, at the point when we decide on the most fundamental principle underlying all of the above, which I will call principle K:
- K: “I ought to believe x when I have (a) evidence supporting x and (b) no evidence supporting what would have to be true for me to have (a) and yet for x to be false.”
…
The contrary inductive principle ¬K would then be:
- ¬K: “I ought to doubt x when I have (a) evidence supporting x and (b) no evidence supporting anything else that would have to be true for me to have (a) and yet for x to be false.”
We are thus faced with an ultimate choice: K or ¬K? Which principle do I follow? I can try them both out right now, and immediately see that following K leads to correct predictions and the satisfaction of my desires and the fulfillment of my plans, while following ¬K does much poorly in all three respects.
The thing is that I can introduce another principle alongside K:
- I ought to believe x when a) it would benefit me to believe x, b) I do not know that x is false, and c) there are no significant negative consequences if my belief is false, regardless of whether I have sufficient evidence supporting x
This does not violate his normative imperative, because it is directly based on calculating the benefit to me. But it’s not irrational either, because it is responsive to evidence. But it explicitly says that I can believe something for “no reason” — which Carrier argues earlier in the post is irrational — as long as it would benefit me to do so. Carrier might be willing to go along with that, but he won’t when he sees that I can use it to justify Pascal’s Wager. See, Pascal’s Wager says that it is at least not irrational to believe in God, since if we do and God exists we get infinite benefit after we die, while losing nothing after we die if He doesn’t. And in this world, the costs are negligible compared to the potential benefit, and there is no point in considering Pascal’s Wager at all if we know that God doesn’t exist. And so it seems to benefit me more to believe that God exists than that He doesn’t, and so not only am I not irrational for doing so, but I might even be morally obligated to do so.
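Spelled out as a simple expected-utility calculation (my reconstruction of the Wager’s logic, with p standing for the probability that God exists and c for the finite worldly cost of belief), the structure looks like this:

```latex
% Expected utility of each option, on this reconstruction of the Wager:
\[
  EU(\text{believe}) \;=\; p \cdot \infty \;+\; (1 - p)(-c)
  \qquad
  EU(\text{disbelieve}) \;=\; p \cdot 0 \;+\; (1 - p) \cdot 0 \;=\; 0
\]
% For any p > 0 and any finite cost c, EU(believe) is infinite, so belief
% dominates. This is why clause b) of the principle above ("I do not know
% that x is false", i.e. p > 0) is doing the real work: the Wager collapses
% only if we know God doesn't exist.
```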
But wait, there’s more! This also sidesteps the Outsider Test as an objection here, because we are adopting this belief on the basis of benefit, and one thing that we are going to want to do is minimize the costs in this world. Yes, an infinite benefit will always overwhelm them, but since we could be wrong we don’t want to spend too much on the Wager. So we are likely to either go with a belief that we already have, or with the belief of the culture that we are in, because doing the things we need to do to express and acquire that belief is going to be more convenient if we are a) already doing it or b) doing the thing that most people around us do. So we will default to the belief of the culture that we are in or most attached to. Yes, other cultures will have other beliefs, but it’s more difficult for us to follow those beliefs, and so we will tend not to unless we get something out of it. And we can even justify not changing our beliefs if we move to those other cultures, on the basis that it is easier for us to just keep doing what we’re doing. So why do we pick the belief we do? Because it’s convenient. Why don’t we choose one of the other ones from other cultures? Because it’s less convenient. Why don’t we change our belief? Because it’s less convenient. And so the Outsider Test is answered.
So from Carrier’s normative basis, I can generate a principle that is perfectly consistent with that basis that allows me to believe things “for no reason” and without evidence, and then from there justify Pascal’s Wager while defeating the Outsider Test. I … don’t think that’s what he intended.
Let me add a final note on circularity, because Carrier argues that circularity isn’t an issue and is indeed how this regress ultimately ends:
So we do need to trust memory to engage in complex reasoning. We just don’t trust it because it is properly basic. We trust it for reasons. “But those reasons circularly include other memories you have to trust” is true, but the end game is the experience of all this, which includes an experience of its coherence: that it is working now is evidence that it has and will.
But ultimately the issue with circularity is always that: in order to justify something, I need to rely on its already being justified in order to do so. An example is the debate between John Dewey and Bertrand Russell over sense experiences. Dewey took the standard empiricist/scientific route and argued that he took observations and then asked other people to verify those observations, and thus from there could justify that his observations were accurate. But since he gets those verifications through the senses, he was relying on his sense impressions of their confirmations being accurate, and if his sense impressions were already justified as being reliable enough to, well, rely on, he wouldn’t need to do that in the first place. That is how circularity is an issue: one cannot rely on the very thing one is trying to justify in order to justify it.
For memory, Carrier notes that a memory experience is an experience of memory, but has to admit that the content of that experience can be false, and we can even note that whether it actually is a memory — as opposed to a current imagining — is also open to doubt. So nothing in the memory itself — even that it is a memory — is properly basic for him (by his own definition). So, as per the above, it looks like he needs an argument or set of observations to justify that memory is reliable, specifically to make the case that relying on his memory has produced and continues to produce consistent results. But it also looks like that’s a complicated enough piece of reasoning to require memory. And if it is, what he’d be doing is relying on reason to justify memory, but in order to produce that reasoning he needs his memory to already be justified as reliable, but that’s what he’s trying to use the argument to prove, and so the circularity problem arises. He cannot use reason to justify memory so that he can justify reasoning given memory.
The same problem he has with memory arises for his more “normative” claim about desire:
- C: “I want things to go better for me.”
Properly interpreted, C is an undeniable experience of desire and thus properly basic.
He can claim that we are experiencing “desire”, but we cannot justify the content of that desire, which means that we can’t justify that we actually desire that specific thing. Carrier tries to argue that it’s somewhat tautological:
For example, I can be wrong or confused about what “I” means and what “better for me” means, but I cannot be confused about the fact that on some construction of those two terms it is always true that “I want things to go better for me.” That realization is properly basic. Because it can’t actually be false. Even if I incidentally, irrationally, want things to go worse for me, I am then simply redefining what is better for me.
But this is false. It is entirely possible for someone to know what would make things go better for them and yet desire for things to go worse for them. Yes, it would seem irrational, but here he needs the actual state, not the normative ought statement. Yes, rationally someone ought to want things to go better for them, but it is indeed conceptually possible for them not to actually want that, in the same way that one can know what it means to be moral and yet not want to be moral. The only way to get around that is to ditch any notion of objective benefit and argue that what is better for someone is just whatever they happen to want for themselves — which Carrier does try to argue here — but this undercuts his entire objective morality, which is about differentiating between what is really best for a person and what they merely believe is best for them. As soon as one makes that distinction, the whole “redefining what is better for me” move becomes invalid, as there is an objective notion of what is better for me (even if it is personal) that can be determined outside of what anyone happens to think is better for them, which makes this a non-starter. Thus, this properly basic belief in its basic form might be properly basic as per Carrier’s definition, but it is not at all useful, and when he tries to use it to build a richer notion it ultimately fails to accomplish that.
But, ultimately, there is no longer any real problem of an infinite regress in epistemology. That was only a problem when we insisted that knowledge claims could have no possibility of being wrong, but since that would mean we could have no knowledge at all, that requirement has been abandoned, and most modern epistemologies — including Carrier’s — no longer have that issue and so are no longer vulnerable to the questions raised in his first paragraph. So the solution to the infinite regress problem is that there is no such problem after all.
Final Thoughts on “Suikoden”
So after having some time on vacation and being motivated to get through the game, I managed to finish off the original “Suikoden”.
I’ve said that the plot moves very quickly, but ultimately I realized that it’s not that it moves quickly but that it’s just pretty shallow. It’s just a basic plot where the son of a famous general needs to rebel against his own Emperor, who is under the control of an evil magic-user. He confronts his father, and there’s no attempt to convince him of that evil despite its having been made abundantly clear to him. We get a thread where the magic-user is using runes to control the generals, but at the end all of the generals act entirely out of loyalty. The Emperor’s motivation is revealed to actually be one of love for the magic-user. The magic-user’s motivation is ultimately depression for no reason. We get a bunch of mostly unrelated vignettes as we gather forces and Stars of Destiny, who could join our group if they didn’t start so underpowered that it is mostly better to stick with one group, although grinding is at times fairly easy, so you could grind them up to the proper level if they offered anything that useful.
Collecting the Stars of Destiny was another issue, as it often involved very specific things. For the most part, a number of them joined automatically or relatively easily, while some had more involved ways to get them to join. This isn’t a bad thing, actually, although there weren’t very many hints as to what to do, which made it difficult to figure out, which is bad when the game’s plot is proceeding and you aren’t sure if you want to wander around all areas trying to find all the characters. I think I got about half of them or so, judging from the ending that talks about what everyone did after the game. Which is another issue, because for the most part I didn’t care about most of them enough to care about what happened to them afterwards. With so many of them, we don’t get the constant interactions that we got with the companions and areas in “Dragon Age: Origins”, and so we don’t get the feelings that we got with that game, but then again there didn’t seem to be too many ways to interact with them outside or even inside of parties anyway. This is something that “Suikoden III” did much better, as with four parties you can recruit them all individually and get an attachment to them, and things like the play and the baths are fun enough to give you a sense of connection to them. Maybe if I had done more with that I would have had the same feeling, but again it was vague how to get that stuff going, and the one time I tried the bath nothing happened, which discouraged me from trying it again. I had spent a session recruiting as many characters as I could, and ultimately decided after that session that I just didn’t care that much anymore and went on to the final battle.
I replayed the final army battle a number of times to avoid losses, because I kept picking the wrong options. Then I finally learned that my ninjas and thieves could find out what the enemy army’s move was going to be, and that my ninjas were 100% accurate, at which point I won with minimal losses. Then the final dungeon was a long slog where I used up my all-enemies rune skills figuring that I’d face one enemy … only to face a multi-headed hydra, with one head that revived dead heads, which ticked me off. But fortunately it wasn’t at full health, and so I managed to win through, using the many, many Mega Medicines that I had bought and stocked most of my party with. Then there was the brief explanation of motives, and the Emperor and magic-user jumped off the tower, and then I had to escape the crumbling tower. Viktor and Flik stayed behind, but since I know the former was referenced in “Suikoden III”, and I think the latter was as well, I guess they survived.
Ultimately, it took me about 20 hours to beat the game, and that was 20 hours that I spent thinking that I would rather be playing “Suikoden III”. I suppose I have to forgive the game for its flaws given that it was the first one and there was lots of room to refine the model. Let’s see what they did with “Suikoden II”, which is up next.