So, I’ve been involved in a long, long discussion over at Jason Rosenhouse’s blog, in the comments of this thread. There are two people that I’m debating with at the moment: eric — whom I’ve had a lot of similar discussions with in the past — and sean samis. Rosenhouse has a policy where comments on his posts get closed after a certain amount of time, and that thread has, in fact, hit that limit. But there are things in the last couple of comments that I want to address, so I’ve decided to try to address them here. I’ll try to leave a link there, but I know that eric knows that this blog exists.
Anyway, let me give a summary of the debate so far, which will by necessity be at least a little shaded towards my perspective. This all started from a comment by Rosenhouse that multiverses were as good an explanation for purported fine tuning as God was (eric continually refers to the latter argument as "Goddidit"). I started by saying that it looked like those who supported the multiverse explanation were doing exactly what they accused religious people of doing: inventing or adding entities to get themselves out of an implication that they didn't like. Eric then said that it wasn't like that (or, at least, it wasn't for him), and then we started debating whether multiverses in that sense were more reasonable. Eric insisted that multiverses followed from inflationary theory, and inflationary theory was supported scientifically, so multiverses were supported scientifically and so more rational. I pointed out that multiverses weren't supported scientifically by inflationary theories, because there were inflationary theories that explained the evidence equally well and didn't imply multiverses, and even Eternal Inflation didn't actually entail multiverses. We chased this around a bit, and then turned to a discussion of what it means for a belief to be rational, or more rational than another, mostly, I guess, because I was saying that his belief in multiverses, or his belief that multiverses were the explanation for fine tuning, was rational, but not the only rational option. At that point, we needed to figure out what a rational belief was. Sean samis weighed in on this issue as well.
Which brings us to where we are now. Let me start with eric's latest comment. Before we start, let me reiterate a comment that was made earlier and that sent us down a bit of a rabbit hole (comment #252). After we had been debating this for a while, in 243 I pointed out that when I was defending theistic belief as rational, I was essentially doing so as a response to a charge that people who believed in God were doing so based on a belief-forming process that they shouldn't trust to form beliefs. This traces back through most of the discussion (and previous discussions) where eric and I disagree over whether it is rational and/or acceptable to maintain a belief that you learned from your parents or culture. I told him that if he meant "rationally" in a different way, then he needed to be clear about that. His reply in 252 was this:
VS @243 re: 1) I really don’t care for purposes of our discussion whether irrational beliefs are ‘a bad thing’ or not. I would be happy with the answer that you agree that, under standard definitions of ‘rational,’ belief in God is not rational (while under your different, broader definition, it is).
Which of course led me to believe that what he was after was STRICT rationality: that the belief be produced by, or rely directly on, reason. That isn't a discussion I was interested in, as I stated, since establishing that wouldn't show that beliefs not formed directly by reason were invalid, or that we ought to believe one over the other, which was the heart of the debate: should we not believe that God is an explanation for fine tuning, or should we at least believe that multiverses are a better explanation? And my frustration with most of the debate is that eric consistently seems to conflate "rational" in the strict sense with "rational" as a way of saying that one ought not hold a belief if one wants to be considered rational, whether or not the process that produced it is strictly rational. As an example, it's possible that beliefs formed by intuition are ones that we can and ought to hold, even though intuition is not a strictly rational process. This was the example that I did use and that eric never really acknowledged; he seems to be denying that there are any such cases, as all of his examples take processes that are both not strictly rational and ones that we think are invalid.
I think the reason for this is that eric does think that a process not being strictly rational means that beliefs produced by it are ones that we ought not hold, or at the very least that we ought not hold beliefs produced by it if we had a strictly rational process — like science — to turn to. This is probably what we should be debating, but somehow we keep running down rabbit holes.
At any rate, I don't really want to start with that point. I want to start with the discussion of my definition of "rational belief", which says that it is rational to hold a belief if: 1) the belief doesn't contradict any of your other beliefs and 2) you don't have the evidence to know that the belief is false. Eric keeps claiming that this is overly broad and that it isn't what people mean when they say "rational", even though the discussion above outlines that "rational" really is meant in multiple ways. Anyway, eric said that another commenter, Gordon, didn't think that he knew that evolution was true (i.e. that he hadn't been presented with sufficient evidence to force that conclusion) and so his belief that evolution was false was therefore rational by my definition. I said that it wasn't, because knowledge is objective and can be objectively and externally determined. Eric's reply was:
I didn’t ask about knowledge, I asked whether Gordon’s belief is rational by your definition.
But you can’t judge the rationality of a belief by my definition without talking about knowledge. I repeatedly pointed this out to eric. The main thrust is this:
1) Knowledge trumps belief.
2) Knowledge is objective: given that we both have access to the same evidence, if you are justified in saying that you know that X then _I_ have to be equally justified. If not, then you don’t know that X.
So since eric and I both accept that we know that evolution is in general true — some details of it might not be — then we can say that Gordon ought to know that evolution is in general true as well. If Gordon wants to deny that, then he has to justify a claim that we don’t really know that evolution is true, which can be done either by pointing out that we haven’t and/or can’t present the justification to him, or pointing out that our purported justification is actually wrong or doesn’t get to the level of knowledge. Beyond that, he ought to know that it’s true.
So, no, under my view Gordon’s belief that evolution is false isn’t rational, unless he can show that we don’t, in fact, have knowledge. Thus, since eric thinks that we do, for the purposes of this discussion eric has to concede that at least in reference to that example my method and his come to the same conclusion.
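To make the two criteria, and the role that objective knowledge plays in the second one, concrete, here is a minimal toy sketch in Python. Everything in it (the proposition tuples, the crude contradicts test, the is_rational function) is my own illustrative stand-in, not anything from the debate itself:

```python
def contradicts(a, b):
    """Toy test: two claims contradict when they assign opposite
    truth values to the same proposition."""
    return a[0] == b[0] and a[1] != b[1]

def is_rational(belief, other_beliefs, known_facts):
    """A belief is rational iff (1) it is consistent with the
    believer's other beliefs and (2) it is not contradicted by
    what is objectively known. Note that known_facts is the
    shared, externally determinable set, not the believer's own
    assessment of the evidence."""
    consistent = not any(contradicts(belief, b) for b in other_beliefs)
    not_known_false = not any(contradicts(belief, f) for f in known_facts)
    return consistent and not_known_false

# Gordon's case: his denial of evolution may cohere with his other
# beliefs, but it fails against the shared body of knowledge.
gordon = ("evolution is broadly true", False)
knowledge = [("evolution is broadly true", True)]
print(is_rational(gordon, other_beliefs=[], known_facts=knowledge))  # False
```

The point of the sketch is that known_facts is filled in objectively, not by Gordon's private assessment of the evidence. eric, though, pressed on exactly whose assessment gets to fill that set in: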
Gordon thinks he has been presented with no contradictory evidence, and thus his belief passes your #2 criteria. You and I think otherwise. How do we decide whether his belief has passed your #2 criteria? Do we go by G’s assessment or our own? If his own, then doesn’t your criteria for rationality allow just about everything in the door? OTOH if we go by our assessment, can I not apply the same “not his but our” standard to Gordon’s belief in God? And your belief in God?
We apply an objective perspective, one that does not depend on what we believe or think is true but on what we know is true. So it's not a choice between what we think and what he thinks. If that were all we had, then neither side would have knowledge, and we really would have to let everyone go by what they think. But that's not what we have for evolution; we have much more than that. At which point, eric cannot move from "we can externally judge the rationality of someone else's belief when we have knowledge and can present the justification to them" to "we can do that even when we don't have knowledge". And eric does not have knowledge for multiverses, as they are considered speculative at best … and eric does not know that God doesn't exist.
Let me illustrate this with an example. Imagine that someone shows a person a recording of that person's spouse having an affair. This evidence is, in general, sufficient to justify knowledge, even though recordings can be faked. So if that person refuses to believe that their spouse is having an affair, then that belief (and even the mere lack of belief) is irrational; they are refusing to accept a belief produced by a belief-forming faculty that reliably produces true beliefs, one that they rationally ought to accept. In short, they ought to know that it's true, and so ought to believe it.

Now, imagine that the person showing them the recording has always been romantically interested in them, and has been trying to break them and their spouse up for a long time. At this point, the idea that the recording was faked becomes much more credible, and a possibly faked recording is no longer a reliable source. It is then possible that the person is no longer justified in claiming to know that their spouse had an affair, but they still have to believe something. What? They could decide that, despite the ulterior motive, the person with the recording is still credible enough to believe. Or they could show faith in their spouse and insist that they wouldn't commit adultery. But at this point it's going to come down to what the person themselves believes, about their spouse, about the other person, and about a lot of things, and it's going to be very difficult to decide externally what belief they ought to hold. So as long as they aren't holding an inconsistent set of beliefs (which includes their beliefs about how to form beliefs) they ought to be considered rational for believing either, even if the method they use is more gut feeling than a fully reasoned-out response, which would be inconclusive in this case anyway.
Now, eric goes on to reiterate his definition of “rational”:
A belief based on valid reasoning from a set of well-accepted premises or observations. Since @218.
This sounds a lot like he's saying that the reasoning behind the belief should be valid and sound, which is pretty close to most methods that produce knowledge. Mere belief comes into play when we have evidence for a conclusion but the argument for it is either not valid, not sound, or both. And we can see that the argument for eric's belief in multiverses is neither valid nor sound. It does not follow validly from inflationary theories, because it is possible for inflationary theories to be true and for multiverses to not exist. It's also the case that not all of the premises are well-accepted, even scientifically. So by his own definition, his belief in multiverses is not a rational belief.
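To see the invalidity concretely, here is a brute-force truth-table check; this is a toy propositional rendering of my own that deliberately ignores all the physics:

```python
from itertools import product

# Does "inflation is true" entail "multiverses exist"? Enumerate all
# truth assignments; any row where the premise is true and the
# conclusion is false is a countermodel, which makes the inference
# invalid.
for inflation, multiverses in product([True, False], repeat=2):
    if inflation and not multiverses:
        print(f"countermodel: inflation={inflation}, multiverses={multiverses}")
```

The inflationary theories that work without multiverses are exactly that countermodel in physical form.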
And, interestingly, by his definition conclusions based on cultural beliefs are rational: cultural premises are well-accepted in society by definition, so if someone makes a valid argument using them, the resulting belief is rational. And yet eric's big complaint is that cultural beliefs are not rational.
Now I don’t want to rely on the “well-accepted” line. There has to be room for people to reasonably believe things even though most people don’t agree. That’s the only way we can progress from wrong, but accepted ideas to right ones, from the ideas that everyone knows are true but that aren’t to the ideas that are in fact true. But if this conversation is going to go anywhere, eric needs to be clear and detailed about what he means, and not just quote a context-less dictionary definition and assume that his beliefs meet it and others don’t.
Eric believes that multiverses exist. He does not know it, even by his own definition: it does not follow directly from inflationary theories, and he doesn't have well-accepted premises to justify it. The key point of the whole debate was why his mere belief is better than the theistic mere belief, and he hasn't shown that except by an implied "It's scientific, so better". But that in and of itself needs to be justified, and there is no reason to accept a scientific explanation over a non-scientific one merely because the former is scientific, since scientific beliefs turn out to be wrong all the time. I allow him to be rational in his belief while not accepting that it is the only rational belief to hold. Eric either needs to do the same or actually demonstrate that his belief is the more rational one, and that demonstration is what we've been missing in this debate.
On to sean samis. The debate between us has been more directly over whether a belief is reasonable/acceptable or not. I just want to touch on a couple of issues. From 275:
IMHO, determining whether a belief is “rational” is all about the process and the premises leading to the conclusion upon which the belief is based. The sloppier the process or the less certain the premises, the less certain the conclusion.
To my mind, saying that “belief in X is not rational” MEANS “Your belief in X was not produced by a rational process.”
I think this highlights the conflation that's going on here. He talks a lot about the process being sloppy and the conclusion being less certain because of it, but then simply subs in "rational process" without clarifying whether "not rational" there means sloppy (read: unreliable) or not strictly rational. The conflation is made even clearer by his next statement:
VS, maybe I’m missing something but the difference you are trying to explain seems too much like hair-splitting. What is the difference between “STRICT rationality” and … whatever the alternative is? It seems the difference between someone adding numbers up in their head and someone else showing their work.
Strict rationality means that you're just talking about whether the process relies on reason or not, as outlined above, and not judging from that whether the belief is reasonable to hold or whether you ought to believe it. In short, when talking about strict rationality you accept that there may be valid belief-forming processes that don't rely on reason, like perhaps intuition or emotion, even if you don't think that's actually the case. As I said, unless you do try to make the link from "strictly rational" to what people ought to believe, "rational" in that sense just isn't interesting.
And we talked a bit about proving negatives, and he replies to me in 279:
So, what you ask eric to do; to prove a negative is impossible. Science NEVER disproves explanations except in very narrow situations. Virtually every time, all science can do is say “there’s no evidence that X is true” or “the evidence does not support X”. This is why the burden falls to the proponent of the theory to prove it, or explain how it could still be true in spite of the lack of supporting evidence.
Despite eric accusing me of demanding certainty, sean samis seems to be doing exactly that here: saying that he can't prove a negative because he can't do it with certainty. But that's not what I mean by "prove"; I mean demonstrating it to the level of knowledge. Thus, if there are alternative explanations and you hold that one of them is false, and when asked to justify that you say that you can't prove a negative, it really does sound like you're saying that you can't prove, to the level of knowledge, that your preferred alternative is true. After all, if you can know that one of your alternatives is true, then you can know that the other alternatives are false. If we know, to take one of his examples, that the Earth is spherical, then we know that it isn't flat, or square, or whatever.
The same thing, then, applies to the fine tuning argument. If we discovered that there were multiverses, and that the cosmological constants of the universes within them seem distributed in accordance with the probabilities, then we'd know that the explanation for the cosmological constant doesn't require intelligent agency and that, therefore, it was produced by a random and natural process. This would then mean that we'd know that it wasn't set by an intelligent creator, and so know that it wasn't set by God. This, then, is proving a negative, at least in that part. It doesn't require certainty or anything beyond what science already does as it produces knowledge. To demand more would mean that science doesn't produce knowledge … and no one wants that.
Interactive NPC World …
February 20, 2015

So, I've started playing a number of games in a round robin, which include Mass Effect 2 and Dragon Age: Origins, and one thing that I thought of while playing Mass Effect 2 is the issue of NPCs in the world that you can interact with. Some of them will give you quests or items or other things, while some of them will just give you a little phrase or comment and then you can move on. The issue, of course, is that you usually don't know which is which until you actually interact with them. Which means that if you want to get all of the quests and the like you have to interact with all of the NPCs, many if not most of which just say something and let you move on.
This can get very annoying if you have a lot of NPCs and the ratio of useful to colour NPCs is low. I’ve played games where I stopped interacting with NPCs because it was so annoying separating the NPC wheat from the chaff. But the flip side isn’t much better, as if you only create NPCs when they are useful the world can seem empty and unreal, only populated by quest-vending machines and the like. And filling it with people that you can’t interact with at all — like, say, most MMOs — reduces NPCs to background.
Which might, actually, be the best way to handle it. We don't want players to have to obsessively interact with everyone, and we want populated places to seem populated, so making most NPCs non-interactive solves that, at the expense of, well, making them not seem like actual people. It's nice to have NPCs that are people in at least some sense, but not good if we confuse them with the NPCs that matter to the overall game plot and quests, and so try to interact with them expecting more than colour. If you do go the interactive route, you have to either limit the number of NPCs or annoy the player, who will feel compelled to interact with all of them rather than risk missing out on something interesting. And if there's one thing that players hate, it's missing out on something interesting.
Ultimately, though, this probably is a problem of balance: striking the right balance between the NPCs that you can interact with in light, flavour-only ways and the ones that open up interesting opportunities in the game world. It does enhance a game to be able to talk to NPCs and have them tell jokes or give interesting tidbits about the world; it's just that if you're wading through all of that while trying to make sure you've hit every quest, it will get annoying if there are too many of them. We want to interact with people … but not all the time. Kinda like life, I think.
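If I were to sketch how a designer might encode that balance, it could be as simple as tagging NPCs by interaction tier and giving only the content tier a visible cue, so completionists know exactly who they must talk to while everyone else remains optional colour. This is a hypothetical sketch of my own; the tiers, names, and cue mechanism aren't taken from any of the games mentioned:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Tier(Enum):
    BACKGROUND = auto()  # pure scenery; cannot be engaged at all
    FLAVOUR = auto()     # one line of colour: jokes, world tidbits
    CONTENT = auto()     # quests, items, plot-relevant dialogue

@dataclass
class NPC:
    name: str
    tier: Tier

    def has_cue(self) -> bool:
        # Only content NPCs get the on-screen marker, so players can
        # chat with flavour NPCs when they feel like it without fearing
        # they've missed a quest.
        return self.tier is Tier.CONTENT

crowd = [
    NPC("dock worker", Tier.BACKGROUND),
    NPC("bartender", Tier.FLAVOUR),
    NPC("mysterious stranger", Tier.CONTENT),
]
print([npc.name for npc in crowd if npc.has_cue()])  # ['mysterious stranger']
```

Games that use cues like this trade a little immersion for a lot of player sanity, which seems like the right side of the trade to me.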