Archive for October, 2014

Oh, the horror …

October 31, 2014

So, today being Halloween and all, I thought I should talk about something Halloween-related. I’ve already talked about how Fatal Frame does a female protagonist well, so this seems like a good time to talk about how, in my opinion, it does horror right, as it’s one of the creepiest games I’ve ever played.

The big key for Fatal Frame is its sound. As you walk around the mansion and move from room to room, you get variations in sound, from things that are almost musical, like chimes, to near-silence broken only by your footsteps. This leaves you with an underlying sense of creepiness and unease; you don’t really know why you feel bothered, but you do.

In addition to that, Miku’s default walking speed is not quick. The mansion is small enough that you don’t need to run everywhere; walking will get you from place to place without boring you to death, and the slow pace lets you take everything in and really experience the creepiness of the sound.

Another factor is that while enemies can appear at any time, they aren’t frequent. Since things might suddenly appear and jump out at you, you want to move slowly to make sure that you don’t run into something. But you usually don’t. This ramps up the tension far more than a constant supply of enemies to fight would.

Back to the sound: the enemies also talk and act in ways that highlight how they died. The blind ghost, for example, keeps creepily talking about not being able to see anyone. The ghost with the broken neck screams and falls. In general, the ghosts act in creepy ways, but in ways that you’ll find familiar, because …

… the game delights in giving you the backstory of each of the ghosts, to make them seem not only scary, and not only recognizable, but also tragic and human. While you know that they can kill you — and are trying to do that — you also have to feel somewhat sorry for them, because a lot of the time their fate wasn’t their fault.

And finally, the combat system really works to add to the tension and fear. You don’t have a massive set of weapons or some kind of particle beam/trap combination to bust these ghosts; you have a camera. A camera with a number of special features, but still a camera. This also means that in order to actually fight the ghosts, you have to drop into a very tunnel-visioned, first-person view: the view through the viewfinder. You have to find and shoot the ghost through that narrow window … and you hit “panic” mode when it disappears and you have no idea where it is. This really does add to the tension, not in an “I’m going to die” way but in a “Where is it? Where is it? Where is it? Put the camera down!” kind of way.

Fatal Frame is one of my favourite survival horror games, and is pretty much the only one that I’ve ever finished. And it isn’t a game to play in the dark.

Are Video Games Art?

October 30, 2014

So, observant readers will have noticed that in my last post on gaming I included a new tag, “philosophy of gaming”. Those who have read the blog for a while know that I don’t use tags an awful lot. Putting those two things together, you should have been able to guess that I was indeed going to start doing something that might look like Philosophy of Gaming in later posts … and you’d have been right. So I’ll start at the top: are video games art?

In order to decide this, there’s something you have to do first, something that I think a lot of people on both sides skip: decide what “art” is in the first place. As it turns out, I’ve already done that, at least for myself. And yes, you have to go and read the whole thing to find out what my definition is.

Nah, I’m just kidding. I’ll summarize the key points, but not the justifications, here. The first thing I do is separate the concept of art from the concept of valuable art, to avoid the trap of counting only good art, or anything pleasant, as art. You can have good art and bad art, and we don’t want to limit our definition of art to the works that succeed as opposed to the works that don’t. I also tie the definition of art strongly to the aesthetic. Given that separation of the value of a work of art from its definition, my definition is this: a work is a work of art if its primary purpose — based on the intentions of the creator — is to produce a specific aesthetic experience in us, and it is a good work of art based on the aesthetic experience it actually does create in us. It is important to note that this definition leaves out a number of things that are generally considered art, like Marcel Duchamp’s Fountain, Andy Warhol’s Brillo Box, or John Cage’s 4′33″.

A more serious objection for our purposes here, though, is that this definition would seem to imply that the number of video games that can be considered art is vanishingly small. While a number of video games do indeed try to produce aesthetic experiences in us, they almost always do that as a means to an end, not as an end in itself. They want to make the game look good or sound good in order to make it enjoyable to play, which is their primary purpose. If producing a specific aesthetic experience is not a primary goal of the work, a goal independent of the other goals (though it need not be the only one), then by my definition it’s not art. And by that definition, video games are, generally, not art.

Of course, a stronger counter to that is to say that movies, music, and books aren’t art by that definition, either. And while I don’t have a particular problem with that, in general people do consider those things to be art. So I’m veering quite a bit from what people think it means to be artistic, as was seen when I noted very famous works of art that I don’t consider art. So for the purposes of this discussion we can accept that my definition of art would exclude them, but then point out that the question, for now, is not really whether video games are art in some deep, objective sense, but whether they are art in the same way that, say, movies are. And to examine that, we need to look at why I excluded those works in the first place.

It’s generally the case that these sorts of works are considered art because of the point they make, not the aesthetic experience they produce. They are all noted for being lovely commentaries on art in general, and that’s a big part of their appeal. But if the main intention of the work is to make an academic or philosophical statement about something, then it’s hard to distinguish it from an essay. And an essay about art isn’t generally considered to be art itself. If that’s the case, it seems reasonable to say that these works aren’t art either. We can also see that, in the case of movies, it’s certainly not safe to say that pure documentaries are really art; in general, documentaries get considered artistic because they do artistic things, not because they are art themselves.

At this point, we can start to see a distinction that’s being drawn in these cases. Commenting on something or making an argument straightforwardly is not considered artistic, but works that make a point through the medium generally are considered art. What this suggests is that we can extend the definition slightly by arguing that things that try to make a point by producing a specific experience in us are considered art, and things that simply try to make a point — even if they produce certain experiences in us — don’t count as art. By this reasoning, games like “Gone Home” and “Depression Quest” probably count as art, in the same way as movies and those other works I mentioned earlier do.

Note that I still think my own definition is preferable, and so am not changing it based on this analysis. However, since most people have a looser definition, I think this one will work for discussion. So, video games that count as art have as one of their primary purposes either to produce a specific aesthetic experience in us, or to make a point by producing specific experiences in us (I dropped “aesthetic” from the second clause to avoid issues with defining what that is). Video games, then, can be art by this common definition, and now we can move on to the implications of that and to other issues around video games.

I think I’m in love …

October 30, 2014

So, I ordered Persona 4 Arena Ultimax, and while waiting for it to arrive I decided to read a review of it. Well, it’s a Persona game; I was going to buy it no matter what anyway. At any rate, I read this review and came across this part:

Just be warned: if you’re just here for the fighting, story mode will test your patience with seemingly interminable stretches without any actual gameplay. It’s more like a graphic novel with a few fights sprinkled in. There’s even an option to have the computer do the fighting for you, so people who just want to experience the story can do so.

Since I played the original Persona 4 Arena on Easy, mostly by hitting the one “combo” button over and over and over and over, and gave up on a BlazBlue game and a Mortal Kombat game because I hit a battle I couldn’t win … this sounds like a feature they added just for me! And, I guess, for a lot of other people like me. I’m not alone!

So, yeah, totally worth buying. I think I’m in love.

Philipse on the Grand Strategy of Natural Theology

October 29, 2014

Moving on from what we can or can’t say about the beliefs of the everyday theist, Philipse in Chapter 6 describes what a natural theology will have to have to be credible … and implies that most of them, except perhaps Swinburne’s, don’t have it. He talks about three levels of generality: a) domain-specific, b) not domain-specific but not universal (e.g. statistics), and c) universal, meaning that it applies to all domains or all attempts to gain knowledge. He then moves on to what he considers the main dilemma for natural theology: it needs an a), a domain-specific set of methods that is justified in some way. But natural theologians don’t want to stick too close to science and other forms of scholarly work, because applying those methods to theology hasn’t worked out well for theology. However, if they don’t use those sorts of intellectually respectable methods, how will they be able to demonstrate that their own methods are intellectually respectable?

The big problem is that Philipse seems to place philosophy squarely in c) and doesn’t really allow for its methods to produce a)-level methods … and, in doing so, ends up limiting the intellectually respectable methods to empirical and broadly scientific ones. For the most part, he asks that natural theologians use methods similar to those found in the sciences or in history, but not ones found in epistemology or ethics or even philosophy of mind, where empirical data is important but generally doesn’t settle anything. As such, his demand ends up being that natural theologians either justify things scientifically or invent methods that aren’t respectable, which is a false dichotomy. It also reflects neither the views of many theologians nor the state of the field as it is.

Classical theists, for example, have a full theology that cannot be evaluated empirically, or with the methods of history. But it can be evaluated with the methods of philosophy of religion, and of philosophy in general. And, in fact, it has been so evaluated, for many, many, many, many years. For the most part, every religion that is strongly focused on theology has a method for looking at things, and those methods can be evaluated and justified or challenged by the philosophical field of philosophy of religion, just as philosophy of science can do for science. Thus, a theology whose a) methodology isn’t one of the commonly used ones validates it through philosophy of religion, which has been more than willing to do that for quite some time. So you have to get down to the specifics of each theology, and not just hope for something that applies to all of them.

Thus, here, Philipse ends up selecting his preferred methodologies and demanding that natural theology follow them, or else be considered not intellectually respectable. But that methodology is broadly and strongly empirical and probabilistic … and most theologies don’t accept that methodology. For good reason. Classical theists have their conceptual argument, and demonstrate the consequences of that conceptual argument, and see no need and no ability to do empirical examinations of the matter. And it does seem hard to demonstrate that an all-knowing, all-powerful creator being exists by looking really, really hard for one. But this isn’t unique to theology; these sorts of debates over what the right methodology is are common in ethics, epistemology, and philosophy of mind. Even given the rise of neuroscience, there is still much debate over whether it actually gets at mind or instead just studies the brain, and there are many good arguments that we need more than that to really get at mind. Ultimately, part of doing the philosophical work in a field is determining what your a) ought to be, and justifying that. Philipse gives a number of examples of how to validate an a) methodology, but there are more ways than those, and more than the empirical.

I cannot escape the conclusion here that Philipse finds Swinburne’s approach the most promising because it already uses methodologies that he considers respectable. But someone who, like me, distrusts Bayesian analyses and probabilistic justifications of beliefs is not going to feel the same way, and so his attempt to establish that something like that is needed and that Swinburne’s approach is the best one falls a bit flat.

Thus ends the first part. It’s a pretty meaty part, wading into views and comments that are heavy and often somewhat obscure. There are definitely points in there, even points that I have criticized, that I would need to read again and gather more information on to properly understand, express, and criticize. That being said, there are fundamental disconnects between my epistemological views and Philipse’s that cannot be resolved by more understanding of what we each mean, and because of that I find Philipse’s demands a little, well, overly demanding. I don’t see why I need to have the justifications, and the sorts of justifications, that he demands in order to have a rational belief, or even to rationally believe that my belief is rational. Philipse, it seems to me, falls into the common trap of insisting that in order for me to be rational in believing that X, I have to be able to show that it is also rational for him to believe that X, which is a claim that I strongly deny. Indeed, his rational5 seems to encapsulate that very idea, and that was what he wanted to establish. I don’t think he did. But, at any rate, we’ll move on to the second part, where he talks about theism as a theory.

Philipse on the Rationality of Beliefs

October 28, 2014

I have to admit that I found Chapter 5, where Philipse tries to outline what it would take for a belief in God to be rational, very, very confusing. I did a lot of epistemology in my day, but a lot of the distinctions he draws were a bit foreign to me, and to really understand them I’d probably have to go back and read the chapter again, and read more on the subject. That being said, one of my confusions wasn’t really due to that at all, and it made me realize a major issue that’s been running through the entire debate up until now.

Philipse draws a distinction between internalist and externalist accounts of, I guess, justification. But he characterizes the externalist view as reliabilism — the idea that you are justified in believing that X if your belief that X was produced by a reliable truth-forming faculty — and contrasts it with the internalist view that justifies belief on the basis of reasons. Now, since Philipse takes the internalist view as the most reasonable, and I’m an avowed reliabilist, I wasn’t exactly going to find his view credible. But the most puzzling thing about it is that it seems to be rather self-defeating. How would we know that appealing to reasons produces true beliefs? If appealing to reasons, in any manner, is a process, it has to be demonstrated to be a process that produces true beliefs, and in that sense it has to be justified by reliabilism. If one rejects a reliabilist justification there, it sounds like a claim that the internalist account justifies by reasons but sees no need to determine whether that process of finding reasons actually produces true beliefs the majority of the time … which hardly seems like justification at all. So it doesn’t seem, to me, like you can actually divide reliabilism from justification by reasons the way Philipse wants to.

But this made me realize an underlying issue with the entire exercise: there are two questions to be asked here. The first is “Is a person’s religious belief rational?”. The second is “Does that person reasonably believe that their religious belief is rational?”. I don’t want to claim outright that Philipse thinks these are the same question, although there are a number of indications — including the internalist/externalist distinctions that he makes in this chapter — that he does. At the least, he seems to think that the first question is meaningless if the person can’t answer the second. But this gets into first- and second-order knowledge.

The idea is basically this: if I have a belief that is a justified true belief, then I know that proposition is true. So, in terms of first-order knowledge, the statement “I know that X” is true; I really do know that X is true. Now, the second-order knowledge statement is “I know that I know that X”, which would mean that I have a justified true belief that my belief that X is a justified true belief. Now, in this case it seems quite clear that I could merely believe that I know that X, and so have first-order knowledge but not have second-order knowledge. But this wouldn’t mean that I would no longer have that first-order knowledge, or even that my belief that I have first-order knowledge was irrational. It seems, then, that I could know that X without being able to justify, at least to the level of knowledge, that I really do know that.

Now Philipse, I imagine, will reply that what I’m saying is the externalist view, and he thinks the externalist view isn’t a good one. He’d assert, I think, that you can’t credibly claim to know something unless you can justify that you know it, but as seen above that amounts to at least the claim that you have to have a justified belief that you know, or are rational in believing that you know, or rationally believe, that X. The problem with this, though, is that you start getting into third and fourth and higher orders of knowledge. Sticking with knowledge for a moment: if in order to know that X I have to know that I know that X, then in order to know that I know that X I have to know that I know that I know that X, and so on … well, you should be getting the idea by now. So insisting that one must know that they know something — ie be able to justify it to that level — before being rational in making that claim simply isn’t workable; it is not possible for us to parse out all the orders of knowledge that we’d need, and if Philipse decides to arbitrarily stop at second-order knowledge then we can ask why we shouldn’t just stop at first-order knowledge and call it a day.
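To make the regress explicit, here’s a minimal schematic sketch in standard epistemic-logic notation (my own illustration, not a formalism Philipse himself uses), where \(K\varphi\) abbreviates “I know that \(\varphi\)”:

\[ K\varphi \ \text{requires}\ K^{2}\varphi, \quad K^{2}\varphi \ \text{requires}\ K^{3}\varphi, \quad \ldots, \quad K^{n}\varphi \ \text{requires}\ K^{n+1}\varphi \ \text{for every } n. \]

If each order of knowledge is a precondition for the one below it, then knowing anything at all means discharging infinitely many conditions, which no finite knower can do … and any finite stopping point, the second order included, is as arbitrary as any other.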

It seems to me here that Philipse’s main concern isn’t so much about justifying the belief to yourself as about justifying it to others. In short, on his view, in order to consider a belief in God rational, the person has to be able to answer the question “Why is it that you think your belief in God is rational?”. But that’s a question I get from other people, not from myself. Thus, at this point, Philipse’s argument seems to work more as a demand from those who don’t believe in God that believers demonstrate that their belief is rational, and thus justify the rationality of their belief to others. Sure, Philipse can recast it as the claim that you can’t consider a belief rational unless you yourself know why it is rational, but that is certainly not an uncontroversial claim, for a number of reasons.

First, we can return to Plantinga. If Plantinga says that someone’s belief in God is rational because they have a properly functioning sensus divinitatis, and that therefore their belief was produced by a reliable truth-forming faculty, then it is just true that they know that God exists … even if they can’t prove it. And thus any evidence from people who come to different conclusions is meaningless; one is right, one is wrong, but we know not which. If someone accepts this, then any demand Philipse would make for them to justify their belief is nothing more than a demand that they prove to someone else that their belief is true or justified … and under the Reformed Objection that’s not acceptable. Their belief is factual or it is not, and they don’t know which yet. But it is the belief they have, and that someone else doesn’t share it is no reason for them to reject it.

Second, we can take someone who accepts the Web of Belief. The answer is almost the same: this is the belief I have. It fits into my Web of Belief without contradictions, and when I act on it no new contradictions appear. So why should I consider it irrational just because I can’t point to some kind of sufficient justification for its rationality? Being in the Web and causing no contradictions is enough to get basic rationality. The mere fact that someone demands a justification I can’t provide doesn’t make the belief irrational. At best, it demonstrates that I don’t know it to be true.

So, in both cases, I think the reply to Philipse is that unless he can demonstrate that the belief is irrational, it’s fine for me to consider it rational as long as it passes some basic tests for rationality, and that doesn’t mean having Philipse evaluate the purported reasons against some standard that he himself sets. He can demand sufficient reasons from natural theology, because natural theology, and theology in general, should be about providing those sorts of reasons. But simple, everyday belief need not be, as Plantinga and the Web of Belief demonstrate.

Is It Right to Make a Robin?

October 27, 2014

The second essay in “Batman and Philosophy” is “Is It Right to Make a Robin?”, by James DiGiovanna. This, shockingly, attempts to answer the question of whether it is right for Batman to take on his young wards and turn them into his partners in crime-fighting. He talks about both Dick Grayson and Jason Todd, but focuses on Jason Todd, because that’s the example that best fits his case: Batman taking a kid and turning him into a Robin, rather than a kid in the same situation as Batman himself coming along and essentially volunteering to take on the role.

He focuses on the morality of Batman training a Robin, and evaluates it from the perspective of the three main moral systems: deontology, Utilitarianism, and Virtue Ethics. His conclusion:

Batman is a lousy deontologist, a decent consequentialist, and, most assuredly, some kind of a virtue ethicist.

I think that he somewhat misrepresents the deontological position (which he bases on Kant) and the Virtue Ethicist position to come to this conclusion, and so my discussion here will focus on that.

In evaluating Batman’s actions in training Robin against Kant, he relies strongly on the Categorical Imperative: one cannot assert that a maxim is right unless it can be made into a universal law. Now, he seems to make the common mistake of assuming that this means you’d have to like the world that the universalized maxim produces, but the test is in fact aimed at consistency: can you make it a universal law without its being self-defeating? So when Kant argues that one cannot universalize a maxim to lie, he doesn’t mean that if that were the case we’d have an undesirable world, but that you can’t do it without defeating any possible purpose for having that rule: if everyone held a maxim to lie, and more importantly everyone knew that was the maxim people were following, no one would believe the lies … and the purpose of lying is to say something untrue as if it were true and have people end up believing it.

Given this, we’d have to ask if it is inconsistent for Batman to take on a Robin. And it doesn’t really seem to be the case, because a rule of “If someone willing to take on the role of your crime-fighting partner asks to do so, take them on” doesn’t become self-defeating when made a universal maxim … but amending the last part to “don’t take them on” doesn’t either. The bare consistency check, then, doesn’t really tell us much about what is or isn’t moral. In fact, DiGiovanna has to import a lot of additional moral maxims to try to demonstrate that Batman should not put a child in harm’s way, and so was wrong to train Jason Todd and Dick Grayson, at least.

So, if the consistency check doesn’t do much for us, what does? Well, Kant also advocates his equally famous formula: one should always treat others not merely as means, but as ends in themselves. So, if we apply this principle, what do we come up with? Well, we’d have to note that Batman isn’t actually soliciting partners. Canonically, Batman always wants to work alone, and has to be pushed into accepting a partner, from Dick Grayson to Barbara Gordon to Terry McGinnis in Batman Beyond. If Batman were finding and training orphans in order to further his cause, regardless of what that meant for them, then he definitely would be acting wrongly by Kant’s lights. But he isn’t. In general, they push their way into being his partner, usually by making it clear that they will do it anyway even if Batman refuses to let them come along. If we treat them as ends in themselves, and so in some sense as able to make their own decisions, then if they can make the decision to join Batman it’s not necessarily wrong of him to let them. But he wouldn’t be obligated to do so, because that would force him to be a means to their ends: their thirst for justice or revenge, their desire to be a hero, even their desire to help others.

So, as a deontologist, Batman is definitely allowed to take on Robins as long as he doesn’t take them on merely as a means to his end, but considers them as ends in themselves. And since he generally doesn’t want partners, it seems that he doesn’t use them as a means to help him fight crime, but instead sees something in them that makes their desire to work with him a credible decision for them. So, no, Batman would not be a lousy deontologist.

The issue with DiGiovanna’s analysis of Virtue Ethics is that he moves from the fact that Virtue Ethics is about developing the proper moral character to the idea that a person is therefore obligated to develop the moral character of everyone else. This, however, isn’t required by most forms of Virtue Ethics that I’m aware of. The Stoics, for example, claim that you are responsible only for your own actions and therefore, by implication, your own character. Unless there’s a virtue that demands that you develop the character of others, you can’t be obligated to do so. The same seems to apply to Aristotle. So it’s a fairly weak argument to say that Batman is responsible for the character development of his Robins, for good or for ill.

The closest you can get is to import the idea of the exemplar from Aristotle, and argue that Batman needs to act as an exemplar for them if they are to emulate him, and in that sense is obligated to bring them into the crime-fighting business to facilitate that. The issue with this is that Batman may not be a good exemplar for anyone, and the criticism of his handling of Jason Todd makes it clear that he wasn’t the right sort of exemplar for Todd. While there is a lot that the Robins and Batgirls can learn from Batman in order to develop a properly virtuous character, simply emulating him is not likely to lead to a good outcome, as most of them learn. So Batman doesn’t really work as an exemplar, and probably shouldn’t try to be one.

Given the right circumstances, all three moral codes can support Batman training Robins. Utilitarians can argue that under some circumstances it allows Batman to save more people … though the cost to the various Robins may be too high to justify it. If there is a virtue that demands developing the character of others, Batman may be obligated to develop the character of the Robins … but he has to consider the possibility that he is the wrong person to do that. Finally, Batman may have an obligation to respect the Robins’ chosen ends of becoming crime fighters … as long as that does not reduce him to a means to their achieving those ends.

Direction …

October 26, 2014

So, I was on a training course this past week. I really wanted to take it because it was an in-class course, where someone talks about the material and shows you what’s going on, and you have labs to learn by doing. I wanted this because when I’ve done online training instead, I found it boring and didn’t seem to learn as much; I tended to fly through it to get to the next section. And so in the class itself … I found myself rushing ahead and playing with things I wanted to play with, and being bored in the lecture sections because I couldn’t really skip ahead to the next part, at least not credibly.

This, then, creates a bit of a puzzle. I definitely wanted an in-class course, and did find it useful and preferable to the online version. But I didn’t really like the pacing, and found that I would have preferred a faster pace than what we had. Which led me to this conclusion: what I really wanted was a course where I could read about what things are and what to do, jump into the labs to try it, and then move on to the next section, with someone to ask if things didn’t work the way I expected. The only downside would be missing some of the things the instructor can tell you that help you understand how it all fits together, but generally a short lecture, being told to go and do the exercises, and then another short lecture would have worked well for me. And I figured out why: I’m very good at self-direction, but not very good at self-motivation.

This, of course, is nothing really new, except for the self-direction part. I’ve always known that I need deadlines in order to do things. I’m an odd sort of procrastinator; I don’t leave things to the last minute, but to the next-to-last minute. Essentially, I’m the sort of person who, if I think I will need two weeks to write an essay, will start it at least three weeks before the deadline … just in case I’m wrong. Without some concrete reason to do something, however, I tend to put it off and put it off and never do it. This is why I can do all the massive readings for a course and yet haven’t gotten through the academic readings that have been sitting around for months if not years. Without a concrete, solid reason to do things, I let them slide. Even the recent blog run provides an example: because I want to have a post scheduled for every day, when I feel pressure to get a post out I will often write one in the evening to make sure I have it, even when I have more time — I don’t think there’s been one case where I wrote a post for that day on that day — because I don’t want to cut it too close. I found myself planning posts into the weekend, knowing that if I could get that far then over the weekend I could probably write enough posts to get me into, if not through, the next week.

I think that there are a number of traits that contribute to this behaviour for me:

1) I’m not particularly ambitious. I like a simple life, and enjoy relatively simple things. I’ve commented in the past that I’m not materialistic or anti-materialistic, but instead non-materialistic; I generally buy what I want, but don’t want much. My hobbies are simple: I like to read, watch sports on TV, watch DVDs, walk, play video and board games, and think about stuff. As long as things are reasonably together, working, and reasonably clean, and I’m reasonably well-fed and feeling fine, that’s pretty much all I want out of life. So there’s not much incentive for me to do more than that. Even the blog was started because I noticed that when I wrote about something I stopped thinking about it so much, so it let me put discussions behind me and move on to something new.

2) As stated in the linked post, I’m not someone who’s really interested in process, but instead in the end goal. This means that I justify doing something on the basis of what the end product will be, not on how enjoyable the process will be. If the process is too onerous for the end goal I’d get from it, then I won’t do it. The end goal can be positive (i.e. I get something at the end of it) or negative (i.e. I avoid something that I want to avoid), but it’s the end goal that I want, not the process. The end goal has to be worth the effort of the process, and if it isn’t, or doesn’t seem to be compared to what else I could be doing, I won’t do it.

3) However, I’m also someone who tends to be very dutiful and committed once I commit to something. At work, I put in the hours not because I love my job. I like my job, but there are always things I’d rather be doing than work, even if it’s just programming for myself. But I have a job, and I’ve committed to that job, and they pay me to commit to that job, so I do it. Thus, my production doesn’t really diminish much if I don’t like what I’m doing at the moment; in fact, it might even increase, just to get me through that spate of dull work and into something less onerous. Because of this, once a commitment is made — even if it’s mostly “on paper” — I carry through. Unless I can convince myself that it’s only a “paper commitment”, at which point I don’t.

This combination explains the behaviour I’ve described. I need something to commit to in order to do something, or else I’ll just let it slide and take up my simple pleasures. But once committed, I feel duty-bound to see it through, no matter how painful the process is. And I judge whether or not to commit to something by what the end goal is. A job is always a worthy end, and other things can be, depending on circumstances. As long as the process doesn’t end up overwhelming the end goal despite my estimation, I will complete it. That’s also why momentum is important for me; once I start, I can keep going, but as soon as I stop, that’s essentially conceding that the end goal isn’t worth it when there are other things to do, and so I don’t start again.

The things you learn on training [grin].

The future of gaming is … what?

October 25, 2014

So, if we look at the standard rhetoric from those who are either criticizing gaming or promoting diversity within it — depending on your view — one of the most common themes — which was pretty much the entirety of Leigh Alexander’s commentary — is that the traditional form of gaming is over and done with (and good riddance), and that those who argue against this are just people stuck in the old way of viewing gaming, afraid of the brave new future of gaming that we’re entering. The problem I’ve always had with these comments is that they’re often very, very light on what that future is actually supposed to be. What will games look like under their vision? What are games turning into? If I, as a Not-So-Casual Gamer, am to look to this future and decide if that’s the sort of gaming world I want to be in, I really need to know what that future is. And right now, I don’t.

Let me try to tease out some ideas of what it might be and examine them. Since the big push is on diversity, let’s start there: not with diversity of characters (yet), but with diversity of games. One of the themes has been about getting more games beyond the standard FPS or whatever, appealing to games like Depression Quest, Gone Home, The Stanley Parable, Papers, Please and so on as examples of games we need more of. So, let’s start from the claim that the future of gaming will give games like this room to be made and to shine. If that’s the case, my immediate reply is … welcome to the future! All of those games were made, and got attention from the mainstream gaming press (even I’ve heard of them, and know a lot about what they’re about). Sure, you generally won’t find them in your friendly neighbourhood video game store, or in Walmart, but digital distribution is cheaper anyway for these small-market, small-company games, and as it expands, finding games like these on places like Steam will keep them accessible. Sure, they’ve received criticism for not being games, or for not being good games, but that sort of criticism is always going to exist (and I’ll get into why the critics may have a point a little later). But, hey, if you want gaming as a whole to be open to these games, you’ve got it. And those who criticize these games only spread the word about them, allowing people who might find that sort of game or gameplay appealing to look at what the critics complain about and say “You know, that sounds cool to me”. So you’ve got it, and it’s only going to get better.

(Note: don’t bring up the harassment. The harassment, in my view, is associated more with feminism/social justice than with the games themselves).

But maybe that isn’t what the future is supposed to be. Maybe the future is supposed to be a world where these sorts of games are dominant, or at least on par in the market with simple entertainment-oriented games. This would be more an argument that games should be art than an argument that you can indeed have games that are art. The problem is that this is almost certainly never going to be the case, since games are primarily an entertainment medium, like movies, books, television, etc. So while I do think that you will find — and are already finding — solely or predominantly artistic games, at the end of the day these games won’t be dominant. Why? Because they aren’t actually a lot of fun to play, just like artistic movies aren’t a lot of fun to watch. And, in general, things that try to make a point aren’t maximally entertaining, because they always put making their point ahead of being entertaining. This doesn’t mean that they have to be dull or boring, or that entertainment can’t make a point, but it’s all about focus: if you have to choose between getting your point across clearly and making your point in an entertaining way, then if you’re trying to make a point you’ll generally choose “make the point clearly”, and if you’re trying to entertain you’ll choose “be entertaining”. Trying to do both, as Miles O’Brien’s mother said about eating and talking, means that you won’t do either one as well as you could have. And the mainstream of gaming has always been about entertainment, just as the mainstream of movies has been. Artistic games are always going to be a side genre: valuable, but not something the average game player is going to seek out.

Now, a counter here would be that, with movies at least, people say that artistic movies are still movies. For some of the more avant-garde or experimental ones, even that isn’t true. But that sort of criticism has to be expected. After all, there are major philosophical debates over what makes art art, and over whether certain forms of art really count. We’ve only just started considering whether games are or even can be art, even in the same way as movies and books and … well, you get the idea. There are no settled criteria for what makes a game a game, and debates over the matter tend to get bogged down in definitions that leave out many things that everyone thinks of as games. Perhaps we need a Philosophy of Video Games to dig down into this and figure it all out, or at least put the discussions on solid academic ground. Or perhaps not. But we need to work this out, and insisting that, at the end of the day, the future will fit your view is at best premature. Maybe we’ll discover that video games can’t be art, for some reason. Maybe all games will be art, even the shooter that has nothing more to it than shooting. Who knows?

At any rate, these visions of the future are rather blurred and hazy (kinda like what I see when I take off my glasses). Do we have specifics of the sorts of games people want to see more of? Well, let me look at Anita Sarkeesian, because she has a couple, although how much she wants these to be the way games are done and how much they are just games she’d like to see once in a while is debatable. From Damsels in Distress Part 3:

A true subversion of the trope would need to star the damsel as the main playable character. It would have to be her story. Sadly, there are very few games that really explore this idea. So as a way to illustrate how a deconstruction could work let’s try a thought experiment to see if we can create a hypothetical game concept of our own.

[Clip: The Legend of the Last Princess mini animation]

“Like many fairy tales, this story begins once upon a time with the kidnapping of a princess. She dutifully waits for a handsome hero to arrive and rescue her. Eventually, however, she grows tired of the damseling and decides it’s high time to save herself. Of course if she’s going to be the protagonist of this particular adventure she’s going to need to acquire a slightly more practical outfit. After her daring escape, she navigates the forbidden forest, leveling up her skills along the way. Upon reaching her kingdom, she discovers the inevitable yet unexpected plot twist: the royal council has usurped power and was responsible for her kidnapping. Branded a traitor and an outlaw in her own land, she unlocks new disguises and stealth abilities to infiltrate the city walls. She makes her way through the final castle to confront the villainous council, and abolish the monarchy forever.”

A story idea like this one would work to actively subvert traditional narrative expectations. The princess is placed in a perilous situation but instead of being made into the goal for a male protagonist, she uses her intelligence, creativity, wit and strength to engineer her own escape and then become the star of her own adventure.

Now, immediately thereafter Sarkeesian says that not all games have to star this kind of hyper-individualistic woman … but she never says whether she wants this kind of game to be common or not. And I’d say that this kind of game might be interesting, and would be worth pursuing … but it would always be a minority of games. The reason is that the “start imprisoned and escape to start the plot” sort of game is somewhat limited in what you can do; there are only so many ways you can let the protagonist escape without making the villains look stupid. Starting the protagonist out in the world allows for far more options, so in general you’d pull this idea out when it really adds to the story you’re telling, and it’s been done well in a number of cases (see, for example, Baldur’s Gate 2). The specifics that would apply to a female protagonist, and those subversions, would wear out really, really quickly; they only work when they’re unexpected, and if it became the norm that, say, a female protagonist changes into a more practical outfit, it’d be reduced to being like donning armour, which would lessen its effect. So, do I think that the future of gaming will have room for games like this? Yes. Do I think they’ll be common? No.

Let’s move on to the next one, from Women as Background Decoration Part 2:

There is a clear difference between replicating something and critiquing it. It’s not enough to simply present misery as miserable and exploitation as exploitative. Reproduction is not, in and of itself, a critical commentary. A critique must actually center on characters exploring, challenging, changing or struggling with oppressive social systems.

But the game stories we’ve been discussing in this episode do not center or focus on women’s struggles, women’s perseverance or women’s survival in the face of oppression. Nor are these narratives seriously interested in any sort of critical analysis or exploration of the emotional ramifications of violence against women on either a cultural or an interpersonal level.

The truth is that these games do not expose some kind of “gritty reality” of women’s lives or sexual trauma, but instead sanitise violence against women and make it comfortably consumable.

Now, to be clear, I’m certainly not saying stories seriously examining the issues surrounding domestic or sexual violence are off limits for interactive media – however if game makers do attempt to address these themes, they need to approach the topic with the subtlety, gravity and respect that the subject deserves.

She then goes on to talk about Papo & Yo:

Though not about the abuse of women, the 2012 indie title Papo & Yo is an example of a game that respectfully deals with the very serious issue of alcoholism and domestic violence against children.

The game does so by telling its story from the point of view of a protagonist directly affected by the trauma of abuse, not someone on the outside coming to their rescue. It focuses on the journey of a figure who is struggling through a traumatic situation and attempting to deal with the repercussions of violence. It makes that struggle to cope and survive central to both the narrative and gameplay – not peripheral set dressing to a story about something else. And critically, the game employs powerful metaphoric imagery to make its point instead of relying solely on sensationalized or exploitative depictions of the abuse itself.

Papo & Yo is an intense and at times gut-wrenching game that doesn’t sugarcoat or glamorize violence. In this way it’s an honest and emotionally resonant experience for players.

The key here is that Sarkeesian seems to be pushing games as a sort of commentary or critique or expression of values. In fact, she says that in the next paragraph:

We must remember that games don’t just entertain. Intentional or not, they always express a set of values, and present us with concepts of normalcy.

Taken with the first paragraph, her view seems to be that games don’t just (or ought not just) reflect, reproduce, or represent societies and societal attitudes. They must advocate for values — and, presumably, proper values — and critique the existing societal structure and attitudes. And my reply is that games can do that, but they don’t have to. Games can reflect the common societal views in an uncritical way, as nothing more than a framing device for people to simply have some fun and maybe pick up some interesting perspectives along the way: not a challenge to the dominant views, but a supplement to them. A bit like talking to someone from a completely different part of the world; you learn about their culture and how it works without it feeling like a challenge to your culture, and without trying to challenge theirs.

So, again, there’s room for these sorts of games, but that doesn’t mean that they ought to become the norm … and they probably won’t. Because after a hard day at work when I want to play something just to have some fun, the last thing I want is for the game to be constantly trying to challenge my view of the world, whether I agree with it or not.

Finally, let’s look at diversity in games. How is that future going to look? Well, I don’t really know. Game culture has already started looking at and talking about these things, and things have changed. I don’t feel that the sort of diversity pushed by most diversity advocates is going to be successful, because its intense focus on the negative simply encourages tokenism, rather than putting diversity in where it makes sense and telling stories where that diversity is a required element, where the story can’t really be told any other way. While I criticized the criticism of Assassin’s Creed: Unity for not allowing people to play as a female avatar, I did agree with and appreciate the commentaries that pointed out that in that time period they had an amazing opportunity that they squandered by not going with a female protagonist. In order to get that sort of diversity, more of that sort of thing has to be done: point out opportunities and let game designers smack their heads and exclaim “I could have had a V8!”.

But I don’t want to go any further on that for now, because that would be getting into my view of how the future of games should go, and that’s not what this post is about. And I still don’t have an answer to what this inevitably coming future is supposed to be. Maybe those who are pushing for it could take some time out from ranting about gamers to outline it. At the end of the day, the response from most gamers might well be “Oh, that’s what we want, too”.

The Interacting Game …

October 24, 2014

I was musing over the new issues with video games, and thinking about the previous ones, and one thing jumped out at me: while the critics tended to talk about issues that pretty much all forms of media have, they also tended to claim that it was worse with video games. And when they weren’t talking about games corrupting the youth, they tended to focus on one particular facet: interactivity. Which Anita Sarkeesian talked a lot about in her video on women as background decoration:

…but since video games are an interactive medium, players are allowed to move beyond the traditional role of voyeur or spectator. Because of its essential interactive nature, gaming occupies a unique and potentially more detrimental position vis-a-vis the portrayal and treatment of female characters.

A viewer of non-interactive media is restricted to gazing at what the media makers want them to see. Similar to what we might see in video game cutscenes, the audience is only afforded one fixed perspective. But since we’re talking about interactive gameplay within a three-dimensional environment, we need to consider the fact that players are encouraged to participate directly in the objectification of women through control of the player character, and by extension control of the game camera. In other words, games move the viewer from the position of spectator to that of participant in the media experience.

On a very basic level, we can think of non-interactive media as engaging audiences in forms of “passive looking”, while video games provide players the chance to partake in forms of “active looking” or “active observing”.

These active viewing mechanics encourage players to collaborate with developers in sexual objectification by enabling gamers to scope out and spy on non-playable sex objects.

This is especially sad because interactive media has the potential to be a perfect medium to genuinely explore sex and sexuality.

I should note that this kind of misogynistic behavior isn’t always mandatory; often it’s player-directed, but it is always implicitly encouraged.

In order to understand how this works, let’s take a moment to examine how video game systems operate as playgrounds for player engagement. Games ask us to play with them. Now that may seem obvious, but bear with me. Game developers set up a series of rules and then within those rules we are invited to test the mechanics to see what we can do, and what we can’t do. We are encouraged to experiment with how the system will react or respond to our inputs and discover which of our actions are permitted and which are not. The play comes from figuring out the boundaries and possibilities within the gamespace.

So in many of the titles we’ve been discussing, the game makers have set up a series of possible scenarios involving vulnerable, eroticized female characters. Players are then invited to explore and exploit those situations during their play-through.

So whereas in traditional media, viewers might see representations of women being used or exploited, gaming offers players the unique opportunity to use or exploit female bodies themselves. This forces gamers to become complicit with developers in making sexual objectification a participatory activity.

While these quotes come from many different places in the video, the main thrust is essentially this: the player isn’t just watching the violence or sexualization, but is actually doing it. And this supposedly makes the harm worse, and has more of an effect on the player. Which is pretty much the same sort of argument that people made about violence: you aren’t just watching it, you’re doing it … and that’s much, much worse.

And yet, in all of the various scares over plain violence … that doesn’t seem to be the case. No one has been able to make the case that participating in these actions is worse, or has more of an impact on a person, than watching them. And Sarkeesian doesn’t provide any evidence of that either; all of the studies she cites are about observing, not participating in, the actions. So, at the least, we’re going to need some evidence that participating really is worse. And while it may seem like common sense or just obvious, no one has been able to actually establish it yet, at least in a way that’s convinced anyone. And if you fail to prove what seems to be common sense, maybe that common sense isn’t sense at all. In fact, I think there might be reason to believe that participating in these things is less likely to impact the person than watching them does.

There are three main ways you can participate in a game:

1) As yourself, in that world.
2) As the PC, in that world (i.e. playing the role of that specific, fleshed-out character who is not you).
3) As yourself, playing a game.

In the more immersive games — which are the ones that should be the worst if interactivity is really a problem — you’re going to be playing as 1) or 2). Let’s start with 1). As you play the game, you make the choices and do the things that you, yourself, would do, and so all of your choices reflect who you actually are. Thus, if the game gives you an optional choice to, say, enslave someone, doing it reflects what you, as a person, would do … and if that disgusts you, then you won’t do it. That’s assuming that it’s a free choice, and that the game isn’t forcing you to make it. If the game forces a choice on you that you wouldn’t make, it breaks immersion in the same way a “But Thou Must!” does: you are being forced to do something that you think is a really bad move. The only exception is when the story is structured so that it’s a genuinely difficult choice. For example, you’re forced to kill a kitten, or an entire city will be killed. If you choose to kill the kitten, that’s a choice you’d make … but you’re making it to save an entire city. These sorts of dilemmas are actually good things, things we want to see more of in games.

So, in 1), if you are the sort of person who wants to murder random civilians or rape and objectify women, then you will do so in the game … but, then, you were already the sort of person who wants to do that, so it can’t have much impact on you.

In 2), you take on the role of the PC, which may be one that the game defines for you or one that you define yourself for the game. For example, I tend to play as Corran Horn in most Star Wars RPGs. Here, you take the actions that that character would take, even if that isn’t what you’d do yourself. As Corran Horn, I played a lot more confrontationally than I would myself. Playing the original Knights of the Old Republic as Corwin from the Amber series, I turned at the end to gain power for myself, which I wouldn’t have done playing as me (why in the world would I want that power?). And when I played as a Sith woman … well, that was nothing like me [grin].

So, in these cases, you play as the character, not as yourself. And so when you participate in murder or objectification or whatever, you again take the actions that the character would take, not the ones you would take. Having the option to play as a completely brutal thug, or a complete degenerate, is something these sorts of players desire not because they want to be that way themselves (usually) but because it can be fun to take on another role for a while and not be yourself. And note that if you want more female protagonists in games, you have to accept that this playstyle is not only possible but common, or else male players will not play as female protagonists … at least in any game where being immersed in the game is desirable.

So here, since most people learn quite quickly to tell fantasy from reality, the actions you take in the game have relatively little impact on you, because you aren’t playing as you, but as someone else. Seducing Carth Onasi has no chance of making me attracted to men, because it’s not me doing it, but that female character. So here, again, it doesn’t seem like the game can have much impact on you.

So, we turn to 3). These are the least immersive types of games, because you play them like a game: you calculate your moves not based on what you want to do, or on what the character would do, but on what gives you the most points or gets you through the game most efficiently. Take the example from the Grand Theft Auto series where you can pick up a prostitute to recharge your health and then kill her to get your money back: in this mode, the player treats that as a way to recharge health for free. It doesn’t matter whether it’s a prostitute or a glowing life drink. You’re doing it to game the system, and so in this case you really do treat the prostitute like an object … because at that point it is an object, like your party members and everyone and everything else in the game. You’re treating it like a game, not like anything real. And, again, things we do in games aren’t things we think we want to do, or would do, in real life.

Now, these modes aren’t always easy to divide into neat categories; sometimes you play as yourself, and at other times, when immersion is broken, you play it as a game. But the key difference between a video game and a movie is this: at the times when you are most immersed in a video game, when its setting is filling your consciousness the most, that’s when who you are is most involved, and when you are imposing the most on the game and on what is happening. When the game stops letting you be yourself, or the persona you yourself have chosen to adopt, that’s when you stop being immersed and remember that it’s just a game. Whereas in a movie, in something you just observe, you are most immersed when your consciousness believes that this is, in fact, just how the world is.

So it seems reasonable to posit that when you watch a movie, you might learn things from it just as you learn them from the world: often passively. With video games, you don’t learn things passively, because you are actively involved. To explain further: many of our attitudes we adopt simply because that’s how the world appears to be; we pick them up by osmosis. These become subconscious biases, and those are the hardest to overcome if they’re wrong. Thus, the more actively involved you are, the fewer of these attitudes you would adopt passively and subconsciously. And games require you to interact far more with the world you’re observing than a movie does.

Now, this is all speculation, and much psychological work needs to be done to settle what is actually the case. But it isn’t obvious that participating in an activity, or being forced to participate, is worse than simply observing it, and that’s what a lot of the panic around video games relies on. It seems like common sense that actually doing a bad thing is worse than just watching it, but that may not hold when the bad thing is part of a game rather than something you know and accept as real. Media depictions may matter, but participation may move it from depiction to role play … and we all know the difference between what we role play and what we’d do.

Don’t we?

Philipse on the Reformed Objection

October 23, 2014

So, a while ago I took up the challenge of reading Philipse’s “God in the Age of Science?”. It didn’t go that well. I was going along fine … and then I hit the section on Plantinga (Chapters 3 and 4), and I wanted to say stuff about it. And, as is usual for me, I just never got around to writing that post. Now, I could have just kept reading, but I’d noticed that when I do that I generally never go back to write up the little things I wanted to talk about. So I decided to wait. And I waited … and waited … and waited.

So, here’s the post. I’ve decided not to go back and re-read the chapters in detail, so this is mostly from memory with some spot checking, which means I might be misremembering or misinterpreting him. But I don’t think that matters much for what I have to say anyway.

The most interesting thing is that what Philipse relies on against Plantinga is essentially a variation of the geography argument: you could think that you have a sensus divinitatis and feel justified in that claim, except that other people have come to a different conclusion than you have, which means that your claim isn’t justified. This is an interesting tack to take because, as we’ve already seen, Plantinga has taken on that argument and found it wanting. So it’s odd that Philipse relies on an argument that Plantinga has already dealt with, without addressing Plantinga’s reply. He even points out that he ran the chapter by Plantinga, and yet still didn’t feel the need to address it. What gives?

Well, it turns out that the argument doesn’t depend on any kind of Geography Argument at all. It is instead an argument that if two people claim to be using the same method and come to different conclusions, then at least one of them is wrong. So if A uses their sensus divinitatis to conclude that the Christian God exists, and B uses the same capacity to conclude that a Hindu god exists, then we have an issue: both are using the same method, so you can’t appeal to the method itself to break the tie. We therefore need some kind of external justification for the claim that A’s capacity is working and is correct, or vice versa. And it doesn’t look like we can get that without some kind of rational argumentation, or a rational natural theology. Philipse’s whole point here is that you can’t use the sensus divinitatis to do an end run around needing a rational natural theology.

Now, as one of my initial objections stated, this might work against a knowledge claim, insisting that theists can’t use the sensus divinitatis to claim that they know God exists. It doesn’t work at all against someone who merely wants their belief to be rational. While Philipse talks a lot about how you have no reason to prefer your conclusions to theirs, that only matters if you are making a universal knowledge claim. If you are just trying to decide what to believe, you have every reason to trust your own conclusion more: it’s your conclusion. If you read the Bible and simply feel that a certain conclusion is true, then the fact that someone else tells you they get the opposite reading isn’t going to sway you from your conclusion, and it ought not to. It may cast doubt on your conclusion, but it doesn’t prove theirs either. And there’s no real reason to force yourself into a neutral stance just because someone else comes to the opposite conclusion. So this doesn’t impact theists who aren’t making knowledge claims at all.

And the discussions of how the sensus divinitatis might be like sense perception or memory are more revealing. Philipse tries to argue that perceptions contain a link to truth and to truth-makers that this capacity couldn’t have. But the truth of sense perceptions is not exactly justified itself. If we imagine that the sensus divinitatis works like sense perception, then when someone reads the Bible or sees some wonderful natural sight, the truth of God’s existence comes to them full-blown: it just seems obvious to them that God exists and has certain properties. And if that’s the case, we have to ask: what would we think if we saw something, and someone standing beside us said that they saw something different? In general, if I see something, I am justified in claiming to know that I see it, and from there I am justified in saying that the thing exists, and exists as I saw it. If someone else says that they saw something different, but we can’t check it any other way, am I no longer justified in claiming to know that the thing exists? Are they? Sure, at least one of us is wrong, but all that means is that one of us is wrong about our knowledge claim, not that we aren’t justified in making it. Unless, that is, you insist that knowledge requires certainty, and that you can’t claim to know something unless you are certain you are correct … which pretty much everyone rejects.

Now, it can be argued that with sense perceptions we have a way of testing our conclusions and settling which of us is right, which can’t be done with the sensus divinitatis. The problem is that we don’t really have that for sense perceptions either; every test of our sense perceptions requires us to assume that our sense perceptions are correct in the first place, which is assuming exactly what we were trying to prove. The sensus divinitatis has a different problem: we could use our sense perceptions to test it, but it doesn’t really make claims that are amenable to testing by sense perception. So, in that sense, we have a similar problem for both, for different reasons.

The key might be in what Philipse specifically says:

…what is present in perception and triggers these basic beliefs is not identical with their truth-makers. … these Christians are reading the Bible; they are not reading God.

This sounds like a claim that when we see the world, we see the world, but that when we read the Bible, we don’t experience God. But whether there really is a world to see is precisely the standing challenge for sense perception, and the claim above is that the conclusion that God exists might spring on us fully formed from reading the Bible. So that doesn’t seem like a promising line of argument. However, what I think he might be getting at is this: the reason we trust our sense experiences is that they, in and of themselves, present the idea of an external world to us, and their conclusions are indistinguishable from that; the instant we have a sense experience, we believe it is telling us about an external world, no matter what the experience is. Reading the Bible doesn’t work that way: in general, we wouldn’t come to the conclusion that God exists except for the fact that the Bible itself tells us so explicitly. We don’t read the Bible and think “Ah, God!” as an inherent part of the reading; instead, we read the Bible telling us that God exists, and that triggers our belief that God exists. So the idea is not spawned in us simply by experiencing the Bible, but by the Bible telling us about, and arguing for, the conclusion. Thus we always have to doubt the experience, and wonder whether we would have had it without the argument. That isn’t true of sense experience, which is why sense perception can ground a basic belief and the sensus divinitatis can’t.

How far this gets Philipse is unclear. He might have a good case against using this sort of revelation to ground a knowledge claim, but that won’t impact mere belief. And the parallels with sense perception are a lot closer than he seems willing to admit. But from here we move on to more natural theology, and then into the bulk of his argument.