Grand Slam of Curling: Boost National

November 20, 2017

So, this weekend was the Boost National. Jennifer Jones won 8-7 over Casey Scheidegger in a match that wasn’t quite as close as the final score indicated, as Scheidegger trailed by 3 or 4 points for most of the game (or, at least, pretty much every time Jones got a chance to make some shots). The interesting thing is that in almost every end the team with hammer scored multiple points, except for two steals (in the 6th, which essentially put it away for Jones, and in the 8th, when Jones only needed to get rid of one of the two rocks to win the game).

Also, Jones’ team struggled pretty much the entire time. Other than Kaitlyn Lawes, I’d seen every member of the team put up low numbers and miss shots they should have made at some point during the week, and Jones herself shot below her usual standard over the tournament. Despite that, they’ve now won 14 in a row on the Grand Slam circuit, but don’t seem to be on top form. Unless you’re in a league where every win gets you points, playing poorly but winning anyway is not how you want to go into a big tournament like the Roar of the Rings, which will determine which team will represent Canada at the upcoming Olympics. The reason is that if you aren’t playing that well and are still winning, there is a tendency to fall into a mindset that everything is okay and that maybe you need a few small tweaks. After all, you’re still winning, right? But a lot of that might come down to luck, or happening to make the right shot at the right time, or hitting teams that make mistakes at key moments, either in shot-making or in strategy. If you stop getting the breaks, those wins can turn into losses in a hurry. At least if you are losing, you know that you need to make some adjustments.

Speaking of which, Rachel Homan managed to get back to the playoffs, but then had a disappointing exit in the quarter-finals, losing to Scheidegger (who, admittedly, has been playing very well). Her play was at times a bit odd, as if she didn’t really know what she wanted to do and had no real plan. The commentators noted that she might be overanalyzing things, and I think that she might be trying to predict what her opponent is going to do based on what she would do, and then gets surprised when they do something different. In the game against Scheidegger, Homan at one point ignored a stone to place a guard, which Scheidegger then removed; that surprised the commentators, as it was a very conservative move, and after it Homan didn’t seem to have any overarching strategy, and seemed to be just reacting. While trying to see where your opponent is going to go is important, maybe Homan just needs to have confidence that she can make pretty much every shot they leave her and plan more on ensuring that she will always be left a shot instead of trying to outstrategize her opponents.

And speaking of strategizing, I can’t recall seeing a draw that Val Sweeting won that didn’t heavily rely on her stealing points, and thus in general on her opponents making mistakes and missing shots. Overhearing her discussions at one point, it seemed like she was planning for that, as she talked about them making a double and rolling out. Sure, you can have forced doubles where the shooter will definitely be lost, but it didn’t seem like one of those, so her opponent might well have been able to stick her shooter, meaning Sweeting was planning on them making a mistake. This might explain her results, then: she relies on her opponents making mistakes, and when they don’t, she struggles. She went 4 – 0 in the round robin but, like Homan, lost disappointingly in the quarters.

Next up is the big one, the Roar of the Rings, which will determine which teams represent Canada at the Olympics.

Does Science Justify Itself?

November 17, 2017

So, over at his own blog, Coel has recently taken on an article by Susan Haack arguing that science needs philosophy, at least for justification. Coel is a notorious scientismist, and so will obviously take umbrage at that argument. What is interesting is that his post went up less than a week after my own post on how science may not be trustworthy and needs to do some things more philosophically in order to avoid making the mistakes it tends to make, meaning that the two posts, completely independently, take directly opposing views on science, its reliability, and how science and philosophy need to proceed. I made a comment there pointing out some of the issues I raised in my post, but here I’d like to go through Coel’s post directly and respond to some of his comments.

Let me start here:

The second claim, though, is that where science is limited in scope, other “ways of knowing”, such as philosophy, can arrive at reliable knowledge. It is this second claim, not the first, that scientism denies.

(It’s also worth pointing out that advocates of scientism intend a broad definition of science that includes rational analysis as much as empirical data, and thus encompasses modes of enquiry that others might regard as outside science.)

As I’m sure I’ve pointed out to him before, this expansion of the definition of science is essentially cheating. It’s pretty easy to justify a claim that science is the only way of knowing if anything else that ever produced any form of knowledge gets counted as science. This move is made even worse by the fact that philosophy has a far better claim to that mantle than science does, since it can employ pretty much every method ever used — including strict empirical science — while science has more restrictions on its methodology, as Coel himself concedes. If the scientismist’s definition of science is so expansive that it folds philosophy and all other possible ways of knowing into it, then it’s probably not a good definition … and, in fact, doing so makes the question of whether philosophy is a way of knowing, or can arrive at knowledge as reliably as science does, meaningless, since philosophy itself would be science by that definition, no matter what methodology it was using.

So Coel can’t “cheat” by redefining science any time anything else manages to find out something. He needs to have a solid definition of science and the scientific method and what that entails and implies, and what assumptions it makes, that we can use to distinguish other potential ways of knowing from science so we can evaluate if they exist and if they can work. If he just wants to expand the definition to include all possible ways of knowing in science, then I’ll adopt philosophism and insist that the only way of knowing is philosophy, and have the stronger justification that science was originally philosophy and that none of its methods or conclusions are ones that philosophy could not use or arrive at.

Coel’s big justification for science boils down to a claim of “it works”:

I consider it to be fundamentally wrong. Scientists don’t look to philosophers to justify their subject, they consider that science is justified because it works. And by “works” we mean that it leads to predictions of eclipses that come true, it leads to medicines that can cure people, it leads to accounts of reality that demonstrably contain understanding of the world. This is best demonstrated by technology. Computers work, iPhones work, aircraft fly, MRI scanners work.

Science can use its most esoteric theories to predict the existence of gravitational waves emitted by colliding neutron stars or black holes — both exotic concepts far beyond the world of everyday experience — and then build hugely complex machines of impressive technological mastery to detect such waves, and then use them to observe exactly what was predicted, complete with the characteristic in-spiral pattern.

Further the methods of science are not the product of philosophy but are themselves the product of science. By that, I mean that the methods of science are the product of experimentation, trying out different approaches and seeing what works best.

The problem is that if the methodology of science were really nothing more than “trying out different approaches and seeing what works best”, science would never have produced all of those wonderful things that he waxes eloquent about here. That methodology is the hallmark of what I have in the past called “everyday reasoning” but will now call “folk reasoning”, which is how most people reason most of the time. It generally takes empirical observations and forms rough and loose theories about what they mean, which are then used to make basic predictions about what will happen if you take a specific action. It tends to use a rough inductive approach as well, assuming that if something has always occurred, it will keep occurring. If its predictions turn out to be wrong, it adjusts its beliefs, doing as little damage to the overall web of beliefs as possible, and continues on. It also tends to test only for confirming evidence and not disconfirming evidence, as seen in the Wason card task. As such, it can hold on to false beliefs for a long time, through being able to adjust the web of belief to accommodate them and the new data/beliefs, and through not testing the disconfirming cases to see if the beliefs really hold. However, for the most part this “works”, as we get a web of belief that we can use to navigate our everyday lives and achieve our goals without too much difficulty.

So how does at least formal science differ? Science rejects the simple induction of folk reasoning in favour of the hypothetico-deductive method, where instead of merely gathering empirical data and generalizing using ad hoc theories it builds an entire hypothesis that it justifies using deductive logic, insisting that if the premises of the hypothesis are true then the conclusion must be true. Thus, what it builds is not merely a hypothesis for what it has observed, but more importantly an explanation for what it has observed. And it doesn’t stop there. Since it has this deductively justified hypothesis/explanation, it can trace out the consequences of that hypothesis and then actively seek to disconfirm it (ie falsify it), which allows it to find false beliefs or hypotheses earlier and then adjust appropriately. A good example of how the two methods differ is the old “The Sun rises in the east” example. For folk reasoning, the fact that the Sun has always risen in the east is sufficient to justify the claim that tomorrow it will rise in the east. For science, that’s not sufficient, because of the inductive fallacy; just because it has always done so doesn’t mean that it will indeed do so in the future. So science instead finds the explanation for why the Sun rises in the east: it’s because of the Earth’s rotation. Because of this, it knows that as long as the Sun exists and the Earth keeps rotating like it does, the Sun will rise in the east. Both methods work, in that they predict that the Sun will rise in the east every day, and lo and behold it does. But the scientific approach is much more robust and reliable.
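
To make the contrast concrete, here is a minimal sketch in Python (entirely my own illustration; the hidden rule and the hypothesis are invented): a reasoner who tests only the cases their hypothesis predicts will work can “confirm” a too-narrow hypothesis forever, while testing the predicted failures falsifies it immediately.

```python
# The true rule, unknown to the reasoner: any even number "works".
def hidden_rule(n):
    return n % 2 == 0

# The reasoner's too-narrow hypothesis: only multiples of 4 work.
def hypothesis(n):
    return n % 4 == 0

# Folk reasoning: test only cases the hypothesis says should work.
confirming = [n for n in range(1, 100) if hypothesis(n)]
print(all(hidden_rule(n) for n in confirming))  # True -- every test "confirms" it

# The scientific habit: also test cases the hypothesis says should fail.
predicted_failures = [n for n in range(1, 100) if not hypothesis(n)]
unexpected_passes = [n for n in predicted_failures if hidden_rule(n)]
print(unexpected_passes[:5])  # [2, 6, 10, 14, 18] -- falsified on the spot
```

Both reasoners “work” in the everyday sense; only the second ever finds out that the belief is false.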

Folk reasoning is less reliable, but is faster for simple claims, which is why it is what we use most of the time, and it is reliable enough to work out. Science is more reliable, but also far slower if it’s being done properly. Philosophy is even slower, because it is more skeptical and keeps challenging even the base principles that the ideas and explanations are built on. In theory, all of these methods could arrive at all true propositions eventually — putting aside certain underlying assumptions, which are more prevalent in science than in the other two — but we can see that each method is best suited to certain questions. Folk reasoning works for questions that are directly empirical, where 100% reliability isn’t required, and when you need an answer quickly. Science works best for empirical questions that have consequences extending beyond the actual question, or where being right is more important. Philosophy works best for questions where you aren’t really sure what approach to take, or that are about the basic underpinnings of things like experience or methodologies, because it’s not bound to any set of assumptions or methodologies. All of these are useful and can produce knowledge in their own way.

In his zeal to define science as the only way of knowing, Coel, as usual, underestimates just how radically different science is and just how important those differences are to its success. In so doing, he ends up making science a pale shell of itself, and also imports failures into the very method that he reveres for getting things mostly right.

If science necessarily adopts assumptions A, B and C, and then science built on such assumptions works in the real world (“works” in the above sense), then that demonstrates that assumptions A, B and C are real-world true. In other words, A, B and C are no longer assumptions but have now been tested and validated by the fact that adopting them works.

(If one wants to retort that, while science assumes A, B and C, it doesn’t actually test them because it could instead adopt P, Q and R, without any resulting change to observations, then this would mean that A, B and C were not necessary assumptions, and thus that they are not key underpinnings of science.)

This isn’t true, though. I’ll make a philosophical point here, and note that it may be a key underpinning of science — or, at least, scientists may assume it is such — that A, B and C are the case, and yet not be necessary for science to assume them. The example I’ll use here is the naturalistic assumption, the idea that all answers about the world will be natural ones. Many scientists like Coel assume this, and it is argued that this is a fundamental assumption of science. It will also be claimed that this assumption generally works, as all of the explanations science has discovered in the past have been natural, and science has had success in overturning supernatural claims and replacing them with natural explanations. But I would reply that it is unnecessary to make this assumption, and even potentially detrimental to science as a way of knowing to do so. If the natural explanation really is the truth, then the evidence and methods of science should be able to make that clear. If it isn’t, then simply assuming a natural explanation is more likely to be true isn’t the right approach; at best, your knowledge here would be strongly, strongly provisional and dependent on future investigation and evidence, and so might be proven wrong at any time. This only gets worse if the determination is that supernatural explanations are so improbable that any natural explanation is to be preferred, such as concluding that someone is either lying or hallucinating for no reason other than that taking their experience at face value would mean that the supernatural explanation is true. If all experiences of the supernatural are dismissed because they would support supernatural claims, which are assumed to be false because all true explanations are assumed to be natural, then science will never accept a supernatural explanation … even if that is the one that turns out to be true. The only way around this blind spot would be to weaken the assumption of naturalism … and the best way to do that would be to eliminate it entirely.

There have been attempts to justify the naturalistic assumption, but all of them end up being philosophical arguments even by Coel’s standard, because they cannot appeal to “it works” as a justification: science with the naturalistic assumption and science without it would, in fact, both work equally well. And, in fact, one of the problems with simply relying on “it works” as a justification is that there is no principled way to differentiate between two methods or theories that both work equally well in predicting the existing empirical data but are radically different. Even appeals to things like parsimony end up either being unjustified or requiring philosophical justifications, because “it works” can’t differentiate the claims sufficiently.

So, yes, you can have assumptions that are fundamental to science but that are not necessary for science. Unless Coel wants to drop all of those from science, he’ll need a philosophical argument to justify why science should retain them.

Thus Haack’s claim that, without philosophy, science would be “adrift with no rational anchoring” is misconceived. Science is not founded on “rational anchoring”, that is, reasoning from key axioms that can be known a priori (and nor is it founded on axioms that can only be taken on faith). Indeed, it can’t be, because philosophy has no way of arriving at a priori knowledge.

Instead, science is an iteration between a whole web of ideas and models, and the comparison of that web of ideas to the real world that we experience empirically through our senses. Science is not anchored in philosophy, it is instead anchored in the empirical world and justified by the fact that it works; that is, by the fact that the ensemble web explains the real world, enables us to make good predictions about the real world, and enables us to develop technology that works.

Well, how does Coel know that philosophy can’t arrive at any kind of a priori knowledge? Also, why does Coel assume that philosophy can’t use empirical data and “what works” as much as science can? When philosophers reject empirical approaches as a way to answer a question, they aren’t doing it because empiricism is icky or anything, but because as they analyze the question they note that empirical data’s not going to answer it. Coel continually denies philosophy any kind of method that he wants to claim works, and then chides philosophy for not being able to find out what works in the real world. This is not a reasonable move, and so philosophers ought not accept it.

To Susan Haack’s suggestion that: “none of the sciences could tell us whether, and if so, why, science has a legitimate claim to give us knowledge of the world”, the reply is that it demonstrably works, and that since it does, that shows that it has a legitimate claim on knowledge (since that’s what “knowledge of the world” ultimately means).

To quote a professor I had once: Says who? Why should we accept that “knowledge of the world” ultimately means “what works”, especially given that, in at least limited circumstances, false beliefs can work? And who says that we can’t derive truths about the world without having to test them against the world? Heck, what does he even mean by “world” here? All of these are philosophical questions that Coel is assuming answers to and then insisting that he has justified by “it works”, when all of those assumptions could be false and yet still work.

And it is a good thing that science is not anchored in philosophy, since philosophy itself has no way of justifying its tenets. Contra Haack, the problem would be if science did depend on philosophy, rather than justifying itself by a boot-strap iteration with empirical reality. Because we’d then have no way of establishing whether it had any more validity than other “ways of knowing”.

Again, who says that philosophy can’t and hasn’t used that same boot-strap of empirical reality? In fact, it already has done so, which is what led to science existing in the first place. Philosophy only rejects appealing to empirical reality when an examination of the question reveals that you can’t appeal to empirical reality to answer it. Coel repeatedly either assumes answers to those questions and then justifies them by an appeal to empirical reality that can’t justify those answers, or else dismisses the questions as at best uninteresting. But that doesn’t mean that he’s really answered them, or that his method can answer them, or that the answers to those questions can’t be known. He’d need philosophy to do that … and that’s what he continually refuses to acknowledge, while still insisting that his philosophizing isn’t philosophizing, and that he’s really doing science.

Whatever science is to him. Which is another philosophical question he’d probably want to answer and justify at some point …

Loot, There It Is!

November 15, 2017

So, Shamus Young wrote a post answering a reader’s question about whether we might be entering into another PC Golden Age. This is a bemusing topic for me, since I recently bought five new PC games … all from Good Old Games, all older games, and one of them, at least, I bought because I originally played it on the Amiga. I can’t think of any new PC games I want to play and, in fact, can’t even really think of any new console games that I want to play. While I’m more likely to buy a new video game in the next few months, I’m far more likely to be anxiously awaiting a new board game than I am a video game (Legendary: Buffy in particular). So to me it’s not looking like much of a Golden Age, for gaming in general and PC gaming in particular.

But, I recently picked up a new game. And it contains one of the big things that Shamus doesn’t like: loot boxes. And yet … I kinda like the loot boxes. But on reading Shamus’ complaints — and especially the links to what they do in Star Wars: Battlefront 2 and Shadow of War — I can definitely see the potential problem with them.

See, the thing is, they work for me in Injustice 2 because I get credits and loot boxes for, well, doing the things that I want to do in the game. Supposedly, they can increase stats and so add benefits and power to your character, but I generally play the game on the easiest mode possible anyway, so I don’t really care. I like the idea of using them to customize characters, although I don’t get quite the same thrill on opening them as other people will, because the list of characters I like to play with is very short — Supergirl and Black Canary — and so most of the stuff that comes up is stuff that I have no interest in. Which, of course, means that I’m completely immune to any kind of addictive or even gambling type of stimuli, because I’m not opening each box with trepidation hoping for something really, really good. I’m more in the mode of opening up the things that I earned to see if there’s any goodie inside. And yet, there still indeed is a bit of a thrill in opening them up and seeing what I got.

And this, then, triggers the problem, as was pointed out in this article about Star Wars: Battlefront 2:

This system is just miles worse than a traditional progression system that allows players to choose what they want to upgrade. While loot boxes could work in a game like this sprinkling in extra stuff here and there, used as the entire core of the progression system, it’s beyond frustrating. You’re now not just grinding for upgrades, your grinding for the chance at an upgrade that you actually want …

Which brings it back to my experience with Injustice 2. Since I only like a small number of characters, what I’d like to do — even if all of the loot is merely cosmetic — is buy stuff for those characters, at least at first. And for those characters, I’d like to buy the things that I want them to have, like specific costume options or things that improve the traits that I most rely on in fights, or whatever. Under the old models, even and perhaps especially in fighting games, I fought through enough to get enough credits to buy the things I wanted, and in general I saw exactly how much that would cost me well in advance. Since I could also figure out how much I earned on average doing specific things, I knew (roughly) how long it would take me to get there, and thus, if I had to grind, how long the grind would be, and could decide if it was really worth it for me.

With this system, I have no idea how long that will take. I have to grind to get the credits and/or boxes, and then when I open them I’ve either gotten something I wanted or I haven’t. If I have, then I can stop. If I haven’t, then I need to keep grinding again. In the worst case, I’m spending lots of time doing things I didn’t really want to do in the hopes of getting what I want or need, in an endless cycle of earning, looking at, and then sighing and going back to the things I don’t really want to do again.
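
To put rough numbers on that unpredictability, here is a minimal sketch (the pool size and drop rate are invented for illustration; I have no idea what Injustice 2’s actual odds are). With purely random drops, the wait for a specific wanted item follows a geometric distribution, and the tail is long:

```python
import random

def boxes_until_wanted(pool_size=200, wanted_items=5):
    """Open loot boxes until one drops any of the items we actually want."""
    opened = 0
    while True:
        opened += 1
        if random.randrange(pool_size) < wanted_items:  # uniform random drop
            return opened

results = sorted(boxes_until_wanted() for _ in range(10_000))
print(sum(results) / len(results))        # mean: ~40 boxes (pool_size / wanted_items)
print(results[len(results) // 2])         # median: ~28 boxes
print(results[int(len(results) * 0.95)])  # 95th percentile: ~119 boxes
```

Under the old model the cost was a known constant; here, even knowing the average, one player in twenty grinds roughly three times that average before seeing anything they wanted.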

But this gets worse, as I think that game designers have underestimated how much hope turning into disappointment can impact the attitude of a player. If I get enough to buy an appropriate loot box, I will feel some hope that maybe, this time, I can get what I wanted. And then that hope will come crashing down when I don’t get what I wanted. In fact, if I end up only getting things that I don’t want, that disappointment will only be heightened. Essentially, the game will continually force the player into a cycle akin to Christmas morning, where they run downstairs to open their present full of hope that they’ll get that gift they’ve wanted for ages … only to be disappointed when they find out that that’s not what they got after all and, even worse, they got something they don’t like and can’t use. And the game will do this over and over and over and over, in the worst and likely even in the most common cases. That can’t make the player pleasantly disposed towards the game.

The only way to avoid this is to make the loot boxes and the things they contain unimportant: nice little asides that you can open that might have something interesting in them but in general might just have trash. So as you earn them, you open them up to see if you get anything neat, but you are never really looking to get anything specific or really interesting from them. The problem with this is that if they are that unimportant/uninteresting, no one will grind to get them and certainly no one will pay real money for them, and so they won’t achieve any of the goals that the developers are putting them into the game to achieve. But if they make them important enough for that, then their random nature will cause players to be very upset at being jerked around. And unlike CCGs, these loot boxes exist pretty much entirely within the game itself, so anger at that system will definitely carry over to the game itself.

So, the only reason that I like the loot boxes in Injustice 2 is because I don’t really care about what’s inside, and so there’s some fun in the anticipation of what I might get and I think that it might be neat to see what some of the options are. But I certainly don’t care enough to play the game to get more of the boxes and see what’s inside; if I’m not having fun playing the game, I’m not going to play it to get the next loot box, so adding the loot boxes to the game didn’t add any incentive for me to play the game. And if I did care, getting a ton of Joker and Green Arrow gear when I really wanted Black Canary and Supergirl gear would have made me quit the game in disgust. If I like the game, then I’ll play it regardless of the loot boxes, and if I really want the loot then the loot boxes not giving me a predictable path to the loot I want is going to frustrate me more and sour me on the game. It’s only if I actually get addicted that these moves can pay off … and that’s probably not a market that games want to deliberately cater to.

Thoughts on Injustice 2

November 13, 2017

As I’m sure I’ve stated before, I tend to play fighting games for the story, and not for the actual fighting. I had picked up Injustice a while ago, played it, and enjoyed it, and so when browsing in EB Games I came across Injustice 2 for the PS4 and decided that I’d give it a shot. Since this is a recent game and I’ll be talking heavily about the plot, I’ll put the rest below the fold.

You asked for it …

November 10, 2017

So, over at Feminist Frequency, Carolyn Petit has posted a commentary on Super Mario Odyssey. However, her really big complaint ends up being about something that the game pretty much did to subvert gender expectations and the damsel in distress trope in the way that Sarkeesian’s entire “Tropes vs Women” series seemed to call for. It’s no surprise that this didn’t turn out to be a good move, and only a slight surprise — presumably to people who haven’t been paying attention to how the Social Justice side generally engages with games — that Petit doesn’t like it.

Before I get into that, though, I want to talk about the Tiara and the Cap (and the thief of the night):

This time around, it’s not just Peach who needs rescuing. There’s also Tiara, the sentient crown Bowser has snatched to rest upon Peach’s head during the nuptials he’s rapidly arranging. Now, Tiara is not just a living hat. No, Tiara is a female hat, and with her in danger, her brother Cappy rides along on Mario’s mop, giving him the remarkable powers he needs to complete his quest.

I mean, look. In a series that has been relying on gendered tropes for decades, if we’re gonna go so far as to gender the hats, couldn’t we at least switch things up and have the female hat (Hattie, perhaps?) ride along with Mario on a quest to rescue her brother? But no, Odyssey does damseling twice over, delivering a one-two punch of reinforcing those good ol’-fashioned video game gender norms.

So, here’s the issue. They came up with the idea of using parts of the characters’ apparel as sentient beings that can help out the characters, or at least be confidants for them (I don’t know how much of a role the tiara plays in Princess Peach’s story throughout the game). They chose their typical headwear … or, at least, what would be typical headwear for their occupations (a cap for a plumber, a tiara for a princess). Now, these clothes are in some sense gender-typed; while women can indeed wear caps, men don’t generally wear tiaras, a cap would not go with a princess outfit, and a tiara would not go with a plumber’s outfit. With the tiara, at least, being strongly feminine, if they had tried to make the tiara male and the cap female, the incongruity of a masculine tiara would have turned it into a joke, and it would have lent itself to more and more jokes about that incongruity, which would have annoyed Petit to no end, I imagine. The only way around that would be to make the cap and the tiara both non-presenting trans, which would have introduced many complications and more serious content than a Mario game — primarily aimed at kids — would want to take on. So they took the easy way out and made them match expectations, in a way that really isn’t any more problematic than what they were doing with Princess Peach in the first place and in all of their other games … which Petit then gripes about as a doubling of damseling.

Sorry, but that criticism seems both petty and ignorant of the potential consequences of the switch, including the idea that Peach might be controlled by a male character in some sense (depending on the role of the tiara in the game, which I haven’t played).

But now onto the scene that she really hates:

The final battle takes place as Mario literally crashes Bowser’s wedding ceremony. Once the battle with Bowser is at an end, Mario, Peach and the Koopa King are together on the surface of the moon. Bowser, not entirely out of steam, charges up to Peach with an offering of a piranha plant, still trying to win her over. And here’s where things really got weird for me. Mario also crowds Peach, holding a flower, engaging in a moment of “pick-me!” rivalry with the Koopa King. For a few seconds, the two dudes elbow and jostle each other, pushing their respective flowers in Peach’s face.

Now, this is a really messed-up thing for Mario to do, a vile position to put Peach in. Furthermore, until this point in the series, it’s remained plausible that Mario’s motives for rescuing the princess were mostly selfless. One could say that he simply objected to her freedom being infringed upon, and didn’t want a brute like Bowser getting away with his dastardly schemes.

However, this moment suggests that it’s not that at all, that the real reason he’s rescued Peach so many times is because he wants her for himself. I’ve made countless jokes with friends over the years about how the surprise plot twist of the Mario games will someday be that Mario was the villain all along, but this game was the first that kinda made me believe it. It was impossible for me not to think about the twist ending of the Mario-influenced game Braid, in which the protagonist Tim is revealed to be a stalker, not a hero. Peach has long served as a reward for players in these games, but this scene made me think that Mario, too, sees Peach more as a prize than a person.

To her credit, Peach doesn’t deign to give Mario so much as a kiss on the cheek, but instead gives both of these jerks the cold shoulder and walks off, at which point Mario and Bowser take some solace in their shared rejection. I guess at the end of the day, Bowser is really just another one of the Bros., and, well, you know what they say about Bros.

Yeah, and do you know why all of that is there? To set up that scene where Princess Peach rejects them both and storms off in a display of female empowerment, to later cruise around the world herself having adventures. This is clearly an attempt to subvert the damsel in distress trope — and, particularly, the “Women as Reward” trope — in precisely the way that Sarkeesian had talked about in the past. Yes, to do that you have to derail Mario into someone who presumably was at least seen as being in this for the reward of the love of the princess instead of just trying to do the right thing, but what’s derailing an entire male character when compared to making that obviously visible pro-feminist statement? Which Petit, of course, likes; it’s the turning of Mario into, in her words, a “creep”, and the fact that Princess Peach didn’t get to do more, that she objects to. Um, despite the fact that Mario falling in love with her isn’t actually unreasonable, and that the only thing that, to me, makes his timing suspect is that Bowser isn’t actually real competition. If Bowser were seen as real competition that Peach might have chosen (but only if she didn’t believe Mario felt that way about her), then the timing would be necessary, somewhat romantic, and would fit the normal trope that people really should express their feelings about each other if they have them.

Anyway, why did this scene flop for everyone? Because it put, it seems to me, the feminist message ahead of telling a good story. Petit can argue that it’s there just for a cheap joke, but with the final sequence where Peach goes off to be an independent woman having her own adventures, that’s hardly likely. No, it seems obvious to me that they wanted to do the sort of subversion that people like Sarkeesian and Petit ask for and didn’t care if they derailed the existing characters to do it, and they instead ended up getting complaints because they derailed Mario into someone who is non-feminist (ie a “creep”), with nary a mention that he was derailed in a terrible way specifically to promote a feminist message. Feminists didn’t like it because it wasn’t feminist enough, in that Peach got limited freedom and Mario fit their idea of a “creep” or “Bro”, and non-feminists — or, at least, those who pay attention to the underlying theme — won’t like it because it derails Mario for a ridiculous feminist subversion that even the feminists don’t care for. This is precisely what happens when you try to satisfy the vague and poorly thought out demands of much of the Social Justice line instead of looking at your games and your story and deciding what you want to do. In short, don’t listen to what they say they want, but look deeper to see if there’s a valid complaint, and do the work to fix that complaint.

Of course, if you do that, shallow analysis might still have them up in arms. But shallow fixing of complaints brought about by shallow analysis won’t make anyone happy. Least of all you.

The Importance of Goals

November 8, 2017

So, recently, Extra Credits did a video on what they have dubbed “The Arbitrary Endpoint Trap”. Essentially, this is the case where you are no longer playing a game just to have fun with it (in fact, you might even be having less fun than you were earlier), but have an endpoint or goal in mind and are trying to finish that before stopping for the night. In general, they are against this sort of thing, and encourage players to stop when they stop having fun, and to notice when games are trying to use these sorts of arbitrary goals — in some sense meaning “in-game goals that have no real impact on the player” — against them, to get them to play longer or to play their game instead of another game.

There are a couple of problems with their analysis. The first is that often gamers — especially casual gamers — do get an improved experience from completing these “arbitrary” goals, even if the pay-off only comes when you start the next session. The second is that the sorts of goals they cite for mobile games are actually a different mechanism altogether, being more of the “one more turn” type than the “let me get this next level and I’ll stop for the night” type. I’d actually argue that in most of the cases that they cite as being problematic, the problem is that there aren’t enough arbitrary goals, or that they are in fact spaced out too much, not that they exist at all, and that adding more goals might solve more problems than trying to avoid making or following them.

As a casual gamer, one of the big hurdles I face in playing games is the fact that in order to really feel satisfied with my experience, I have to feel like I’ve made some sort of progress in the game. As I commented when trying to decide what to play after Persona 5 — as well as at other times — it is important to me to feel like I’ve managed to get somewhere in a game in a session, that I’ve made some progress or that something has happened. Record of Agarest War is always problematic for me, since there are long stretches of grinding where you don’t advance the plot and only gain levels and skill points in the hopes of eventually being able to take out the boss you need to beat to finish the dungeon and advance the game. It’s particularly bad because it’s not the combat that I like about the game, but instead the story and the dating sim elements, so I end up spending many sessions doing things I like less in order to finally overcome the boss and therefore be able to do the things that I actually liked to do in the game. Persona 3 had the same problem for me, because the dungeons weren’t all that interesting, so I’d spend one session merely grinding through the dungeons and then the next advancing S-links, which were the things I most enjoyed about the game. And that was when I could exploit the fact that the MC’s level carries over on NG+ to blow through the dungeon in one night. When I had to start from scratch and couldn’t do that, the game was more of a slog than really entertaining.

Now imagine how much worse this is if you can’t play the next day, and your next session is a week or more away. You can really start to feel like you’ve made little to no progress for months if you aren’t achieving any goals and are just grinding your way towards them. So a bunch of small, achievable goals will break up the grind and give you a good place to stop a session where you can feel that you’ve accomplished something in that session. This, of course, will mean that at times you will push on towards a goal to finish it rather than because you are maximizing your fun in the game, but the satisfaction you get from the feeling of having made progress will generally make up for that.

The video also ignores that there is often an experiential cost to stopping in the middle of something and trying to pick up where you left off in your next session. Let’s take their example of a book. I might want to push through a book and read the last ten pages even though I might enjoy them more if I were more awake and, well, wasn’t making it a goal to finish the book tonight. And the reason is that the next time I sit down to read, I can start a new book and get into it without having to instead go back to that previous book, get into reading it again, only to have to put it down ten pages in — because it’s finished — and then pick up another book and get into that one. And presuming that there is at least some suspense in it, or at least something driving me to finish those ten pages, leaving those ten pages to the next session can, in fact, leave me feeling tense and anxious to finish it. If I’m actually enjoying it, then I want to keep going and get to a nice end point, and stopping in the middle will always feel a bit less satisfying to me, even if I’m not enjoying the book as much as I would when fresh.

Games, especially for casual gamers, can be even worse. Stopping in the middle isn’t always that easy, because for any game that is at all interesting there are things that you are trying to do and, depending on the game, various different things that you are trying to keep track of. In a strategy game, you usually have multiple objectives — including attacks, defenses and build and research queues, among others — that you are working towards. In RPGs, you usually have multiple quests on the go, and are also looking at level progression, equipment progression, and possibly even companion quests and influence. In adventure games, you almost always have to end on a puzzle that you are trying to figure out. And we can see that stopping in the middle of a match in a sports game would be terribly annoying. All of this only gets worse when your next session isn’t going to be the next morning, but is instead going to be the next week. Or perhaps even longer. Not only does that mean that you’ll have to take some time to get back into the game, you also might well have forgotten all of the things that you were trying to do, and have to spend time trying to remember that before playing again. And forgetting some of those things, depending on the game, could mean that you ruin your game because you, say, forgot to bolster the defense of that city or forgot to seek out your companion to talk to or trigger a quest before you take on the next mission and so lose the opportunity forever.

Or you could push on at less than optimal gaming fun for an hour or two and finish that, and start “fresh” the next day.

See, the thing about these “arbitrary endpoints” is that while they are in some sense arbitrary, when done properly they are also natural endpoints. They are a goal or achievement that you can use to say “Whew, that’s done, so now I can forget about it and do something else”. As such, they work to mark progress: you get something completed and thus can point, at the end of your session, to all of the things you got done. They can cap off a progression, meaning that you don’t have to remember where you were in it the next time. And they can make a natural ending point for the night, like the end of a chapter, so that you can stop at something that has even a minor resolution and start over from something that is a natural new starting point. These sorts of endpoints are important for the enjoyment of players, and for keeping them from getting completely sucked into an endless progression where they are still playing but not having fun because they haven’t hit a good place to stop yet.

Which brings us to their comments on mobile games taking advantage of our tendency to create and chase goals, because the problem with those games is that they don’t in fact have natural “arbitrary” endpoints, and so encourage “one more turn” sorts of gameplay. The game that most hits this for me is Star Wars: Rebellion, but the game that probably most exemplifies it is Civilization, where players keep playing for turns and turns and turns, barely noticing the passage of time, and constantly thinking “I’ll just play one more turn” when they know that they really, really should stop. So why does this happen? Well, the issue here is that the players don’t really want to play for “just one more turn”, but instead want to play until one or more things get finished … but as those things get finished, you’ve had to start other things that also need to be finished, and all of these progressions have multiple chains where finishing one progression immediately starts the next one, and so on and so forth. So, in Rebellion, you might want to wait to finish building your Star Destroyer so that you can add it to a fleet, but then when it’s produced you want to send that fleet to capture a planet, and when that’s done the planet is now in uprising, so you want to quash that, and send diplomats to make it happy and turn it to your side, and then when that’s done attack the next planet, and so on and so forth. Meanwhile, the Rebels are threatening one of your planets so you want to chase them away, and Thrawn has figured out how to build Lancers so you should start building some for your fleets, and new ships have been built to add to fleets to attack more planets, and you want to see how your probes in the Outer Rim turned out and … well, you get the idea. There is no natural endpoint in the game, no point where you can say that nothing is happening so it’s a good time to stop. Sure, there’s downtime in the game, but that downtime is spent watching the progress bars fill so that you can get something done and move on to the next thing; there is no point in the game where there is nothing that you are waiting for.
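
As a toy model of why (my own sketch with made-up numbers, not anything from the video): if you track several staggered progression chains, where finishing one task immediately starts the next, a moment where nothing is close to paying off almost never arrives.

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

# Six staggered progression chains; each value is the number of turns until
# that chain's next payoff, and finishing one immediately starts the next.
chains = [random.randint(1, 10) for _ in range(6)]

lulls = 0
for _ in range(10_000):
    chains = [t - 1 or random.randint(1, 10) for t in chains]
    if min(chains) > 5:  # nothing pays off for 5+ turns: a natural break
        lulls += 1

print(lulls / 10_000)  # ~0.0004 -- a clean stopping point almost never comes
```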

This is precisely the sort of progression that they note is being exploited for mobile games.

Compare this to games like The Old Republic or Persona 5, or even games that rely heavily on “one more level”. In TOR, when you finish a section or especially finish a planet, you have to either move on to the next area or return to your ship to move on to the next planet. While there are always quests to push you forward if you want them, finishing a planet or area gives you a natural down period when it seems reasonable to stop. For me, my playing of TOR was pretty much done that way: go through one section — which used to take me about three hours, doing every available non-heroic quest — and then return to the cantina for that sweet, sweet rest XP before doing the next section of the planet. While the quests often pushed towards the next area and tried to rush the player, knowing that there was no real rush and that I’d have to travel somewhere anyway — or, at least, had just spent my time walking to the new area and so had had a lot of dull downtime — allowed me to quit the game for the day/night feeling like I’d accomplished something, and knowing that I wasn’t going to have to slog out to the middle of the area again if I quit early. In the Persona games, in general the dungeons and the deadlines gave a time when the game wound down for a little bit with a resolution before the next phase of the plot ramped up again. In games like City of Heroes, gaining a level — and a specific one — gave access to new powers that would be interesting for the next session and required me to trek somewhere specific, and thus to stop questing/grinding for a bit; finishing the level let me do that right before I quit for the night, meaning that I could start the next session doing quests and using that sweet, sweet new power that I got. In all of these cases, there’s a natural stopping point if you want one, but something that can push you forward if you don’t.

That doesn’t exist in those strategy games I mentioned, and likely not in those mobile games that they’ve talked about. And this means that you actually have to exert more willpower — or be more tired/bored — to quit those games, which then can lead to the problems that the video asserts. But the problem isn’t that those are arbitrary endpoints, but that the game tries as hard as it can to make there be no endpoints, arbitrary or otherwise.

This, of course, can backfire on them, especially in light of what I said about casual gamers earlier. If I have too many different progressions on the go, and come back to the game a week or more later, then I might forget what I was doing, or else might have a hard time getting back into the swing of the game. And if that’s the case, then I might screw up my game or at least not have as much fun playing it as I did that first session. And then I might stop playing it. Thus, their strategy to keep people from quitting their game might actually end up causing people to quit playing it, whereas if they’d even added something like a chapter break it might allow their players to get a sense of accomplishment and be better able to pick up where they left off the next day.

Arbitrary endpoints aren’t bad, and as endpoints actually work against the overly addictive “just one more turn” sort of thing they gripe about. If you constantly find yourself pushing yourself longer than you’d like to finish endpoints, maybe the problem isn’t having those endpoints, but that there aren’t enough of them to give you that sense of satisfaction without having to play for so long, and so adding some of them might, in the long run, be better for you and the fun you get from games.

Still Further Thoughts on Cheers (End Season 8)

November 6, 2017

Cheers got much, much better after Diane left.

A big part of this is because of what I touched on at the end of Season 5: the Sam and Diane relationship overly dominated the show and wasn’t all that interesting to start with. As the show went along, the secondary characters became more and more important and also more and more interesting. With the Sam and Diane romance out of the way, there was more time available to explore these characters and tell stories featuring them. So we could focus on Frasier and Lilith getting married and having a child, on Cliff’s mother moving to Florida and him finding romance with someone as mail-focused as he was, on Woody and his relationship with the absolutely spoiled sweet Kelly, and a bit more on Carla, taking her from the snarky tramp to someone with a bit more depth. About the only secondary character who doesn’t really get a chance to shine is Norm, but his big story arcs — his love for his wife and the details of his career — were pretty much settled before this. As he strikes up an early friendship with Rebecca, he turns into the perfect supporting character for all the storylines, as he’s pretty much the only character that everyone in the bar gets along with. Even his painting business is best used to set up storylines for other characters.

With the big romance out of the way, the relationship between Rebecca and Sam can take a back seat to the other stories. Sam is indeed trying to pursue her, but it isn’t the main relationship drama of the show. This means that while it can be the main plot of an episode, it can also be a B-plot or even merely a complication. Even the triangle with Robin is less one of actual love and more one of mutual jealousy. So this allows the plots to have more elements, because they don’t have to play out the typical “atypical” dramatic romance plot. I actually really like Rebecca’s ploy when Sam finally corners her: agree, but refuse to participate. Not only does this show cleverness on her part, it also reveals that Sam doesn’t just want sex, but instead wants eager participation. If she’s just going to be passive about it, he’s not interested. This actually expands his character from the simple lothario to someone with a bit more depth. This is also revealed at one point when Rebecca is devastated: he starts to make a move on her and, when she somewhat responds, he decides not to take advantage of her, leaves, and calls her from the lobby so that he can still support her without risking seducing her. When Rebecca insists that it wouldn’t happen, he asks her to check her bra, and when she does she asks “How did you do that?”, not knowing how he could do whatever it was while they were simply hugging on the sofa, which a) keeps his “ladies’ man” image alive, b) shows that he’s right not to want to stay because he’d probably seduce her, and c) is actually funny.

I also liked how when they first met Rebecca gives a big smile of attraction and interest and only turns cold after he tries to hit on her.

As the series goes along, Rebecca becomes more of a loser. Up until Season 9, she hadn’t lost all of her competence and strength, but more and more she was screwing things up and had a disastrous personal life. I don’t mind the disastrous personal life, but think that her incompetence works better when it’s played up as book smarts vs street smarts rather than her just being, as she herself puts it, too dumb to live.

Speaking of that, at one point I was washing dishes and wondering what Robin Colcord saw in her that made him interested in her, and then remembered the ending of that storyline — which I hadn’t seen the episode of yet — and noted what, indeed, he was after. But at the start of Season 9, he seemingly really does love her. We’ll have to see how that plays out (I already know the ending, but want to see how it gets there).

Ultimately, at this point Cheers is actually entertaining most of the time.

Can we trust science?

November 3, 2017

So, you’ve been given antibiotics. I’m pretty sure everyone knows the old rule: always take all of the antibiotics, because if you don’t then you might increase bacterial resistance to antibiotics, which would be bad. This was a firm rule for as long as I can remember, and so pretty much for as long as I’ve been alive. If we knew anything, we knew that taking all of the antibiotics in your prescription was the right thing to do.

Or, perhaps not.

I actually came across this earlier this week while waiting for a lunch order and watching the restaurant’s TV. It was on a talk show where they were talking about headaches: the expert mentioned that for some headaches antibiotics might be required, the hostess repeated the line about always finishing them, and he rather awkwardly replied that, yeah, that might not be true anymore. She was flabbergasted, as was I. This seemed so certain. We were always told that this was the way things were supposed to work. And now it might not be? Really?

What’s next? Smoking doesn’t actually cause cancer? (What a monumentally chaotic situation that would be, eh?)

And medical science tends to be fraught with such examples. Recommended diets, for example, change frequently as new things are discovered. Are chocolate, alcohol, and eggs good, or bad? Is fat good, bad, or indifferent? How much should you exercise? Can you exercise in small amounts, or do you need to do longer sessions to get any benefit? And so on and so forth.

And don’t even get me started on Psychology.

Now, both of those fields at least have the excuse that they are trying to use the perfect, third-person-oriented scientific method on situations that are far more chaotic and personal than normal. Maybe all they really need to do is stop trying to universalize these principles, turn the more common ones into recommendations, and add more ways to help people determine what works for them. I even have a couple of examples of this from personal experience. In a Psychology class I was taking, the old “constant review” rule was mentioned. The problem is that constantly reviewing bores the heck out of me, and can actually make my retention worse because I stop paying attention. You know what surprisingly does work for me? Writing everything down, even if I have notes or slides to look at; saying it to myself in my head seems to help me remember things even if I don’t really study or review until the end. The other example is the common “graze” advice, where you eat small meals when you’re hungry instead of having a couple of big meals. The problem for me, as I constantly tell people, is that if I tried that I’d either eat all the time or not at all, depending on what I’m doing. If I’m mentally engaged in something and not thinking about food, then I won’t even notice I’m hungry (Star Wars: Rebellion, I’m looking at you here). But if I’m sitting around just reading or watching TV, then I get bored and so at least get more inclined to eat something. So for me the best model is to have scheduled meals and even plans for what I’m going to eat. But both of these cases are ones where for others — and maybe even for most others — the advice would work. If I blindly followed it, it wouldn’t work for me, but that doesn’t mean that it doesn’t work for other people.

But are these fields exceptions, or is science not really trustworthy?

Before I get into this, I should probably fire off this disclaimer. My first degree is actually a science degree: a Bachelor of Computer Science, which was under the Faculty of Science at my university. I took an Astrophysics course as an elective. The reason I don’t follow a former co-worker’s advice and do a Physics degree is that I can’t handle the math, not that I hate the science. I’m not an expert scientist, but I don’t dismiss any scientific discovery without at least having reasons to do so (like finding potential confounds). I oppose scientism, but don’t oppose science itself. And my main approach to clashes between science and religion or philosophy is to conform the religion or philosophy to the scientific facts (whatever they are). So I’m not some kind of anti-science crusader trying to weaken science to bolster my non-scientific claims.

And so, let me ask: should we trust a field that is most famous for getting things wrong?

There aren’t a lot of theories in the history of science that survived unaltered, and a large number of them were, in fact, overturned. As we have seen and are seeing, a lot of these upheavals have happened to theories that were considered rock solid for ages. Newtonian physics, for example, was found to be wanting, predicting the wrong things at certain scales, and so had to at a minimum be supplemented by relativistic physics (it’s a major bone of contention to say that relativity replaced Newtonian physics, but the more I think about it the more I think that it did, because the only things that really were saved were the ones based on precise empirical measurements, and a theory that only explains what you can measure isn’t much of a scientific theory, but I digress). Depending on what you count as science in history — and even scientists and scientismists are inconsistent about this, claiming ancient philosophers while dismissing some medieval figures who actually claimed to be doing science — you can point to radical changes being prevalent in pretty much every field. In fact, radical change is more the norm in scientific history than long-standing theories that never changed.

And in fact even one of the biggest examples of science vs religion was caused by a change in the science. Natural theologians adopted the design theory based on the mechanistic view that science was promoting at the time, only to have that base cut out from under them when science decided that, no, the world wasn’t that way and that evolution was the way to go. Science’s move here also caught out Immanuel Kant, as many will criticize him for assuming that Newtonian physics was settled while discussing the phenomenal world and so “getting that wrong”, despite the facts that a) he was just saying what science thought at the time, b) he wasn’t making an argument that his philosophy implied or insisted on it, c) his philosophy didn’t really require that to be the case, and d) his most important point there was that science was the method to figure out the phenomenal world, because that world was empirical.

Science, then, has changed its views pretty frequently throughout its history, and yet has rolled along, in general touting that it finds and corrects its mistakes. However, any other field that relies on the current understanding of science and tries to build on it very much risks science undercutting it later, and then having scientismists chortle about how those fields would be so much more accurate if they just did things the way science does.

One wonders whether anyone should, in fact, rely on science for anything important at all, or instead just rely on what seems best to them given all they know.

The problem is that there are three main aspects to science. The first is strictly empirical: taking measurements of the world and tossing them into equations that capture those measurements. These are, in general, pretty accurate, but mostly meaningless. Science can pretty much measure, for example, what speed something will fall at if you drop it from various heights, and even write equations that let it predict the speeds for heights it hasn’t directly measured, but that’s not all that impressive. The second is the explanation for why that happens, which starts to get into various theories. These are more speculative, but can be not too bad when the situations are controlled and the theories add the caveat that they hold given that the situations stay the same and nothing has changed. The third is the inductive step, where the theories try to generalize to more and more situations that we haven’t measured and can’t measure. It’s this step that causes the most problems, because the predictions depend on the reasoning being correct and on the situations not varying in odd ways that no one thought of when the theory was devised.
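
To make that falling-body example concrete (this is just standard kinematics, not anything from a particular study), the purely empirical step looks something like:

    \[ d = \tfrac{1}{2} g t^{2} \qquad\Longrightarrow\qquad t = \sqrt{2d/g} \]

Timing a 4.9 m drop at about one second pins g at roughly 9.8 m/s², and the equation then predicts that a 19.6 m drop takes about two seconds, a height never directly measured. The equation neatly summarizes and extends the measurements, but says nothing about why things fall.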

So the first is what science can do really well, but it’s the least interesting, while the last is the most interesting, but the most risky. Science is going to have to correct the first the least and the last the most. But basing anything interesting on science will require you to use the more interesting results, which are the ones most likely to be incorrect. This even applies to simple life choices based on medicine or psychology. Sure, you can trust the doctor when he says that if you have this condition, taking this medicine will generally cure it, because there isn’t that much variance in people or in that condition and it has been tried millions of times. You might not be able to trust him when he says that taking cholesterol medication will reduce your chances of a heart attack, because there are all sorts of other factors involved: risk factors, your reaction to the medication, whether you can improve your diet and exercise, and so on and so forth.

Maybe what we need to do, really, is be more careful about examining which of these three cases a purported scientific result falls into. Scientists — and, more often, the popular science media — tend to present every scientific result as if they were all equally “supported” by the weight of science, but that isn’t true. Yes, scientists are generally better at noting in their papers how certain they are, but if they find something really cool they generally emphasize the “coolness” and barely mention the “preliminary” parts, because they want the recognition and want to get money to keep looking at the cool things. Being more careful about this would certainly help.

That wouldn’t do a thing for cases of long-standing theories that suddenly get overturned, however. But perhaps the problem is that scientists don’t do enough philosophy. Philosophers are famous for pointing out “Your theory doesn’t have to be true, because X could be the case”. I don’t think science should go full-on skeptical like philosophy does, but I do note that a lot of the problems science faces tend to be ones that philosophy would, in general, point out. Potential confounds. Theories strongly overreaching the data. Logic that isn’t actually valid. And so on. I myself have read scientific and psychological works and found obvious potential confounds. It might be a good idea for scientists to take more philosophy — it generally isn’t required for them to take any — or to have a field like Philosophy of Science produce more philosophers whose main role is to look at scientific theories, find all the places where the logic isn’t working, and advise on new experiments to run or new data to gather.

Perhaps that could be a new career for me! If I could handle the math, that is …

So now what?

November 2, 2017

So, with the Houston Astros having won the World Series in seven games, baseball is completely finished for the season, which leaves me with one really important question:

What am I going to watch on weekends and some weekday afternoons, while eating or reading or playing a game, between now and April?

Crap, It Succeeded …

November 1, 2017

So, at work I am quite busy working on a feature where, essentially, I’m taking an operation that used to be one step and introducing a break into it: you can do the first step — which is most of the work a user might want to do — and then finish it later. This means that there are two major parts to the feature. The first is doing that first step so that everything is stored and there when we want to finish it off. The second is the operation that completes things in precisely the same way as the original one-step operation did. Thus, a lot of testing has to be done to ensure that the end result of my two-step operation is exactly the same as the end result of the one-step operation. Since there are a ton of different combinations, this is something I need a lot of help from QA to do.
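
To make the shape of the problem concrete, here is a minimal, made-up sketch in Python (none of these names come from the actual feature, and the real operations are far more involved). The key invariant is that the two-step path has to end up exactly where the one-step path does, including failing in all the same cases:

    from dataclasses import dataclass

    @dataclass
    class Pending:
        payload: dict

    # Stand-in for wherever the real feature stores the half-done work.
    _pending_store: dict[int, Pending] = {}
    _next_id = 0

    def _validate(payload: dict) -> None:
        # Both paths must share the same checks; if the two-step path
        # skips one, it can "succeed" where the one-step path fails.
        if "name" not in payload:
            raise ValueError("missing name")

    def one_step(payload: dict) -> dict:
        # The original operation: everything happens at once.
        _validate(payload)
        return {"name": payload["name"], "status": "complete"}

    def start(payload: dict) -> int:
        # Step one: most of the user's work, stored for later.
        global _next_id
        _validate(payload)
        _next_id += 1
        _pending_store[_next_id] = Pending(payload)
        return _next_id

    def finish(pending_id: int) -> dict:
        # Step two: must complete things exactly as one_step would have.
        payload = _pending_store.pop(pending_id).payload
        return {"name": payload["name"], "status": "complete"}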

It also means that I can get into an interesting situation, which happened over the weekend. One specific scenario was failing, so I was working through the code and fixing up all the places where it failed. After I did that, it completely succeeded! But I had to check that it did all the same things as the one-step operation, and things were looking a little funny, so I tried to create the same thing using the one-step process … and it failed. After making sure that what I was doing wasn’t screwing something up, I spent the next day trying to figure out where my code was going wrong and succeeding when it should have failed. I finally managed to successfully get it to fail, and thus knew that my code was at least closer to being correct.
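
Continuing the toy sketch above, the kind of check involved (hypothetical code, not the actual tests) runs every scenario through both paths and demands identical outcomes; a scenario where the two-step path succeeds while the one-step path fails is itself a bug:

    def outcome(fn, *args):
        # Capture success or failure uniformly, since a matching
        # failure counts as the two paths agreeing.
        try:
            return ("ok", fn(*args))
        except Exception as e:
            return ("error", type(e).__name__)

    def check_parity(payload: dict) -> None:
        single = outcome(one_step, payload)
        double = outcome(lambda p: finish(start(p)), payload)
        assert single == double, f"paths diverged: {single} vs {double}"

    check_parity({"name": "widget"})  # both succeed, identically
    check_parity({})                  # both must *fail*, identically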

This is the second time on this feature that I’ve had something succeed when it should have failed, and so was incorrect. The other time, it seemed to work — meaning fail or succeed appropriately — for the QA people, so I wrote it off as something odd with my set-up. But it’s one of those odd cases where a success is really a failure and a failure would really be a success.

Of course, all error cases are like that. But this wasn’t supposed to be an error case. It just happened to be a failure case due to misconfiguration. And that always leads to that odd feeling of “Damn, it worked, so I did something wrong!”