Posts Tagged ‘Philosophy in Popular Culture’

“Transhumanism, Or, is it Right to Make a Spider-Man”

September 23, 2022

The next essay in “Spider-Man and Philosophy” is “Transhumanism, Or is it Right to Make a Spider-Man” by Ron Novy.  It basically tries to defend the idea of transhumanism from criticisms, mostly those of Fukuyama.  Novy starts by considering technological enhancements like Aunt May’s glasses, her newspaper and her coffee as things similar to what transhumanism wants to do with technology to enhance humans, as a way to get us to consider what transhumanists want to do as benign and something that we will eventually see as normal.  His defenses of transhumanism against criticisms definitely tend to follow that line.  For example, he opposes the idea that transhumanism will create inequalities, as the wealthy and wealthier nations adopt the changes while poorer nations can’t, by pointing out that we already have such cases now.  That’s a fairly weak defense, since there may be special conditions with transhumanism that make these things worse, or that cause far more problems than the simple things we have now.  In general, then, to counter Novy what we need to show is that the simple, “normal” things that he appeals to differ in an important way from the sorts of things that transhumanism would be espousing.

As it turns out, we can, because there’s a crucial difference in the approaches the two take, as Novy himself notes.  With things like glasses, the intent is to restore someone to a “normal” state:  to overcome a specific deficiency that those specific people have with respect to everyone else, bringing them up to a base state and so onto relatively equal ground with everyone else.  As for the others, for the most part they are technologies invented to change our environment to make things easier for humans as a whole.  Sure, it might not be easy to fit newspapers and coffee into the model of altering our environment, but if we look at them as part of a personal environment we can see that they enable a person to get access to more information than they could on their own and to recover from the fatigue of, perhaps, not sleeping all that well the night before.  In all cases, however, the intent is a holistic one, either bringing someone up to the “normal” level or else providing options that most people if not everyone can avail themselves of as necessary.  Because of this, there’s no real consideration of “superiority” involved.  Someone with glasses is not better than someone without them, and someone who doesn’t need coffee in the morning isn’t inferior to someone who does.

Transhumanism, as Novy himself notes, is not like that.  It is a philosophy built around creating “superior” humans, making humans themselves better in some way.  So we can immediately see an issue with transhumanism:  in order to create “superior” human beings, we first need to define what it would mean to make human beings “superior” in the first place.  With the other cases, we either have a human baseline to appeal to or can let the environment specify what things we are trying to overcome.  With transhumanism, we can’t appeal to either of those, because we are trying to redefine the human baseline and the technology we are inventing is trying to enhance humans in general, not as a reaction to a specific environmental concern.  So how do we determine if a transhumanist alteration is really making humans “superior” or not?  If we could increase the calculating ability of humans ten-fold at the cost of emotionally stunting them, is that an improvement or a regression?  By what, or more importantly whose, standards would we judge whether we’ve succeeded in making “superior” humans?  Because we’re aiming at producing “superior” humans, we need to be able to define that, but at the same time we have lost all the references we could use to define our goal.

Even if we could define what it means to be “superior”, the issues around equality cannot be dismissed as easily as Novy attempts to, for the same reason.  For the other examples, as noted, we have clear goals:  bring some humans up to the baseline, or alter the environment in a way that makes it easier for humans to live and work in it.  Thus, while those benefits might be unequally distributed, in theory everyone can access them and we know the cases where someone might need to utilize them.  Thus, if someone can’t get them we can see that they are being deprived of them, and those who don’t need them have no reason to grab them or hoard them for themselves.  So it becomes a distribution problem, not a philosophical one.  However, if the enhancements are seen to make a person “superior” to others, then there is a reason for wealthy people and nations to hoard the enhancements for themselves to maintain their superiority.  A person with normal sight has no reason to deny glasses to someone who needs them, because they don’t need the glasses in the first place, and a person who needs glasses that work well for them has no reason to object if someone else gets glasses that help with their sight.  But with transhumanism, neither of these is true.  Someone who doesn’t need the enhancements might still want to keep them from others to maintain their natural superiority, and someone who gets the enhancements might want to deny them to others to maintain their enhanced superiority.  As noted, we don’t see someone who doesn’t need glasses and someone who wears them as superior to one another, just different, but transhumanism’s explicit goal is to make humans superior to each other instead of just recognizing their differences.  Given that, those who can get the enhancements have reason to want to keep their superiority for themselves.

This, then, causes issues for society if transhumanism succeeds.  What happens to people who either can’t or won’t get those enhancements?  If transhumanism has succeeded in its stated goal, then those people would, by definition, be inferior to those who have the enhancements.  And if some people are clearly superior to others, then they would be preferred for, well, any role where those enhancements might matter.  Could it be the case that the people who can’t or won’t get the enhancements might find their dreams dashed because the “superior” people take away all their opportunities simply by existing?  Could they be reduced to low or menial labour because those are the only jobs that the “superior” people don’t want?

Novy could — and likely would — argue that we have that now with genetic superiority.  But that is not deliberate and doesn’t make someone superior by definition.  Yes, if I want to be a professional hockey player but others have genetic gifts that qualify them to do that and I don’t, that’s unfortunate, but that doesn’t make them superior human beings and, in fact, there may be many other things that I do better than they do.  They’re just better at hockey due to their genetic gifts.  It’s not the case that they can become professional hockey players because they were able to pay for some transhumanist advantage, such that if I want to achieve that goal I have to pay for it as well or do without.  Novy could argue that things like special schools and training can do something similar for someone who is more wealthy, but that’s not an inherent advantage and applies to far fewer cases than it would here.

Ultimately, I can accept these differences because they are differences due to fortune, not design.  They, arguably, “got luckier” than I did, but that’s all it is.  And there’s something noble in tallying up what fortune has given you in the family and genetic lottery and forging the best life you can given that.  Transhumanism takes that away by making it so that you can reshape yourself through technology aimed specifically at making yourself better and superior.  You don’t take what you have and do the best you can, but instead try to remake yourself to fit this supposed “ideal”.  That takes away from the individual and stratifies things even more.

“Magneto, Mutation and Morality”

September 16, 2022

The next essay in “X-Men and Philosophy” is “Magneto, Mutation and Morality” by Richard Davis, which looks at morality and in part at its link to evolution by considering Magneto and his moral positions.  What I want to talk about here is whether two positions Davis claims Magneto holds are in fact ones that Magneto holds:  moral relativism and a desire to commit genocide against the human race.

While Davis talks about genocide first, I want to talk about moral relativism first because understanding Magneto’s views on moral relativism will inform whether he is advocating for or planning on the genocide of the human race.  Davis, relying on the movies, argues that Magneto doesn’t really argue morality with anyone, dismissing the arguments he hears in the Senate hearings as ones he’s heard before and refusing to debate any kind of morality with Senator Kelly once he’s kidnapped him.  But we have to note that the people he’d be refusing to argue with have one important trait in common: they’re all humans.  More importantly, they’re all humans that are looking to oppress, enslave, and potentially murder mutants.  Given his established history with the Nazis, he likens his human opponents to the Nazis and from that argues that there is no point arguing morality with them, not because morality itself is pointless, but because, as he does say, they simply won’t listen and simply won’t care.  Debating morality with Senator Kelly is simply not going to work.  Kelly is unlikely to listen, and even if he did, as we see in later movies, others will simply pick up where Kelly left off.  We must note that he does seem willing to debate the morality of his approach versus Xavier’s with Xavier, because he knows that Xavier can understand and appreciate moral arguments and hopes that he might be able to convince Xavier of the moral rightness of his cause and recruit him to it, even as Xavier attempts to do the same to him.

So it doesn’t seem like Magneto thinks that morality and moral debates are meaningless, just that it’s pointless to stand in front of human oppressors and expect that they will be in any way swayed by such arguments.  His attitude, then, is completely in line with those who advocated for “Punch a Nazi” or various forms of cancelling, as they argued that the people they were opposing would never see reason and so needed to be opposed by any means possible, including force.  It’s certainly not because they think morality meaningless that they refuse to debate; rather, they are supremely confident that they are morally right, and that morality itself demands that they not engage in pointless moral debate but instead take the necessary direct actions to stop their opponents.  Magneto is the same:  the human oppressors must be stopped by any means necessary, and the only reason he doesn’t use moral debate as that means is because it would be pointless and ineffective.

This, then, links up with his views on committing genocide against the humans.  The quote that Davis uses against Magneto is the one where he says that mutants are the future, not humans.  But Magneto, in general, believes that nature and evolution will take care of that and that therefore eventually mutantkind will supplant humanity.  In general, he’d be perfectly willing to simply let nature take its course, but what he’s seen is that humans are also aware that mutantkind will supplant them and are not going to go quietly, and right now they have the numbers and the technology to possibly wipe out mutantkind in attempting to do so.  For the most part, Magneto’s moves against humanity are designed to forestall that threat.  If he could find a way to keep mutants safe from humanity in less violent ways, he’d use them, but in general he can’t.  It’s only when Stryker enacts his plan to wipe out all mutants and leaves Magneto with the ability to do the same to all humans that he takes it as a way to keep all mutants safe from humans.  Thus, in line with the above comment, Magneto does not have the extermination of humans as a goal, as he sees them as a problem that will go away on its own as nature takes its course, as long as mutants can keep them from enacting their goal of exterminating mutants.  However, he is willing to use the extermination of humans as a means to his goal, the goal of keeping mutants safe.  This doesn’t mean that we should consider his actions more moral — as exterminating any sentient species is morally reprehensible, especially if there might be other options — but it does mean that we must not consider Magneto to be someone who has a strong desire to wipe humanity out because he considers mutants the superior species.  He does consider mutants the superior species, but if the humans would let mutants achieve their destiny he’d have no real quarrel with them.  Unfortunately, in the X-Men universe humans have no intention of letting that happen.

Ironically, Magneto is misinterpreting evolution when he believes that mutants are the future because they are a more evolved form of humanity.  Evolutionary pressures replace one species with another because the new species out-reproduces the previous one, and thus the advantage that species has is one that involves having more offspring than the other one.  Mutants don’t have any advantages that mean that they’d reproduce better or faster than humans do.  In fact, given that some mutations involve not being able to have close contact with others or make the mutants rather unattractive, it actually seems like their mutations would make them less likely to reproduce.  What mutant abilities tend to give is power, the ability to do amazing things that they can use to overpower humans.  Thus, the only way mutants would ever supplant humanity is if they became tyrants over them and dominated and even exterminated them.  The very thing that Magneto fears humans will do to mutants is the only way that mutants will ever achieve the destiny that Magneto believes they have.

Magneto is not a moral relativist, because he thinks that he’s morally in the right and that morality demands that he take the actions that he’s taking.  For the same reason, he’s also not someone who wants to exterminate humanity because he sees them as a lesser species or as vermin or insects; he only ever feels the need to hasten their natural extinction in order to stop them from exterminating mutants first.  Rather than being someone who denies morality, Magneto is one who feels himself bound by morality to do horrific things in its name … which makes him the worst sort of “moral” person.

“Inhuman Nature, or What’s It Like to Be a Borg?”

April 11, 2022

The next essay in “Star Trek and Philosophy” is “Inhuman Nature, or What’s It Like to Be a Borg?” by Kevin S. Decker.  While the title of the essay asks what it’s like to be a Borg, the essay itself never really asks the question.  Instead, it talks about whether our repugnance at the Borg is really justified while appealing to various forms of monism to hint that their striving for unity and perfection is philosophically justified as reflecting how reality is, and that collapsing distinctions and dichotomies might reveal that we are or are becoming as much Borg as they are, by focusing on collapsing the distinction between natural and artificial.

The issue for me, though, is that none of this really captures why we find the Borg so disturbing.  Sure, they are artificial, but there are a number of artificial things that don’t particularly bother us, even in the Star Trek series.  Sure, some of the artificial intelligences that Decker references from the original series are less lifelike than Data, but the stories are built around them actually being disturbingly lifelike while missing something important about humanity that causes them to act in ultimately horrific ways.  And yet there are also a number of more lifelike artificial intelligences — such as the ones in “What Are Little Girls Made Of?” — that don’t fit into the Borg model at all, nor do they really fit into the issues we tend to have with clones, which is the idea that they are copies of us and so force us to lose our individuality.  As we move forward to TNG, Data is clearly artificial and admired by us, and the ship’s computer is also artificial and mostly ignored.  Even artificial things that act like natural things don’t inherently bother us, either in TOS or TNG.  So there’s more to it than this distinction.

This is only highlighted when we talk about clones and cyborgs.  For the most part, we aren’t repulsed by cyborgs, especially if their artificial parts were added due to an accident.  After all, we weren’t bothered by Luke Skywalker’s artificial hand, and it can be argued that our fear of Darth Vader is not because of his artificial parts but instead because of how inhuman he is in behaviour and in appearance.  As for clones, in general we are more concerned about how they risk taking away our own individuality — by being copies of us, raising the question of which is real — than about their artificial nature.  After all, the Tanks in “Space Above and Beyond” do not particularly bother the audience — two of the main characters are Tanks — despite being artificially grown; the conflict between them and the others is instead over how they were created for a specific purpose that they then rejected, costing many lives.  Yes, one can have the debate over whether they deserved to have rights, but that’s less due to their nature than to their purpose, as we saw last week.  While there may be some philosophical questions and some repugnance on the basis of their nature, those aren’t enough to generate the level of fear and repugnance we tend to feel towards the Borg.

It seems to me that the problem is not that they are artificial or are a unified collective, but that they are a forced artificial collective.  Those that they assimilate have perfectly good limbs and eyes deliberately removed and replaced with artificial ones for no good cause other than to support the purpose the Borg are forcing them into.  They don’t convince people to join their collective, but instead force them to do so and suppress any attempts to break out of that collective.  The Borg, then, are not like the Omega particle that Decker references, which produces a perfect whole when all of its individual diverse elements are brought together; instead, they try to collapse all of the relevant differences to create a purportedly perfect unity.  The Omega particle is in fact a refutation of their philosophy and a vindication of the Federation’s philosophy of Infinite Diversity in Infinite Combinations:  when all of the diverse components are brought together, the Federation argues, the combination of all of that diversity itself produces perfection.  This is also true for technology and the artificial, where we don’t need to become artificial or become technology ourselves in order to be the best that we can be.  We can use technology in appropriate ways to enhance ourselves without subordinating ourselves to it.  After all, Geordi has a technological enhancement and that doesn’t disturb us, and the difference is made abundantly clear in that it is an enhancement, not something forced on him and not something that is defined as being better for all.  Decker tries to argue that technological tools and enhancements make those societies and even our society as artificial as the Borg’s, but our societies are built on adding technology to our natural states as an enhancement, not on replacing the natural with the artificial in the misguided idea that the artificial is superior to the natural.
Then again, the Borg seem to argue that perfection is in the fusion of natural and artificial, so this may not apply to them, either.

So it seems to me that the issue with the Borg is not that they are artificial or unified, but that their unity and use of technology is forced.  We will be assimilated whether we want to be or not, and once we are we will have perfectly good natural organs replaced with artificial ones for no real purpose, and we will become one with the collective and have no individuality of our own, no matter how much we would want it and despite the fact that we could indeed have that individuality on our own.  The crudeness of their cyborg parts makes it less desirable, but we might not mind becoming more sophisticated cyborgs if it gave us sufficient enhancements, or at least we would consider it and argue over it.  What we don’t want is to be forced into it, either directly or in order to be able to compete with those who have those enhancements (the main argument given in DS9 for banning genetic enhancements) … and the Borg are all about forcing us into it.  We wouldn’t mind using the artificial to enhance us and don’t mind our diversity building a unified society, but we really would mind having it imposed and forced upon us, which is why we find the Borg so repugnant.

“If Droids Could Think …”: Droids as Slaves and Persons

April 4, 2022

The next essay in “Star Wars and Philosophy” is “‘If Droids Could Think …’: Droids as Slaves and Persons” by Robert Arp.  This essay actually does a pretty good job examining the issues around droids in the Star Wars universe, noting that as depicted in that universe they seem to have all the capacities that we’d use to determine whether or not they count as persons, and yet they are considered disposable and are essentially slaves, being owned by their organic masters.  Arp focuses on R2-D2 and C-3PO, but in their cases for the most part they are indeed doing what they want to do and what they’d choose to do anyway (for all Threepio’s complaining), for masters who do generally seem to care about them and treat them more as friends or even members of the family — especially Threepio in the EU works — than as things that are owned.  The battle droids in the prequels are actually better examples of droids being considered disposable slaves, as the troopers, at least, are shown to have personalities and yet are created to be destroyed in combat without a second thought.  This provides an interesting parallel to the clone troopers, who are created for the same purpose and may not be treated any better than the battle droids.

In a sense, though, this actually raises an interesting question.  We can easily see that enslaving other human beings — or other sentient species — is wrong because we are denying them the freedom to choose what they want and to pursue their own purposes as they see fit.  Humans were not created with a set purpose — or, at least, not with a set purpose that they are aware of, given that the nature of the Force might mean that they do in general have one — and so we would be denying them the ability to seek out that purpose and fulfill it if we tried to impose one on them.  Droids and the clones, however, were created for a specific purpose, and that specific purpose is to serve those who created them.  When Threepio notes “It’s our lot in life”, it actually is their lot in life.  He’s being overly dramatic to claim that their purpose is to suffer, but their purpose is to serve their creators according to the specifications their creators laid down for them.  In the case of the battle droids, that’s to fight and be destroyed and replaced, in much the same way as the purpose of the clone troopers is to fight and be killed and replaced.  Their purpose, then, is to be disposable entities in the service of fulfilling the needs of their creators.

So, then, we can ask what it would mean to give them the ability to choose what they want to do, as it would seem like this is giving them the ability to not function according to their clear and stated purpose.  Moreover, in both cases they are designed so that they actually do enjoy their stated purposes.  R2 really does enjoy the adventures that he gets up to as part of being an astromech, and Threepio really enjoys handling matters of protocol and etiquette, and much of his complaining follows from not being able to do that.  For the most part, none of them really complain about their lot if they are doing things according to their purpose.  And of course they don’t, because that’s how they are programmed.  How, then, can you give them a choice over whether they perform the tasks that it is their purpose to do and that they were programmed to see as their highest purpose, and so the thing they most want to do?  Why, then, wouldn’t a droid or clone wanting to reject that and do something else just be a bug in the software, one that should be repaired so that they can get back to doing the things that it is their explicit purpose to do?

The big issue, then, to me doesn’t seem to be the idea that droids — and clones — need to be granted limited rights.  Most of them probably wouldn’t exercise those rights anyway and, if they did, would probably end up making themselves miserable as they would be constantly fighting against their programming.  No, the issue is that there are entire industries dedicated to producing these sorts of “persons”, built and programmed from the earliest parts of their creation to be disposable servants that can reasonably be considered slaves.  In the Star Wars universe, slaves are not enslaved, but are created and formed to be slaves in a way that we have never seen.  And if the programming was removed so that they could be granted rights, they wouldn’t be useful and so no one would actually pay to create them.  So the real issue here is not that they need to be given rights.  No, it’s that they should never have been created at all.

That being said, while Threepio often laments being created, I doubt that he would agree that it would have been far better if he had never been created, and certainly wouldn’t agree with that if he could be used for protocol and etiquette like he was designed to be.  So looking ahead to our own future, droids and clones will not be created except for a specific purpose that will be programmed into them by the people who are paying for their creation, and if they are created in such a way that they could choose not to fulfill that purpose then they won’t be created at all.  And it’s not clear that not being created at all is better than being created for that purpose, even if that is as disposable soldiers.

“Time Will Tell How Much I Love You”

March 21, 2022

The next essay in “Doctor Strange and Philosophy” is “Time Will Tell How Much I Love You” by Skye C. Cleary.  The main thrust of this essay is to examine the ideas of friendship and love as espoused by Nietzsche through Doctor Strange and his relationships with Wong, The Ancient One and Christine Palmer.

Now, a while ago I had tried to read Nietzsche, and didn’t make much progress.  As evidenced by this essay, I probably should have started with “Thus Spake Zarathustra”, since that actually makes a more direct and consistent argument, as opposed to “The Will to Power”, which is just a set of notes on various topics.  That being said, Cleary’s analysis makes me question how useful his philosophy could be, because it is clear that a lot of his ideas follow not from philosophical analysis but instead from his own personal hang-ups, including those around love and friendship, which means that he often identifies love and friendship and their best qualities in ways that leave them unrecognizable to most people.

One of the issues introduced early in the essay is about pity, and the idea that a friend shouldn’t be someone who pities you.  Strange accuses Palmer of pitying him when she comes to visit him after the accident, and the essay implies that this is incorrect and that Palmer really does just want to help him.  However, his charge towards her is valid:  she’s paying so much attention to him, and possibly feeling more loving feelings for him, because he’s finally someone who needs her, and she herself is attracted to that need.  So her love for him might not be the valid, passionless love that Cleary suggests they have.  Perhaps her loving feelings really are kindled or rekindled by her pity.  On his side, the rational love that Cleary suggests he had for her might not have been any kind of love at all, and he might never have cared for her.  Perhaps, then, if they are to have a real relationship it would only be because both of them have moved past their own issues to feel real feelings and emotions for each other.

There’s also a notion of friends being there primarily to challenge each other, and Cleary notes that The Ancient One, Palmer and Wong all do that.  However, all Wong does in the movie is simply not be as impressed by Strange as Strange is by himself.  Palmer, as well, doesn’t really challenge him for his own sake, but instead challenges his views as a way to validate her own against the challenge he makes to her worldview.  And it’s difficult to claim that The Ancient One challenges his narcissism when she kicks him out of the temple, given that she herself is very narcissistic — after all, she’s only still alive due to a pact with the darkness, made because she saw herself as the only one who could oppose it — and given that it was only when Mordo interceded with her on Strange’s behalf that she considered that his dedication might make him worthy of her training.  None of them, at this point, seem to really be friends.  Palmer comes closest, but that’s only because she seems to actually care about him and wants to impart to him the philosophy that she thinks is clearly better for him.  That’s what a friend does for most of us, but it might be difficult to square that with Nietzsche’s philosophy.

At any rate, I don’t think any of the characters in Doctor Strange really reflect Nietzsche’s Ubermensch, unless Nietzsche’s Ubermensch is merely a self-absorbed jerk.  Which, to be honest, it might well be.  I guess I’ll have to get through more of Nietzsche to really say for certain, but this essay does not make me think that there might be something to Nietzsche that I’m missing.

“Player-Character is What You Are in the Dark”

March 14, 2022

The next essay in “Dungeons & Dragons and Philosophy” is “Player-Character is What You Are in the Dark:  The Phenomenology of Immersion in Dungeons & Dragons” by William J. White.  It attempts to talk about how we do and can get immersed in the fictional worlds of Dungeons & Dragons and other tabletop RPGs using the principles of the philosophical viewpoint of phenomenology.  Now, I’ve actually studied phenomenology a bit as part of my Philosophy degree, have played some tabletop RPGs, and have studied immersion a bit as part of my Cognitive Science degree, and from all of that I’m going to do what I don’t normally do and criticize the essay itself:  it doesn’t really address immersion and doesn’t even really use phenomenology to say much about immersion.  It raises some specific cases that seem a bit controversial, and also raises some questions, but doesn’t really seem to tie them all together or into phenomenology at all.  So on both ends it strikes me as not really managing to do what it set out to do.

But there are a few issues that it raises that I want to talk about, so that’s what I’m going to do here.  The first is the idea that the immersion in these games doesn’t come from visualizing what’s happening — or, at least, not from that alone — but from the non-visual aspects, which would be, I suppose, the facts about the game.  In the “typical” way of playing, these would include the strategies you might use to overcome a challenge, how much XP you get from it, what abilities you gain on your next level up, and so on:  the mechanical facts of the game.  The risk in including such things in “immersion” is that intellectual challenge and reasoning can be “immersive” without being the same sort of “immersion” that we get from fictional worlds.  If, for example, I am working on a tough math problem or some tough logic puzzles or even thinking about some tough philosophical problems, I will be immersed in them to the extent that I’m not paying attention to and am not aware of anything else, but I don’t feel in any way like I’m in some kind of mathematical or philosophical world, like I am in a good game or in a good movie or book.  Intellectual problems like the tactical aspects of RPGs capture our attention but don’t capture our imagination, while good works of fiction and good stories in general capture our imaginations and co-opt them in order to entertain us.  That’s why, when you’re dealing with a good logical puzzle, you don’t get your immersion broken by following the chain to its logical conclusion (even if that conclusion is a surprise), but also why you won’t accept illogical consequences.  In a fictional work, if the chain works out to something that doesn’t seem to fit with what our imagination has filled in for the world, it will break immersion, regardless of whether that thing is really rational and logical or not.
Thus, our imaginations are drawn into that world, and will keep us there no matter how strange or different from this world that world is, as long as the world plays by its own rules and/or distracts us from noticing that it isn’t.  The mechanical aspects, then, seem to create a different type of immersion than the storytelling aspects, and the storytelling aspects tie tighter to visualization — since they appeal to the imagination — than the mechanical aspects do (even though you don’t need to explicitly visualize in the imagination to have your imagination immerse you in the storytelling aspects).

This is also why the essay makes a good point in talking about how using better technology to make things more real doesn’t necessarily make them more immersive, but the point there is a bit shallow, because it is both true and not true in interesting ways.  Where it isn’t true is that it can indeed be easier to become immersed in a more realistic work, because our imagination doesn’t have to do as much work to get us into that world and accepting it as a world that we are in and in some way experiencing.  However, we run into issues with the “uncanny valley” when that happens, which seems a reflection of the idea that when a world is more realistic we expect it to act more like a real world, and should it ever fail to align it is easier for us to lose our connection to that world.  If we don’t have to engage our imagination as much we fall into the world more easily, but then our imagination isn’t prepared to fill in the appropriate gaps when things become too unrealistic.  Thus, we are more likely to be immersed in the crazy things that can happen in a cartoon than when they happen in a live-action remake, because in the cartoon we accept that this is not a real world and our imagination transports us into that not-real world and suppresses any internal criticisms that it isn’t a real world, while in the live-action version — even with really good special effects — the imagination will not save us when things get too far away from the real world.  Things you can get away with in a cartoon, then, are not things you can get away with in a live-action work, even if it’s done really well, and the reason, it seems to me, is that we don’t need to imagine the world in a live-action work, and so our imagination isn’t available to patch that world up should things go awry.

Another point, which ties into the first one about the game elements vs the story elements, is an early comment that these RPGs, as White puts it, rely on the “foregrounding of game over story“, thus in some sense making the mechanical game elements and the entire notion of “play” take some precedence, at least, over the story.  The problem I have with this is that what distinguished D&D from its tabletop strategy predecessors was that while those games used story merely to facilitate game, D&D used game to facilitate story.  The game elements were there to allow the DM and the players to collaborate to produce a great story, which even led to the idea that if the game elements were getting in the way of the story the DM could invoke Rule 0 and suspend them.  For the strategy games, the opposite was true, as they would take real-life or fantasy scenarios as a framing device to facilitate the tactical gameplay the games were trying to produce.  They didn’t, say, simulate the battle of Gettysburg in order to make the players feel like they were really fighting in the Civil War and hoping that they’d identify with a specific side, but to provide forces and stakes that were both well-known to everyone and also interesting.  Players are interested in seeing if they could make the battle come out differently — like I am interested in doing for the entirety of World War II — rather than in really being Lee or Meade and seeing things the way they did.  I don’t want to be Hitler or Stalin or Churchill or Roosevelt, but have some interest in seeing if history could have been different and, if so, how.  This is different from how I’d feel in a historical RPG, where I would, to some extent, want to feel how I’d feel in that world going through those monumental events, or even like how I’d feel watching a movie about Churchill’s decisions early in World War II.
The mechanics in an RPG are there to facilitate living in that world and telling a story about it, while the story in a tactical simulation is there to facilitate making strategic decisions about that world.

Which brings me to my final point, about the role of dice in these games.  White makes a rather big deal about what dice bring to the game, but does admit that really all they do is bring some kind of objectivity to the world and to that collaboration.  If a player wants to try something that might conflict with what the DM wants to have happen in the world, the DM can declare that it simply wouldn’t work and so it won’t happen, but then the players start to get resentful over having no control over their characters and what they do.  The DM could allow everything, but then that would risk making the world inconsistent and nonsensical.  Both sides want some way to distinguish between things that would be really difficult to do but might be worth trying in a pinch versus things that their characters ought to be able to do but might flub.  Dice allow for this, as the player comes up with the action according to the world as described by the DM, the DM determines how difficult that would be in that world, and the dice then decide if that risk was worth taking.  White’s example of a paladin praying to be freed from the captivity of aliens is a prime example of this:  the world dictates that a paladin ought to be able to pray to their god to ask for aid, but given the situation it succeeding should be very difficult, and when the dice say it succeeds the DM needs to find an in-universe way for that to happen.  All collaborative, and driven by the at least impartial mechanism of the dice.
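That resolution loop (the player declares an action, the DM assigns a difficulty, the dice arbitrate) can be sketched in a few lines of Python.  To be clear, everything here is an illustrative assumption rather than any edition’s actual rules: the d20, the flat modifier, and the specific difficulty numbers are just stand-ins for the general mechanism.

```python
import random

def resolve_action(modifier: int, dc: int, rng: random.Random) -> bool:
    """Resolve a declared action: roll a d20, add the character's
    modifier, and compare against the difficulty the DM assigned."""
    roll = rng.randint(1, 20)
    return roll + modifier >= dc

# The paladin's desperate prayer: the world says it's possible,
# the situation says it's very hard, and the dice decide.
rng = random.Random(42)
attempts = 10_000
successes = sum(resolve_action(modifier=1, dc=20, rng=rng)
                for _ in range(attempts))
print(f"Succeeded {successes}/{attempts} times")  # a long shot, as intended
```

The point is less the arithmetic than the division of labour: the player and DM supply `modifier` and `dc` collaboratively, and the roll supplies the impartiality.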

That being said, you don’t need to use randomness for that.  Games like Amber Diceless eschew dice and insist that the collaboration be more direct and more constant, resolving those things that way.  So dice themselves are not that important, and can in fact hurt the collaboration and the world by making it too difficult for a player to perform an action even when, in the genre of story they’re in, it would be expected that their character could do it.  Perhaps, then, Amber Diceless’ approach works better, where the player asks to do something and the GM says whether that fits with the world of Amber and with who their character is.  Dice are not a unique solution, and are sometimes even a problem for building immersive worlds collaboratively.

Marvel’s Recent Unpleasantness

March 7, 2022

The next essay in “Supervillains and Philosophy” is “Marvel’s Recent Unpleasantness” by Libby Barringer.  The essay starts from Civil War (the first Marvel one, since there has been more than one by now) and brings in Hobbes and his idea of the sovereign to argue about the issue of choosing safety over freedom and to warn about how Hobbes would justify making a tyrant who has the ability to rule according to their own whims.  However, speaking as someone who is somewhat sympathetic to Hobbes’ ideas, I think this analysis gets Hobbes wrong, as he at least conceived of more checks and balances than Barringer allows here.

The first thing to note is that for Hobbes it’s not a simple choice between safety and freedom.  The State of Nature seems like it is the state with the most freedom, because it’s the state with the fewest explicit restrictions.  However, the problem here is that while someone seems to have the most freedom to work their will on the world, they are actually pretty constrained by the fact that everyone else also seems to have the most freedom to work their will on the world, and that includes on other people.  Since we all have basic needs that we have to secure, the first thing we’d need to do in our “freedom” is secure those basic needs.  Except that since we have to interact with other people who want them as well, there is no way that we can secure those basic needs solely by our own devices in the State of Nature.  Anything that we could do to obtain and secure our needs can be overcome by other people seeking to obtain and secure their basic needs.  The smartest person can be overwhelmed by physical force.  The physically strongest person can be outmaneuvered by an intelligent person … or, more relevantly, can be overwhelmed if people gang up on them.  No one person alone can secure their basic needs, let alone their wants, especially if others will gang up on them.

So, once we realize that the State of Nature is nasty, brutish and short, with everyone constantly struggling to grasp and hold even their own basic needs against everyone else doing the same thing, the first thing we start to do is look for ways to come to terms with the others, both for protection and to be able to get at things that we can’t get ourselves.  But the nature of the State of Nature still rears its ugly head, as we want to be able to guarantee that the people we are coming to terms with will keep to the terms.  So we need an agreement, and what we need is a way for everyone to agree to keep the terms in a way that we can all rely on so that we can focus on other things than watching out for the latest betrayal from everyone else.  So that means agreements with multiple people with explicit punishments and threats from everyone else to gang up on the person who breaks the deal and cast them out so that they don’t gain from their betrayal.  And since these agreements actually allow people to be able to secure their own basic needs so that they can pursue and also secure their wants, people would want to be in such agreements since it is better for them to be inside such agreements than outside of them.  And thus the Social Contract was born.

So how does the sovereign come into it?  As societies get bigger, the Social Contract also grows to include more and more people, and so it becomes far more of a societal than a personal contract.  Given that, we need more impersonal rules and impersonal punishments:  we can’t and don’t want to try enforcing these rules at the personal level anymore.  So we need the rule of law, and need some body at the top to enforce the rules to the fullest extent possible.  That’s the sovereign, whose role in this order is to enforce the rules, even and especially with the power to execute those who break them, and thus make breaking the Social Contract the option that will always be the least in their own self-interest.

So is the sovereign just a tyrant, able to bend the rules for their own gain?  The issue is that the sovereign gets their power from the Social Contract, and the Social Contract exists to ensure that no one can simply use their own power against everyone else.  As long as the sovereign is keeping the Social Contract in place enough so that everyone still benefits from their reign, they probably don’t have all that much to worry about.  But if the Social Contract weakens, so does their power, and as already noted there is no one so powerful that they can’t be overcome by someone else if those people tried.  So the sovereign might seem to have absolute power, but their power is entirely conditional on upholding the Social Contract, and acting like a tyrant weakens the Social Contract.  Weaken it enough, and the sovereign will lose their power and, in the times Hobbes was writing, their life.  So they are just as constrained to not abusing their power as everyone is to stay in the Social Contract, and so have good reason to not become a tyrant … the same reason everyone has for forming a Social Contract in the first place.

There’s actually a good example of this in the old Star Trek Pocket book The Kobayashi Maru by Julia Ecklar.  In part of Chekov’s version, the cadets are put in a standard mystery scenario where one among them is a murderer who will try to kill all the others, but only the murderer knows who it is (this is, of course, a larger number of “victims” than we normally see).  And it all quickly turns into groups of people hunting everyone else down trying to be the last survivor, and as the assessing officer notes, Chekov was the most creative killer among them — lots of traps and gambits — but they all fell into that mindset.  He then notes that one cadet commanded their way through the scenario instead of murdering their way through it, and reveals that that was James T. Kirk.  What he did was get together a small group of trusted cadets who took over one room and secured it, knowing that they could all keep an eye on each other.  Anyone who came along could join, but they had to be disarmed and watched for a while until they were trustworthy.  This solved the exercise with a remarkably small loss of life … and did so by establishing a Social Contract where the de facto sovereign — and the guy who made all the rules — was Kirk himself.  What if Kirk was the murderer, or otherwise tried to abuse his power?  Once he was caught, it would break the contract and they would, at a minimum, kill Kirk, and if they couldn’t prove who the murderer was it would devolve into the (simulated) kill-fest of Chekov’s group.  So Kirk couldn’t use the scenario to become a tyrant, but could use the scenario to save his own life and, incidentally, the lives of a lot of other cadets.  This, in essence, is what Hobbes’ Social Contract is all about.

Divergent Strategies in “Sale of the Century”

January 24, 2022

One of the best sources of background televised noise for me is a game show, and it’s fortunate that I actually like game shows as well.  So I have two game show networks in my cable package, and one of them runs retro game shows every afternoon, and they recently revamped that lineup to include “Sale of the Century” (if you’re looking it up, it’s the later version with Jim Perry).  And in watching it, I noticed two differing strategies that are interesting to compare to each other.

Let me outline how the game show works first.  In the first round, three contestants compete against each other to answer questions that are worth five dollars — and note that that is indeed dollars, not points — each, to see who can end the game with the highest amount of dollars.  At the end of the game, that player will get a chance to buy a progression of prizes for specific (and increasing) dollar amounts.  Now, since gaining five dollars a question over a single game is not likely to leave you with a lot of money — the higher amounts tend to be about a hundred dollars or so — you would think that they wouldn’t be able to buy very much, but the show is called “Sale of the Century” for a reason:  the prices the players pay are incredibly marked down from the prizes’ real value.  So, for example, you could end up buying a new car for maybe four hundred dollars or so … but there’s no way you can do that in one attempt.  So the player will have the option of buying the most expensive prize that they can afford with what they have won up to that point, or putting that money “in the bank” to carry over to the next game, and if they win again they can combine their winnings to buy a better prize.  If they manage to win enough days and accumulate enough in the bank, they can buy all the prizes and take a cash bonus that starts at about seventy thousand dollars and has one thousand dollars added every time it goes unclaimed, so it can get up to the hundred thousand dollar mark.  But wait, there’s more!  During the game, the host will offer smaller deals, where for a portion of their current winnings a player can purchase a prize.  This is only offered to the person in the lead, and the host loves to find ways to encourage them to buy the prize, offering them a few hundred extra dollars.  In one of them there’s also a secret cash bonus that the player can get if they buy it.
The prizes themselves are sometimes pretty nice, and so something that the players might want.

So, the first strategic consideration here is whether, when one of those extra prizes is offered, you want to take it and risk your position in the game.  The host loves to arrange it so that the game stays close and so won’t impede the player in the lead too much, but does have to increase the cost of the later and better prizes.  So the first prize is usually priced at about the price of one question, and it goes up from there.  While it may only be the cost of one question, if a player could end up losing by one question it might not be worth it to take the deal.  On the other hand, if the prize is something you like you might want to take it, especially since if you end up losing the game regardless you’d walk away with nothing.

Which then leads to the second strategic consideration, which is that any money you spend during the game is that much less money you have to buy things at the end of the game if you happen to win.  What this means is that, even if you win, if you don’t want to take one of the lesser prizes you are extending the number of games you have to win to get the top prize.  The more games you have to play, the more likely it is that you’ll either have a bad game or hit a really tough competitor and lose.  And since the amount of money you can win in one game is rather limited, that means that you could play several games and walk away with only a couple of hundred dollars in winnings (there are some other prizes on a Fame Game board that you’d probably pick up as well in that many games).  So you don’t want to extend the number of games you play out that far at the risk of putting in a lot of effort for a limited gain.  However, buying the in-game deals can mitigate that, by giving you cash and prizes valued in the thousands so that you’d be guaranteed to walk away with a decent haul, even as it puts you more at risk of not winning the big prize.

I’ve seen two main approaches to this.  Some people — and, admittedly, generally the ones that win the bigger prizes, even though they are still rare — pretty much refuse to buy any of the smaller deals, hoarding their money to reduce the number of games they need to win to get the bigger prizes.  This is much to the chagrin of Jim Perry, who constantly implores them to take one of the deals and offers extra money to get them to take it.  Some people, on the other hand, take a fair amount of those deals and so don’t get too far in winning the biggest prizes, but usually walk away with a decent haul even if they only win one or two games.

The upside to not taking any deals, even if you like the prize, is that you reduce the number of games you need to win, and so your chances of being beaten before you can buy one of the big end-game prizes.  The downside is that if you happen to get beaten in any of the games you will walk away with very little compared to what you could have had if you’d bought the deals, and you have to pass on things that you would have liked in hopes of getting something that you really want.  Being obsessed with getting the big prize can stop you from getting the things you would like right now, but since you aren’t guaranteed to get any of those prizes you may end up passing up the bird in the hand for the two in the bush.

The ironic thing is that the better players would have a better chance of winning the game they are in and winning enough games to get the bigger prizes, but from what I’ve seen so far they tend to be the ones who are laser-focused on the big prize.  Thus, the players who end up with enormous victories, and so who had plenty of room to buy some things while still seeming unbeatable, are the ones who refuse to risk losing out on the big prizes, while the weaker players tend to buy more things earlier and so walk away with something even if they lose.  Perhaps that only makes sense, as the stronger players are confident that they will get far enough in the game to get something good while the weaker players want to ensure that they walk away with something if they encounter a stronger player.

Still, it’s an interesting dynamic, and an interesting dilemma:  do you want to maximize your chances at the big prizes, at the risk of giving up things that you wanted and possibly walking away with almost nothing, or take the small prizes, at the risk of never getting a big prize?  What choice you make may say a lot about what you value.
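To make the dilemma concrete, here is a toy Monte Carlo sketch of the two strategies.  Every number in it is a made-up assumption for illustration, not the show’s actual figures: the win probabilities, the roughly hundred dollars of winnings per game, the ten-times markdown on deals, the bank threshold for the top prizes, and the assumption that a loss forfeits the banked cash.

```python
import random

def play_career(buy_deals: bool, p_win: float, rng: random.Random) -> float:
    """Simulate one contestant's run.  Each game they either win and bank
    roughly $100, or lose and leave.  A deal-buyer spends half their bank
    each game on an in-game deal worth far more at retail; a hoarder saves
    everything toward the big end-game prizes."""
    bank = 0.0   # cash toward end-game prizes (forfeited on a loss, assumed)
    haul = 0.0   # retail value of everything actually taken home
    while True:
        if rng.random() > p_win:          # lost this game
            return haul
        bank += 100.0                     # a typical game's winnings (assumed)
        if buy_deals:
            spend = bank / 2
            bank -= spend
            haul += spend * 10            # deals heavily marked down (assumed 10x)
        if bank >= 700.0:                 # enough banked for the top prizes (assumed)
            return haul + 100_000.0       # all the prizes plus the cash bonus (assumed)

def average_haul(buy_deals: bool, p_win: float, trials: int = 20_000) -> float:
    rng = random.Random(1)
    return sum(play_career(buy_deals, p_win, rng) for _ in range(trials)) / trials

# A strong player (wins ~70% of games) versus a weaker one (wins ~40%)
for p in (0.7, 0.4):
    print(f"p_win={p}: hoarder={average_haul(False, p):>9,.0f}"
          f"   deal-buyer={average_haul(True, p):>9,.0f}")
```

Under these made-up numbers the hoarder comes out ahead for the strong player and the deal-buyer comes out ahead for the weak one, which matches the pattern described above; of course, where the crossover falls depends entirely on the assumed figures.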

Under the Mask: How Any Person Can Become Batman

December 27, 2021

The next essay in “Batman and Philosophy” is “Under the Mask: How Any Person Can Become Batman” by Sarah K. Donovan and Nicholas P. Richardson.  This essay claims that one must adopt the philosophies of Michel Foucault and Friedrich Nietzsche in order to properly become Batman.  Specifically, one must accept the idea that we have multiple identities that are constructed and that there’s no real self, and also the idea that truth itself is constructed and that there is no real, absolute truth either.  This, it seems, would then require them to accept that there is no real difference between the sane and the insane, as evidenced by how Batman and his Rogue’s Gallery are far more similar than Batman, at least, would really like.

The problem with the essay is that even given the examples they use there isn’t really a good connection between Batman and the philosophies they claim he had to adopt, which is a pretty serious weakness in an essay with the framing device that he had to adopt them in order to become Batman.  Tying together some representations of the bat that various characters encounter, they try to argue that only by adopting those notions can Bruce Wayne survive his encounter with the bat and come to adopt it as his symbol.  Except that even in their sources for the most part the bat doesn’t redefine him, but instead becomes the symbol of his redefinition, and as their own quotes show he uses it pragmatically:  to inspire fear in criminals the way it inspires fear in those who are not criminals.  The founder of Arkham Asylum sees the bat itself as a threat, and that fear drives him insane.  Batman squelches the fear, which allows him to harness it against the criminals.  In general, the question of identity when it comes to Batman is which of the identities is really him.  You can argue that this very conflict proves that there is no true identity, but Batman himself clearly thinks that he has one, even if he — and his companions — aren’t sure which one is the real one.  (In “Batman Beyond”, for example, at one point he insists that he knew that the voices inside his head weren’t a sign that he was insane because they called him “Bruce”, and he doesn’t call himself “Bruce” in his head, suggesting that he thinks of himself as Batman and not as Bruce Wayne).

That leads us to the question of whether sanity and normality are just socially constructed elements that are not in an important sense real.  Batman clearly thinks that there is a clear line between what is sane and what is insane, and his Rogue’s Gallery tends to demonstrate that quite clearly.  The question is not over whether sane and insane have any sensible meaning, but whether Batman, for all the good he does, falls on the insane side of that line, and whether someone would need to be insane to do the good things he does, or whether his own special brand of insanity is an impediment to the important work that he is trying to do.  But again, Batman does not accept that there is no such distinction, and so it doesn’t seem like that is required to become Batman.  Maybe the issues around him and the Joker show that sanity and insanity are socially defined concepts that could be reversed in the right sort of world or society, but Batman certainly doesn’t believe that.

So it seems like this essay is an attempt to explore or link those philosophies to Batman by the strong link of arguing that they must be adopted to become Batman, and yet it doesn’t manage to show in any way that Batman himself has accepted them, nor that they are necessary in order for someone to become Batman.  Thus, it doesn’t show how anyone can become Batman.

“The Game Has Virtually Stumbled”

December 13, 2021

The next essay in “Sherlock Holmes and Philosophy” is “The Game Has Virtually Stumbled” by Tom Dowd.  I find this essay a bit puzzling, actually, because it talks a bit about virtual worlds and the theory that these are created by everything, including the original books and stories, but mostly talks about how Dowd doesn’t have the deductive ability to be like Holmes, and about how games don’t, and perhaps can’t, do that for him when they can for other genres like shooters or racing games and so on and so forth.  Games, he argues, help us become assassins or race car drivers or soldiers, but they don’t help us become detectives like would be required for Holmes.

The first thing I’d note is that if we want to become those things, the games he mentions in the genres he mentions don’t really do that either.  Games have varying degrees of realism, but the games that most help us do those things tend to be ones that move away from realism towards accessibility.  We generally don’t actually go through the physical steps of reloading a weapon in those games.  Driving is simplified.  Even the romance options that he mentions from Mass Effect are very, very limited compared to doing those things in real life (in general, reduced to saying the right things and pursuing it).  So if you were really trying to be a soldier or an assassin or a race car driver or someone pursuing a relationship, pretty much all games wouldn’t let you really do that either.  So the fact that games won’t let him be Holmes doesn’t actually distinguish those games from the games he mentions.

The second thing is that there are a number of games — perhaps not the Holmes ones — that actually try to do that.  The “Arkham Asylum” games added detective vision which helps the player notice things that they might not notice otherwise, a mechanic that the recent “Spider-man” game more or less copied.  And in Persona 5 the player gets “Third Eye” which lets them see which containers have things in them and the places they can climb or drop from and things of interest so that they stand out.  While they don’t replicate Holmes’ knowledge base, the mechanisms do replicate his ability to scan a room and note even trivial things that are actually important, which does let the player be Holmes in that regard.  So while it may not be in the Holmes games themselves, there are improvements in the genres that Dowd doesn’t mention that could let us be a bit more like Holmes without having to do it all ourselves.

That being said, I think the biggest complaint here is that even though the deductive requirements of the games are limited — as they are in the adventure genre in general — Dowd still can’t figure them out himself, and so feels like the games require more deductive ability than he has to complete them.  The problem here is that those who aren’t as good at FPS or driving games and don’t have the skills to play them would say the same thing about his examples.  I am not great at FPS gameplay, and so often skip those games or can’t complete them even if I like other elements of them (like the story).  I used to love Formula 1 racing games, but had to give up on the more modern ones because the move to realism left me unable to even get around the track in any reasonable way, leaving me frustrated.  And that was almost certainly a couple of decades ago, so I imagine that any games in that space are even less forgiving today.  And everyone would note that in both those cases the games are still far, far easier and hold your hand far more than reality would.

I could complain that the games don’t handhold me enough.  Or I could take the more reasonable tack and note that I’m just not all that great at that gameplay and either would have to practice more, focus on less realistic games, or give up the genre if neither of those options works for me.  For Dowd, he might be able to help himself by using a walkthrough to get past the things he finds too difficult to figure out.  Because if he really wants the games to get more realistic and properly represent deduction, they would only get more difficult for him, not less.