Posts Tagged ‘Philosophy in Popular Culture’

“Imagination and Creation: The Morality of Fiction in Dungeons & Dragons”

December 16, 2022

The next essay in “Dungeons & Dragons and Philosophy” is “Imagination and Creation: The Morality of Fiction in Dungeons & Dragons” by Robert A. Delfino and Jerome C. Hillock.  The basic premise here is an examination of whether participating in the highly imaginative but creative shared fictional world of a Dungeons & Dragons game is a good thing or a bad thing.  To that end, they appeal to the “traditionalist” views represented by Plato, Aquinas and Tolkien and to the postmodernist views represented by Nietzsche.  The main point of contention is whether it is a good thing to allow players to play as evil characters, whether in D&D, any other pen-and-paper RPG, or a video game.  The traditionalist view might seem to oppose the sort of shared world that D&D creates, especially if it allows people to play as evil characters, due to its links to Christianity and the fear that allowing people to play evil and brutal characters will lead to evil and brutal people.  It certainly would not want us to revel in and celebrate the victories of evil over good.  But the authors note that the traditionalist view will indeed allow for the shared worlds of D&D, and even for instances where players play as evil and where evil wins, as an expression of human creativity.  The traditionalists’ only concern would be to ensure that those expressions of human creativity don’t have any bad impacts on the real world.  As long as people do not mistake that world for their own and do not allow playing as evil to make them desire evil or become neutral towards or accepting of it, engaging our (to them, God-given) intellect is a good thing, not a bad thing.  The postmodern view, they claim, is one where anything goes, as they use Nietzsche to argue that we must move beyond good and evil and should not put any constraints on our creative faculties, even if we risk having their use change us in ways the traditionalists would consider terrible.

Now, while I’ve started but haven’t finished reading Nietzsche himself, this reminds me of my main objection to postmodernism:  when it says something interesting, it seems wrong, and when it says something reasonable, it’s not saying anything new.  I once wrote an essay responding to a fellow student’s comments on, I think, Foucault arguing against dichotomies, comments claiming that this was a really important result from postmodernism, by noting that analytic philosophy — which I favour — already knew that what seem like dichotomies may not be real dichotomies and so already knew to question whether they really were or not.  It didn’t need postmodernism to tell it to question dichotomies.  But if postmodernism went further and argued that there simply were no dichotomies and so they all had to be torn down and deconstructed, then it was simply wrong.  Sure, not everything is a dichotomy, but there definitely do seem to be some dichotomies, even if that’s limited to “exists/doesn’t exist”.

The same sort of thing seems to me to be happening here.  The traditionalists are taking a perfectly reasonable view that all exercises of our creativity and intellect are good as long as they don’t have a bad impact on ourselves or on the world.  Even if Nietzsche is skeptical of the good/evil dichotomy, he surely would have to accept that playing a D&D game that turned someone brutal when they otherwise wouldn’t have been is probably not something we want to see.  It’s just not credible to think that there could never be any consequences to playing the game that we’d want to avoid, even if we might disagree over what counts as a negative consequence.  So taking the view that anything goes seems incorrect on the face of it.  But if Nietzsche doesn’t espouse that view, then he’s not saying anything interesting.  He’s quibbling over what counts as a negative consequence, but accepting the base traditionalist idea that these creative expressions are only acceptable if they don’t cause negative consequences.  So either he’s wrong or uninteresting … and philosophically speaking, both are about equally bad.

Personally, I will play evil characters in video games and often do some brutal things in them if I’m going whole-hog on being evil.  If I thought that that might make me an evil person in real life, I would stop doing that.  But it doesn’t seem like it does.  Thus, I side with the traditionalists:  if it isn’t doing harm, allowing as full a range of human creativity as possible is indeed a good.  We don’t need to throw things open completely, as the postmodernists would have it, to achieve that.

“Who Trusts the Watchmen?”

December 9, 2022

The next essay in “Supervillains and Philosophy” is “Who Trusts the Watchmen?” by Rafaela Hillerbrand and Anders Sandberg.  This essay examines the idea of whose moral decisions we can trust, based on the three primary characters in “Watchmen” and their reactions to the plan of the “villain” of the piece to kill off a large number of people to highlight the horrors of nuclear war and prevent Armageddon.  The three primaries are:  Dr. Manhattan, who represents some form of moral nihilism or error theory, as he seems to be in some way above or disconnected from morality and does nothing to prevent the event even though he is aware of it and could do so; Rorschach, who represents deontological morality, refuses to go along with the plan, and insists that he will let the world know what happened; and Ozymandias, who enacted the plan and obviously represents Utilitarianism, trading the deaths of a large number of people to prevent the deaths of an even larger number of people.  And so the question in the essay is basically:  which of these would you trust to make such weighty moral decisions?

The problem with Dr. Manhattan is that while he arguably knows all the physical and potentially even moral facts that we should use to determine what is or isn’t moral, at a minimum his value system is not close enough to ours to be trusted to make decisions in our best interest.  That itself may not be an issue for morality, since morality at least shouldn’t simply be about satisfying our preferences, but the issue is that it seems like he simply doesn’t care enough about anything:  not about the people, as per Ozymandias, and not about morality, as per Rorschach.  He is either unconcerned with morality or considers himself above it, and so cannot be counted on to determine what is moral.  He has the knowledge and he has the objectivity, but he doesn’t have any interest in being moral or doing anything moral.  He, then, is an awful lot like how I considered psychopaths to be in my essay on them:  he doesn’t care enough about morality to be anything other than amoral.

The problem given in the essay for Rorschach is that while he wants to stand on principles, his principles are ones that are personal to him.  What he thinks is acceptable is what is acceptable, whether or not others would agree.  Arguably, he has the interest in being moral, but doesn’t have the objectivity and doesn’t have the knowledge.  In fact, one can further argue that the reason he lacks the objectivity is because he lacks the knowledge.  If he knew what was or wasn’t moral, then he could conform to those moral standards, but since he doesn’t know, he has to make it up as he goes along, which means that what he enforces aligns with his own personal moral beliefs.

Ozymandias, at least in principle, as the World’s Smartest Man, has the knowledge to determine what is and isn’t moral, or at least knows what the physical facts are that need to be considered in determining what to do.  As a strict Utilitarian, he also at least arguably has the objectivity.  But someone who is willing to casually kill that many people might be considered someone who is too heartless to be properly moral, and his moral stances require him to know a lot of things that he may not really know, like consequences in the future.  Unlike Rorschach, who can afford to be wrong about what consequences will follow from what he does, Ozymandias can’t be, since it’s only those consequences that can ultimately justify the actions he takes now.

The authors, at the end, suggest that perhaps the big key, then, is for them to consider themselves fallible and so recognize that, as they are limited, their interventions should be limited as well.  The problem is that we are not only going to be concerned with their actions, but also with them as moral exemplars, so that we can follow them in making proper moral decisions.  It’s okay if they are flawed, but we want them to know what is properly moral and immoral and at least be able to advise us on that score, and these three examples are too flawed on that score to be trusted.  Dr. Manhattan might have the knowledge of morality if he cared enough to consider it, but he usually doesn’t.  Rorschach doesn’t know what it really means to be moral, and so is making it up as he goes along.  Ozymandias might be the closest, but only if strict Utilitarianism is the right moral system … and if that’s the case we wouldn’t need him as an exemplar (and I don’t think Utilitarianism is correct anyway).

The Watchmen we trust are not ones that make no mistakes in their actions, but ones that make no mistakes in their moral assessments.  We don’t have such characters in “Watchmen”, which is why we don’t trust them.

“Could Batman Have Been the Joker?”

December 2, 2022

The next essay in “Batman and Philosophy” is “Could Batman Have Been the Joker?” by Sam Cowling and Chris Ragg.  Now, in a non-philosophical context, this question would be an introduction for a “What if?” scenario where through different circumstances Bruce Wayne takes on the criminal persona of the Joker instead of becoming Batman or forms some kind of split personality where he’s both Batman and his own nemesis the Joker.  But while Cowling and Ragg do talk a bit about that, what they want to do is focus on the idea of modal logic and possible worlds and whether what is referred to by “Batman” could ever also be referred to as “the Joker”.

The big argument is indeed about referents.  The idea is that “Batman” is a name, and a name signifies and points out a specific individual in the world, and it can’t be used to point to anything else.  This differs from descriptions, which cannot be used to uniquely identify an individual, since they can change between possible worlds.  So while “Batman” may not be a crime fighter — or even exist — in a possible world, the name would still point out the same individual, and if “the Joker” is also a name that points to a different individual in a possible world then “Batman” can never, in fact, be “the Joker”, even if there exists a possible world where Bruce Wayne dresses up in a clown suit and commits crimes using laughing gas while the Clown Prince of Crime puts on a bat suit and opposes his criminal schemes.  We would have changed the properties of those individuals, but ultimately we would have to say that they remain “Batman” and “the Joker”, which would seem odd if such possible worlds actually existed, because then whether a person is “the Joker” or “Batman” would, it seems, depend on which world one started from.  Start from the normal DC comics world, and it’s “Batman” and “the Joker”.  Start from that inverted world, and it’s “the Joker” and “Batman”.  That’s not a result that seems to make sense to us.

I think the way out of this sort of mess is revealed by an issue they raise, that of the various “Robins”.  If “Batman” picks out a unique individual, then “Robin” does as well.  But there have been multiple “Robins” with different properties, all of whom are called “Robin”.  This leads to the recognition that there have been multiple “Batmans” as well.  If the name picks out a unique individual in all possible worlds, then which individual does it pick out?  And if it has to pick out all of them, then it doesn’t pick out a unique individual, and we risk having it be just another description or property, leaving us no way to pick out and follow an individual across multiple possible worlds.

However, the issue here is one that I think Jonathan MS Pearce’s nominalism hits as well:  mistaking the symbol for the concept itself.  We know that there are people who share our given name.  But in no sense are we ever confused over which of us is the “real” one or which is being referred to at any given time.  A name is not a unique identifier because of the sequence of letters that make it up, but because of what it is connected to, and so what it refers to.  That sequence of letters gets its meaning by being attached to me, and that same sequence of letters, in other contexts, gets its meaning by being attached to that other person.  It’s the combination of the name itself and what it refers to that produces the unique identity that we can follow through all possible worlds.  That also applies, then, to the “Robins” and even to the “Batmans” and the “Jokers”:  the unique individual is what that name is being used to refer to, and is not the name itself.  After all, a rose by any other name would still smell as sweet, so it cannot be the name itself that creates that uniqueness.  Instead, the name is a symbol that we use to label the reference that we have to that specific individual.

That’s why a name picks out an individual in all possible worlds no matter how we shuffle the properties of that individual.  It’s not because that’s what a name does, but because the name is the label for that specific reference to that individual, and that reference stays constant across all possible worlds.  The issues listed above only happen when you treat the name as the crucial element rather than the reference that the name is merely the label or symbol for.  Once we understand this, we can see that the reference to Batman could never be a reference to the Joker, since they are distinct individuals in at least one possible world and so get their own references, and we can see that the reference to Bruce Wayne and the reference to Batman are to the same individual, since they are the same individual in at least one possible world.  If we find a possible world where Bruce Wayne didn’t become Batman but someone else did, we can see that the original reference is still to the same individual and it’s only the names and symbols that have changed.  In short, the new individual who becomes the crime fighter and calls himself Batman is the same individual that we might reference in the DC comics world, and Bruce Wayne is also the individual that we reference by that name in the DC comics world; it’s only the properties around becoming the crime fighter that differ.

Once we understand this, we can understand how people and characters can change names between possible worlds without becoming completely different people, while maintaining that the person we, at least, know by that name is the individual that we can follow through all possible worlds, and so one that we can look for and posit changes in without having to wonder if they are indeed the same person.  The string of letters that we call them by is another property of the individual that can change between worlds, but the reference we have to them remains constant and is what we actually use to identify them.  Like Pearce, Cowling and Ragg confuse the symbol in the English language for the concept or reference itself, and then tie themselves into knots because they try to apply the properties of the symbol to the underlying element itself, which causes ridiculous results that they need to untangle.  Separating the two resolves these issues and allows us to use these things in the way we originally wanted to use them.

“How Marriage Changed Sherlock Holmes”

November 25, 2022

The next essay in “Sherlock Holmes and Philosophy” is actually “The Curious Case of the Controversial Canon” by Ivan Wolfe, but I’m not going to talk about that one, since all it really does is talk about what importance an official canon has, point out that the official canon for Sherlock Holmes is mostly what Arthur Conan Doyle wrote about Holmes, note that the French version of the complete works adds a couple of stories, and note that even some things that Doyle wrote aren’t included in the canon.  I don’t really have anything to say about that, so I’m going to skip it to talk about “How Marriage Changed Sherlock Holmes” by Amy Kind.  Which, rather ironically, actually has a relation to canon, since it talks about Sherlock Holmes’ marriage to Mary Russell, which never occurred in the canon and so was only ever captured in a series of books by Laurie R. King, a series that is meant to focus on Russell herself and not on Holmes.  The importance of canon is to establish a baseline of Holmes and his world and characters so that fans can have a more or less consistent idea of the character and world to discuss, and so given that the work where he marries Russell is non-canon, all we could glean from discussing how that marriage changed him is the view of Holmes that King herself imagined.  We might end up arguing that it is consistent with the character, but we might just as easily argue that it isn’t, and even that he would never have married someone like Russell in the first place.  Thus, the dangers of relying on non-canon works.

While I haven’t read the works — I hadn’t even heard about the series until this essay — Kind’s description of King’s heroine makes me wonder if her first name of “Mary” is actually incredibly apropos.  Mary catches Holmes’ eye at a young age, is explicitly called his equal in intelligence and observational skills, and meets him in a “meet cute” type of event where she doesn’t manage to observe him well enough to avoid almost running into him, but then immediately impresses him with her observational skills.  She also manages to catch Holmes, who was a confirmed bachelor in Doyle’s works and had only ever been impressed by one woman, Irene Adler.  Thus, the works do come across from this description as being author insert fan fiction, and so it isn’t at all clear how examining this “marriage” would help us see how marriage changed Holmes himself.  It’d always be too easy to argue that any such changes were out of character for Holmes, given that the marriage itself might be out of character for him.

So I’m not going to bother with that.  Instead, I’m going to focus on Kind’s discussion of love that takes as its inspiration the speech of Aristophanes in Plato’s Symposium.  The story that he tells is one where we were originally one being, but have been cleaved into two, and the purpose of love — and presumably marriage — is to reunite those two selves into a whole once more.  Now, when I first read that, my very first thought was that the story would imply that we should find not the person who is most like ourselves, but instead the person who has those parts of us that we lack, like the two Captain Kirks we find in “The Enemy Within”, with the two halves split from each other and quite different from each other, but unable to survive on their own.  Thus, it would seem like perhaps the best marital companion for Holmes would be someone like Watson, who has the qualities he lacks and could thus help Holmes fill in the gaps in his personality, at least.

Kind is explicit, however, that Holmes could never have married someone like Watson, because they were never really equals in their relationship and so could never have been partners, which is required for the sort of love that Aristophanes talks about.  Watson is nowhere near Holmes’ intellectual equal, whereas Russell is, and so she can be a partner to him in a way that Watson couldn’t.  While that idea of love does insist that the two married partners retain their own identity — Russell, for example, maintains her study of theology despite the fact that Holmes has a strong distaste for it, at least in part to establish that as something she has for herself — the idea here is that Russell is a good match for Holmes because she is quite a bit like Holmes, and a match that was more like Watson wouldn’t be, because that match would be more complementary to Holmes instead of being like him.

It seems to me that both views have some merit.  In forming any kind of partnership, the best ones are ones where the two partners are indeed more complementary.  They both bring different things to the table and are masters of at least two, if not more, of the different spheres that people encounter in the world.  If the partners were too much alike, then they’d have the same weaknesses and wouldn’t be able to help each other overcome their struggles in the world.  We saw this in the idea of the masculine/feminine spheres that were traditionally covered by the male/female marital tradition, and we also see it in the idea that “opposites attract”.  It does seem like we might, in some way, be attracted to people who provide for and are more comfortable in the areas that we ourselves aren’t that good at, who can negotiate, and can help us negotiate, those areas that we would like to be in, at least at times, but aren’t really capable of moving in.

On the other hand, “opposites attract” rarely seems to extend to true opposites.  We really do seem to want to have things, and important things, in common with the people we are attracted to.  If we didn’t have any of the same interests, move in any of the same circles, or have any of the same abilities, we wouldn’t be attracted to them at all, perhaps not even as friends.  In the case of Holmes, it’s a good point that someone whose intellect lagged his too much wouldn’t be of interest to him.  He might be able to survive someone who was more supportive of his work and took care of his pragmatic needs and managed his emotions and boredom appropriately, but it does seem more credible that if he was ever to fall in love it would be with someone like Russell or Adler whose intellect matched and could challenge his own.  Perhaps she wouldn’t have to be a consulting detective, but her having some knowledge of and interest in the facets that make that up would have to be a boon.  They’d have to have something in common.

But, perhaps harkening back to that comment about identity, we have to concede that the person would certainly have to have some interests in common with Holmes, but would have to have her own interests as well.  No one wants to be married to someone who is exactly like themselves.  Which leads us away from complementary partners or identical partners to the idea of compatible partners, which would argue that the person we are looking for is like us in the important ways but is different enough from us to also work as a complementary partner.  They share our interests so that the two of us can share those activities and grow closer through them, but they have enough of their own interests and, importantly, don’t share all of ours, so that we can each go off and do our own thing at times, retain our own identity, and have something that we maintain as ours and ours alone as opposed to something the two of us share.

Is Russell’s love of theology enough to make her different enough from Holmes to work as his ideal mate, given their similarities?  I can’t say.  I can’t even say if this analysis of love is correct.  But this is a way for us to be split as per Aristophanes:  in some cases, we possess two halves of the same thing, and in some cases we each possess things that the other lacks.  Combining those things is what, then, ultimately reunites us as a complete whole and thus allows us to find our “soul mate”.

“The Bloody Connection Between Vampires and Vegetarians”

November 11, 2022

The next essay in “Zombies, Vampires and Philosophy” is “The Bloody Connection Between Vampires and Vegetarians” by Wayne Yuen.  In it, he argues that it would not be moral for vampires to kill humans to feed on their blood, and then uses that argument to conclude that it is also wrong for humans to raise and eat animals for food.  I don’t think his argument really works, but in order to get to that argument the first thing he needs to establish is that vampires are indeed moral agents and so can be held morally responsible for their actions.  If they were simply animals or were driven by a strong compulsion to drink human blood, then he’d be unable to establish that vampires would be morally wrong to drink human blood, and so any argument he would make for vegetarianism would have to stand on its own without the support of the analogy from vampires.  In doing so, he has to talk about the nature of morality and also free will, which is why there’s a fair bit more to talk about in this “Philosophy and Pop Culture” post than there normally is.

So he starts by arguing that in order for us to morally judge vampires, we are going to have to establish that vampires are moral agents and have moral responsibility.  He first examines whether they are rational enough to be moral agents, which means, to him, that they can evaluate what the best outcome would be among a host of possibilities and choose on that basis.  The issue is that this sort of goal-directed reasoning is not sufficient to make someone a moral agent.  As I noted in my essay on psychopaths, while they may not make the right choices they certainly seem capable of some sort of goal-based reasoning, but they have a notable deficit in being able to understand what makes a situation a moral one or not, since they fail at the moral/conventional distinction.  Yuen tries to establish that vampires can judge what is and isn’t moral with a quote from one talking about how evil people taste better, but this doesn’t mean that they are able to judge or comprehend what is or isn’t moral; it might only reflect that they have learned what humans consider moral or immoral and apply that classification to those people.  Since the vampire who says that — Lestat from Interview with the Vampire — is not exactly a good being himself, it’s far too quick to conclude from that statement that he understands what it really means for someone to be evil.  Even if he does, what Yuen would also need to establish is the one thing that we do think that vampires lack when it comes to morality:  the capacity to care about what is or isn’t moral.  Again, psychopaths can be seen as inherently amoral because they are incapable of being motivated by morality and moral considerations, and vampires in general are seen as lacking that as well.  Even if they can judge what is or isn’t moral, they in general are incapable of caring about what is moral or immoral.  In Yuen’s example, Lestat’s judgement about evil people is shallow and is unconcerned with their moral status.  He neither condemns nor praises their immorality, and seems to only note it as an interesting aside or irrelevancy that has an impact on his aesthetic preferences.

So Yuen hasn’t really established that vampires are moral agents, because he hasn’t established that they are capable of properly understanding or being motivated by moral considerations.  The next issue he tries to address is whether they have free will and so can make proper decisions.  The big problem here is that while he does seem to get that free will is the ability for us to act on our own choices without being forced to by something outside ourselves, the example he gives is one of coercion:  if someone threatens to kill our family unless we do something, Yuen argues that in that case we no longer have the free will to not do that thing.  This is, sadly, pretty common in discussions of free will, but it is also a philosophical pet peeve of mine.  I would agree with Yuen that in such a case we wouldn’t have moral responsibility for taking that action, but I would argue that it isn’t because we would be forced to do it by something outside of ourselves.

For me, the argument follows on from general Stoic teachings, which argue that if someone threatens your life — or that of your family — in order to force you to do something, what the proper Stoic is morally obligated to do is refuse to take that action and let them kill you or your family.  The reason is that a person is not morally responsible for what other people do, but is only morally responsible for what they themselves do.  So if someone faces such a threat and gives in, then they are still morally responsible for the action they take, and so if the action is morally wrong then they did something morally wrong.  However, if they refuse to act immorally in the face of such a threat and the other person then kills them or their family, then they are not responsible for what that person did and so have not done anything morally wrong.  They are not, therefore, morally responsible for the deaths of their family if they refuse to take that immoral action.  All the moral responsibility rests with the person who committed the actual deed themselves.

And outside of what might be considered esoteric philosophical stances, we actually do understand that someone can indeed reasonably and morally choose to act against such strong coercion.  In the Shakespeare play “Measure for Measure” (which I just read), a main character is told by the temporary ruler of the city that either she has sex with him or her brother will be executed.  She is adamant that she is forced by morality to choose to let her brother die, and while some people try to encourage her to go through with it to save her brother, they understand her position that doing so would be immoral.  The conflict is over practicality vs morality, not over her warped sense of honour vs what is actually moral.  Thus, we can understand that it is indeed possible to have an external force apply strong coercive pressure and yet not only be able to choose against that pressure, but in fact to be moral — and the most moral — in doing so.

And yet, we also have a strong intuitive understanding that if someone had, in fact, given in to that pressure they wouldn’t be held morally responsible for doing so.  We would not call such an action immoral, any more than we generally consider someone who steals food to feed their starving children to have done something immoral.  These intuitions, then, are what I think drive the arguments claiming that we cannot be properly responsible for such actions and so can’t freely choose otherwise.  I have opined in the past that we don’t necessarily consider their actions moral, but instead consider them understandable.  We may consider them taking that action to be them still doing “the wrong thing”, but we can understand why, faced with such a choice, they took that action.  This, then, suggests that perhaps the reason we don’t want to consider them morally responsible for that action is because it’s just too much to ask of them to not take that action.  Therefore, it’s not that they couldn’t take that action, but that it’s an action that morality cannot reasonably demand they take, and so it ends up being outside of the bounds of morality.  We can see this with Utilitarianism, where it constantly runs into problems with thought experiments such as the one where you can save either your child or a scientist with the cure for cancer from drowning, where utility is clearly on the side of saving the scientist but we all intuitively think that we couldn’t reasonably demand that you sacrifice your child to utility.  There are lots of attempts to patch it up and bring in rules for that and recalculate utility and so on and so forth, but the easier answer might well be that in such strong cases we could not ask someone to actually make that sacrifice.  It seems reasonable to argue that a proper moral system cannot force someone to take an action that would break them, and such sacrifices might well break the person making the decision.  This, then, might be captured best by Kant’s maxim that we must always treat moral agents as ends in themselves and not merely as a means to an end.  That includes as a means to a claim of morality or the demands of a moral system, and such sacrifices treat people as a means to the end of maintaining a moral system.

So it does seem like coercion doesn’t mean that we aren’t morally responsible for our actions, but if the coercion is strong enough we might end up with the decision being taken out of the realm of morality entirely.  If vampires are capable of understanding morality and their desire for blood is not a true compulsion, then it does seem like we could assign moral blame to them for killing humans for their blood, if that is immoral.  And both Yuen and our intuitions claim that vampires killing humans for their blood is morally wrong.  Yuen then moves on to argue that while animals are not moral agents, in all relevant respects they are the same as us wrt why we think it is wrong for vampires to kill us for food, and so we ought not kill them for food either.  And yet, there is a critical difference:  they are not moral agents.  It seems reasonable to argue that humans are sentient in a way that animals are not, and so even though animals do suffer it is that extra sentience that makes it so that killing humans for food is immoral.  I would argue that it’s not just a matter of rationality, though, but a matter of morality.  To return to Kant, we are moral agents and so must always be treated as ends in ourselves and not as means to an end.  Animals are not moral agents and so need not be granted that respect.

This doesn’t mean that we can treat them badly.  Yuen makes a mistake in relying on the fact that the animals we raise for food are treated poorly in order to make the suffering point, because one can argue that they shouldn’t be made to suffer in that way without accepting that we are not allowed to raise them for food.  For example, the above analysis lends itself to an argument that I’ve heard before — but forget the source of — that the reason it is immoral for us to mistreat animals is not because of a moral responsibility we owe to them, but instead because of one that we owe ourselves.  Someone who could mistreat animals and deliberately try to cause them to suffer, or be impassive towards their unnecessary suffering, is not a good person.  They would be missing the virtue of Compassion, or at least it would be deficient in them.  Thus, we can tie the Virtue Theory of the Stoics to the deontological theory of Kant and point out that while we have no moral obligations to those things that are not moral agents, a virtuous person has traits that will create such obligations internally from their own morally virtuous character.  So we would not be a properly moral person if we were apathetic to the unnecessary suffering of animals, but raising and killing them for food need not involve that.

Yuen could, of course, argue from this that a moral obligation to not eat animals follows in the same way.  However, that … is another argument for another time.  So I don’t think that his argument works, mostly because the path we need to take to negotiate the moral morass and get to where Yuen wants to end up is far more wandering than the one he lays out.

“The Razor’s Edge: Galactica, Pegasus and Lakoff”

October 21, 2022

The next essay in “Battlestar Galactica and Philosophy” is “The Razor’s Edge:  Galactica, Pegasus and Lakoff” by Sara Livingston.  In it, Livingston tries to analyze the two main commanders and “father figures” of the revamped Battlestar Galactica series using George Lakoff’s idealized parenting strategies of the Strict Father and the Nurturant Parent, through Kendra Shaw’s experiences first with the Strict Father figure of Helena Cain and then with Apollo, who was taught by the Nurturant Parent figure of William Adama.

Immediately, we can see some issues with doing this.  The first is that Cain is far more Psychopathic Parent than Strict Father, given how she acts.  Livingston might be able to make a case for it by ignoring the worst examples of Cain’s behaviour and instead only focusing on the cases where she aims for strict military discipline — such as the interesting comparison of how Cain and her crew treat the first face-to-face meeting between the crews versus how the Galactica crew treats it — but she also references the scene from Razor where Cain shoots her XO for disobedience and tortures the human-form Cylon that was her former lover.  That definitely exceeds the scope of a Strict Father and doesn’t follow from it.  But we can also note that she needs to cherrypick Adama’s actions to set him up as the Nurturant Parent.  Yes, he gives the crew more leeway at times and seems to forgive them their faults more than a Strict Father would, but often this comes across as him playing favourites more than being a nurturing parent to his crew as a whole.  He certainly doesn’t seem like he was a Nurturant Parent to Zac, which is why he and Apollo are on the outs at the beginning of the series.  And while Cain executes her XO, Adama executes Gaeta for mutiny as well, and while you can certainly see that as justified, Gaeta had more reason to oppose Adama than others had, and other acts of mutiny went completely unpunished.  So Cain is in general more cruel than strict, and Adama is more strict than nurturing much of the time.

Now, in general this reflects how things work in real life, as you rarely get a parent who is all nurturing or all strict, and it’s probably not a good idea to adopt either of those as an overall parenting strategy.  But we can ask what it really means to be a Strict Father or Nurturant Parent.  Livingston roughly presents it as the Strict Father setting out rules and punishing those who step out of line while the Nurturant Parent lets their charges make their own mistakes, but obviously neither of these works in reality.  The Strict Father definitely wants their charges to learn what does and doesn’t work for them, and the Nurturant Parent can’t let their charges make all of their own mistakes, since some of those are fatal.  So we can argue that the Strict Father relies on making actions have consequences when their charges don’t understand or aren’t capable of understanding those consequences, feeling that when they get older they will be able to work out why the rules were right in the first place, while the Nurturant Parent relies less on strict rules and imposed consequences because they focus on making sure that their charges understand what the consequences are.  Both ultimately want to achieve the same end, which is people who understand the consequences of their actions and make the right decisions.  The Strict Father ingrains the behaviour first, hoping that their charges will come to understand the reasons later, while the Nurturant Parent pushes the understanding first, hoping that their charges won’t hit cases where they can’t understand the consequences before they need that understanding.

Thus, the Strict Father approach would be right for cases where the charges can’t understand the consequences or don’t have the experience or information to understand them, while the Nurturant Parent approach works better where the charges can understand the consequences if they are explained to them and will chafe at rules.  So the Strict Father approach works better for young children while the Nurturant Parent approach would work better for older children and teens, as they wouldn’t feel talked down to and would feel better about taking a more active and adult role in their own decisions as opposed to simply following imposed rules.  And I think we’ve seen that, as many parents have moved towards the Nurturant Parent approach with younger and younger children and have discovered that they understand more than we thought … but still have issues and cases where they really need some structured rules.  So perhaps these aren’t competing strategies but instead are complementary strategies that are to be used when appropriate.  As such, Adama might then be capable of being a better parent than he was, and than anyone expected him to be.

“Fighting the Good Fight: Military Ethics and the Kree-Skrull War”

October 14, 2022

The next essay in “The Avengers and Philosophy” is “Fighting the Good Fight:  Military Ethics and the Kree-Skrull War” by Christopher Robichaud.  The basic idea here is to examine what makes for a “good” or “justified” war, and the big example from “The Avengers” is Ronan the Accuser’s plan to remove Earth as a future threat by de-evolving humanity so that they could never reach a high enough technological level to steal the Kree’s technology and turn it against them.  Robichaud’s definition of a justified war is basically one where there is no other choice but to enter into the war and the war is prosecuted only to the degree necessary and no further.  So a war of self-defense, for him, would always be justified as long as the defending nation only fought as long as necessary to remove the threat the other nation presented to them.  We could also imagine that a situation like the Holocaust could justify a war if there was no other way to stop that heinous situation and the war was, again, only fought as long as necessary to stop that sort of genocide.  But then Robichaud raises the question of a preventative war.  Could a nation be justified in starting a war to prevent a war from starting in the future?

At first glance, this seems somewhat ridiculous, as starting a war could never actually prevent a war from starting, since by definition it would start a war.  So you could justify a preventative strike, perhaps, but not a preventative war, since that seems like an oxymoron.  Sure, a preventative strike would always be an act of war, but if the other side doesn’t declare war on the first nation then it really doesn’t look like there’d be any war at all, and so it wouldn’t be a preventative war.  Thus, it looks like we might have to consider Ronan’s plan to be more of a preventative strike than a preventative war, especially given that he was acting outside of the political and military structure of his nation at the time, and if he had succeeded humanity would have been in no position to declare war on him.  And it looks like we could use Robichaud’s definition above to determine when a preventative strike is justified:  when there is no other way to avoid the war and when the attack is the minimum necessary to avoid a war starting.  So if, for example, two sides were evenly balanced but one side was going to acquire a new superweapon or a new cache of weapons that would encourage them to start a war, a preventative strike eliminating those weapons and those weapons only might well count as a justified preventative strike, as long as it does indeed restore the balance that means that neither side is willing to declare a war that it cannot win.

Could we then justify Ronan’s plan as a preventative strike?  Probably not.  For one thing, it does seem like he could prevent a far-in-the-future war with humans by other means, including diplomacy.  For another, his only justification for a war starting is that he believes that the humans will do what the Kree — his own race — did and so are a threat to the Kree in that way, but he has no reason at this time to think that this is what will happen.  So it isn’t the case that there is no other way to avoid the war, and there are certainly other options than devolving all of humanity to avoid that war anyway.  So Ronan’s plan is not a justified preventative strike.

But is a preventative war truly an oxymoron?  Or, at least, are there cases where we could justify one?  While it would still be a war and so wouldn’t count as a preventative war in and of itself, it seems like we could have a justified premature war, where one nation knows that another nation is preparing for a war and will almost certainly start one once their preparations are complete, and so the first nation starts the war early so that it can win the war, bring about a peace, and eliminate the other nation’s capacity to start that war.  The most obvious case of this is one where a neighbouring nation has a huge advantage in productivity and, while they are starting from almost no military hardware, they will be able to quickly catch up and, after that, will continue to outproduce their neighbour and so would be able to win any war they start after that point.  Of course, for this to work it would have to be clear that they will attack their neighbour as soon as they are strong enough, but if that was clear — like it was, for example, with Nazi Germany — then the nation that could win now but will lose later might well be justified in starting the war early, especially if its goal is not to dominate but to, say, change the government that would start the war, or to take territory that would at least even the odds productivity-wise or else create a boundary that would make invasion difficult if not impossible even with the productivity advantage.  Or, if none of this is possible long-term, to buy time to even up the odds in terms of productivity.

So a preventative strike seems like something that can be justified.  A preventative war is an oxymoron.  But a premature war seems like something that could exist and could be justified.  Ronan’s actions, of course, fit none of these categories.

“Transhumanism, Or, is it Right to Make a Spider-Man”

September 23, 2022

The next essay in “Spider-man and Philosophy” is “Transhumanism, Or is it Right to Make a Spider-Man” by Ron Novy.  It basically tries to defend the idea of transhumanism from criticisms, mostly those of Fukuyama.  Novy starts by considering technological enhancements like Aunt May’s glasses, her newspaper and her coffee as things that are similar to what transhumanism wants to do with technology to enhance humans, as a way to get us to consider what transhumanists want to do as benign and something that we will eventually see as normal.  His defenses of transhumanism against criticisms definitely tend to follow that line, as he opposes the idea that transhumanism will create inequalities, with the wealthy and wealthier nations adopting the changes while poorer nations can’t, by pointing out that we already have such cases now.  This is a fairly weak defense, since there may be special conditions with transhumanism that will make these things worse, or that will cause far more problems than the simple things we have now.  But, in general, to counter Novy what we need to show is that the simple, “normal” things that Novy appeals to differ in an important way from the sorts of things that transhumanism would be espousing.

As it turns out, we can, because there’s a crucial difference in the approaches the two take, as Novy himself notes.  With things like glasses, the intent is to restore someone to a “normal” state, to overcome a specific deficiency that those specific people have wrt everyone else, and so bring them up to a base state and onto relatively equal ground with everyone else.  The others are, for the most part, technologies invented to change our environment to make things easier for humans as a whole.  Sure, it might not be easy to fit newspapers and coffee into the model of altering our environment, but if we look at them as part of a personal environment we can see that they enable a person to get access to more information than they could on their own and to recover from the fatigue of, perhaps, not sleeping all that well the night before.  In all cases, however, the intent is a holistic one, either bringing someone up to the “normal” level or else providing options that most people if not everyone can avail themselves of as necessary.  Because of this, there’s no real consideration of “superiority” involved.  Someone with glasses is not better than someone without them, and someone who doesn’t need coffee in the morning isn’t inferior to someone who does.

Transhumanism, as Novy himself notes, is not like that.  It is a philosophy built around creating “superior” humans, making humans themselves better in some way.  So we can immediately see an issue with transhumanism:  in order to create “superior” human beings, we need to first define what it would mean to make human beings “superior” in the first place.  With the other cases, we either have a human baseline to appeal to or can let the environment specify what things we are trying to overcome.  With transhumanism, we can’t appeal to either of those, because we are trying to redefine the human baseline and the technology we are inventing is trying to enhance humans in general, not as a reaction to a specific environmental concern.  So how do we determine if a transhumanist alteration is really making humans “superior” or not?  If we could increase the calculating ability of humans ten-fold at the cost of emotionally stunting them, is that an improvement or a regression?  By what, or more importantly whose, standards would we judge whether we’ve succeeded in making “superior” humans?  Because we’re aiming at producing “superior” humans, we need to be able to define what that means, but at the same time we have lost all the reference points we could use to define our goal.

Even if we could define what it means to be “superior”, the issues around equality cannot be dismissed as easily as Novy attempts to, for the same reason.  For the other examples, as noted, we have clear goals:  bring some humans up to the baseline, or alter the environment in a way that makes it easier for humans to live and work in it.  Thus, while those benefits might be unequally distributed, in theory everyone can access them and we know the cases where someone might need to utilize them.  So if someone can’t get them we can see that they are being deprived of them, and those who don’t need them have no reason to grab them or hoard them for themselves.  It becomes a distribution problem, not a philosophical one.  However, if the enhancements are seen to make a person “superior” to others, then there is a reason for wealthy people and nations to hoard the enhancements for themselves to maintain their superiority.  A person with normal sight has no reason to deny glasses to someone who needs them, because they don’t need the glasses in the first place, and a person who has glasses that work well for them has no reason to object if someone else gets glasses that help with their sight.  But with transhumanism, neither of these is true.  Someone who doesn’t need the enhancements might still want to keep them from others to maintain their natural superiority, and someone who gets the enhancements might want to deny them to others to maintain their enhanced superiority.  As noted, we don’t see someone who doesn’t need glasses and someone who wears them as superior to each other, just different, but transhumanism’s explicit goal is to make some humans superior to others instead of just recognizing their differences.  Given that, those who can get the enhancements have reason to want to keep that superiority for themselves.

This, then, causes issues for society if transhumanism succeeds.  What happens to people who either can’t or won’t get those enhancements?  If transhumanism has succeeded in its stated goal, then those people would, by definition, be inferior to those who have the enhancements.  And if some people are clearly superior to others, then they would be preferred for, well, any role where those enhancements might matter.  Could it be the case that the people who can’t or won’t get the enhancements might find their dreams dashed because the “superior” people take away all their opportunities simply by existing?  Could they be reduced to low or menial labour because those are the only jobs that the “superior” people don’t want?

Novy could — and likely would — argue that we have that now with genetic superiority.  But that is not deliberate and doesn’t make someone superior by definition.  Yes, if I want to be a professional hockey player but others have genetic gifts that mean that they are qualified to do that and I’m not, that’s unfortunate, but that doesn’t make them superior human beings and, in fact, there may be many other things that I do better than they do.  They’re just better at hockey due to their genetic gifts.  But it’s not the case that they are better than me and can become professional hockey players because they were able to pay for some transhumanist advantage, and thus that if I want to achieve that goal I have to do that as well or do without.  Novy could argue that things like special schools and training can do that for someone who is more wealthy, but that’s not an inherent advantage and applies to far fewer cases than it would here.

Ultimately, I can accept these differences because they are differences due to fortune, not design.  They, arguably, “got luckier” than I did, but that’s all it is.  And there’s something noble in tallying up what fortune has given you in the family and genetic lottery and forging the best life you can given that.  Transhumanism takes that away by making it so that you can become better through technology aimed specifically at making yourself better and superior.  You don’t take what you have and do the best you can, but instead try to reshape yourself to this supposed “ideal”.  That takes away from the individual and stratifies things even more.

“Magneto, Mutation and Morality”

September 16, 2022

The next essay in “X-Men and Philosophy” is “Magneto, Mutation and Morality” by Richard Davis, which looks at morality, and in part at its link to evolution, by considering Magneto and his moral positions.  What I want to talk about here is whether two positions Davis claims Magneto holds are in fact ones that Magneto holds:  moral relativism and a desire to commit genocide against the human race.

While Davis talks about genocide first, I want to talk about moral relativism first, because understanding Magneto’s views on moral relativism will inform whether he is advocating for or planning the genocide of the human race.  Davis, relying on the movies, argues that Magneto doesn’t really argue morality with anyone, dismissing the arguments he hears in the Senate hearings as ones he’s heard before and refusing to debate any kind of morality with Senator Kelly once he’s kidnapped him.  But we have to note that the people he’d be refusing to argue with have one important trait in common:  they’re all humans.  More importantly, they’re all humans who are looking to oppress, enslave, and potentially murder mutants.  Given his established history with the Nazis, he likens his human opponents to the Nazis and from that argues that there is no point arguing morality with them, not because morality itself is pointless, but because, as he does say, they simply won’t listen and simply won’t care.  Debating morality with Senator Kelly is simply not going to work.  Kelly is unlikely to listen, and even if he did, as we see in later movies, others would simply pick up where Kelly left off.  We must note that he does seem willing to debate the morality of his approach versus Xavier’s with Xavier, because he knows that Xavier can understand and appreciate moral arguments and hopes that he might be able to convince Xavier of the moral rightness of his cause and recruit him to it, even as Xavier attempts to do the same to him.

So it doesn’t seem like Magneto thinks that morality and moral debates are meaningless, just that it’s pointless to stand in front of human oppressors and expect that they will be in any way swayed by such arguments.  His attitude, then, is completely in line with those who advocated for “Punch a Nazi” or various forms of cancelling, as they argued that the people they were opposing would never see reason and so needed to be opposed by any means possible, including force.  It’s certainly not because they think morality meaningless that they refuse to debate their opponents; they are supremely confident that they are morally right and that morality itself demands that they not engage in pointless moral debate instead of taking the necessary direct actions to stop them.  Magneto is the same:  the human oppressors must be stopped by any means necessary, and the only reason he doesn’t use moral debate as that means is because it would be pointless and ineffective.

This, then, links up with his views on committing genocide against the humans.  The quote that Davis uses against Magneto is the one where he says that mutants are the future, not humans.  But Magneto, in general, believes that nature and evolution will take care of that and that therefore eventually mutantkind will supplant humanity.  In general, he’d be perfectly willing to simply let nature take its course, but what he’s seen is that humanity is also aware that mutantkind will supplant them and is not going to go quietly, and right now humans have the numbers and the technology to possibly wipe out mutantkind in attempting to do so.  For the most part, Magneto’s moves against humanity are designed to forestall that threat.  If he could find less violent ways to keep mutants safe from humanity, he’d use them, but in general he can’t.  It’s only when Stryker enacts his plan to wipe out all mutants and leaves Magneto with the ability to do that to all humans that he takes it as a way to keep all mutants safe from humans.  Thus, in line with the above comment, Magneto does not have the extermination of humans as a goal, as he sees them as a problem that will go away on its own as nature takes its course, as long as mutants can keep them from enacting their goal of exterminating mutants.  However, he is willing to use the extermination of humans as a means to his goal, the goal of keeping mutants safe.  This doesn’t mean that we should consider his actions more moral — as exterminating any sentient species is morally reprehensible, especially if there might be other options — but it does mean that we must not consider Magneto to be someone who has a strong desire to wipe humanity out because he considers mutants the superior species.  He does consider mutants the superior species, but if the humans would let mutants achieve their destiny he’d have no real quarrel with humans.  Unfortunately, in the X-Men universe humans have no intention of letting that happen.

Ironically, Magneto is misinterpreting evolution when he believes that mutants are the future because they are a more evolved form of humanity.  Evolutionary pressures replace one species with another because the new species out-reproduces the previous one, and thus the advantage the new species has is one that leads to it having more offspring than the other.  Mutants don’t have any advantages that mean they’d reproduce better or faster than humans do.  In fact, given that some mutations involve not being able to have close contact with others or make mutants rather unattractive, it actually seems like their mutations would make them less likely to reproduce.  What mutant abilities tend to give is power, the ability to do amazing things that they can use to overpower humans.  Thus, the only way mutants would ever supplant humanity is if they became tyrants over them and dominated and even exterminated them.  The very thing that Magneto fears humans will do to mutants is the only way that mutants will ever achieve the destiny that Magneto believes they have.

Magneto is not a moral relativist, because he thinks that he’s morally in the right and that morality demands that he take the actions that he’s taking.  By the same token, he’s also not someone who wants to exterminate humanity because he sees humans as a lesser species, or as vermin or insects; he only ever feels the need to hasten what he sees as their natural extinction in order to stop them from doing the same to mutants.  Rather than being someone who denies morality, Magneto is one who feels himself bound by morality to do horrific things in its name … which makes him the worst sort of “moral” person.

Inhuman Nature, or What’s It Like to Be a Borg?

April 11, 2022

The next essay in “Star Trek and Philosophy” is “Inhuman Nature, or What’s It Like to Be a Borg?” by Kevin S. Decker.  While the title of the essay asks what it’s like to be a Borg, the essay itself never really asks the question.  Instead, it talks about whether our repugnance at the Borg is really justified, appealing to various forms of monism to hint that their striving for unity and perfection is philosophically justified as reflecting how reality is, and that collapsing distinctions and dichotomies, particularly the distinction between the natural and the artificial, might reveal that we are, or are becoming, as much Borg as they are.

The issue for me, though, is that none of this really captures why we find the Borg so disturbing.  Sure, they are artificial, but there are a number of artificial things that don’t particularly bother us, even in the Star Trek series.  Sure, some of the artificial intelligences that Decker references from the original series are less lifelike than Data, but those stories are built around them being disturbingly lifelike while missing something important about humanity, which causes them to act in ultimately horrific ways.  And yet there are also a number of more lifelike artificial intelligences — such as the ones in “What Are Little Girls Made Of?” — that don’t fit into the Borg model at all, nor do they really raise the issue we tend to have with clones, which is that they are copies of us and so force us to lose our individuality.  As we move forward to TNG, Data is clearly artificial and admired by us, and the ship’s computer is also artificial and mostly ignored.  Even artificial things that act like natural things don’t inherently bother us, either in TOS or TNG.  So there’s more to it than this distinction.

This is only highlighted when we talk about clones and cyborgs.  For the most part, we aren’t repulsed by cyborgs, especially if their artificial parts were added due to an accident.  After all, we weren’t bothered by Luke Skywalker’s artificial hand, and it can be argued that our fear of Darth Vader comes not from his artificial parts but from how inhuman he is in behaviour and in appearance.  As for clones, in general we are more concerned about how they risk taking away our own individuality — by being copies of us, raising the question of which one is real — than about their artificial nature.  After all, the Tanks in “Space Above and Beyond” do not particularly bother the audience — two of the main characters are Tanks — despite being artificially grown; instead, the conflict between them and others is over how they were created for a specific purpose that they then rejected, costing many lives.  Yes, one can have the debate over whether they deserve to have rights, but that’s less due to their nature than to their purpose, as we saw last week.  While there may be some philosophical questions and some repugnance on the basis of their nature, those aren’t enough to generate the level of fear and repugnance we tend to feel towards the Borg.

It seems to me that the problem is not that they are artificial or are a unified collective, but that they are a forced artificial collective.  Those that they assimilate have perfectly good limbs and eyes deliberately removed and replaced with artificial ones for no good cause other than to support the purpose the Borg are forcing them into.  They don’t convince people to join their collective, but instead force them to do so and suppress any attempts to break out of that collective.  The Borg, then, are not like the Omega particle that Decker references, which produces a perfect whole when all of its individual diverse elements are brought together; instead, they try to collapse all of the relevant differences to create a purportedly perfect unity.  The Omega particle is a refutation of their philosophy and a vindication of the Federation’s philosophy of Infinite Diversity in Infinite Combinations.  When all of the diverse components are brought together, the Federation argues, the combination of all of that diversity itself produces that perfection, and the Omega particle seems to bear that philosophy out.  This is also true for technology and the artificial: we don’t need to become artificial or become technology ourselves in order to become the best that we can be.  We can use technology in appropriate ways to enhance ourselves without subordinating ourselves to it.  After all, Geordi has a technological enhancement and that doesn’t disturb us, and the difference is made abundantly clear in that it is an enhancement, not something forced on him and not something that is defined as being better for all.  Decker tries to argue that technological tools and enhancements make those societies, and even our society, as artificial as the Borg’s, but our societies are built on adding technology to our natural states as an enhancement, not on replacing the natural with the artificial in the misguided idea that at least some of the artificial is superior to the natural.  Then again, the Borg seem to argue that perfection is in the fusion of natural and artificial, so this may not apply to them, either.

So it seems to me that the issue with the Borg is not that they are artificial or unified, but that their unity and use of technology is forced.  We will be assimilated whether we want to be or not, and once we are we will have perfectly good natural organs replaced with artificial ones for no real purpose, and we will become one with the collective with no individuality of our own, no matter how much we would want it and despite the fact that we could indeed have that individuality on our own.  The crudeness of the Borg’s cyborg parts makes assimilation even less desirable, but we might not mind becoming more sophisticated cyborgs if it gave us sufficient enhancements, or at least we would consider it and argue over it.  What we don’t want is to be forced into it, either directly or in order to be able to compete with those who have those enhancements (the main argument given in DS9 for banning genetic enhancements) … and the Borg are all about forcing us into it.  We wouldn’t mind using the artificial to enhance us and don’t mind our diversity building a unified society, but we really would mind having it imposed and forced upon us, which is why we find the Borg so repugnant.