Archive for the ‘Philosophy’ Category

Musings on the Starcraft II AI Test …

February 22, 2019

So, today I’m going to muse on the Alpha Star AI playing Starcraft II that I kinda talked about here. These are just musings on the topic and so might be a bit inaccurate, and I’m not doing any extra research and so am just relying on what Shamus and the commenters said about it (links to the posts are in the post I linked above).

As best I can understand it, the system was essentially a neural net that was trained using what were in my day called Genetic Algorithms (which may well have evolved a much "cooler" name since), where a number of agents played the game against each other and the best ones were kept to play against each other again, and so on and so forth. What I did way back in university — as part of my Honours Project I did a GA simulating a "Diplomacy Problem", with various countries getting advantages or disadvantages based on whether or not they agreed with the others — was create a set number of agents — 50? — rank them by score, and then drop the bottom 10, double the top 10 for the next run, and leave the rest. I hope they did something similar, but at any rate the overall idea is the same: run the agents, see which ones get the best score, keep those, keep or introduce some new agents so that they can learn new behaviour, rinse, repeat.
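Since I described that selection scheme only loosely, here's a minimal sketch of it in Python. It's purely illustrative: the fitness function and the agent representation are placeholder stand-ins (real fitness would come from the agents playing matches against each other), and the 50/10/10 numbers are the ones from my old project, not anything from Alpha Star.

```python
import random

POP_SIZE = 50


def fitness(agent):
    # Placeholder fitness: in the real setting this would be the agent's
    # score from playing games against the rest of the population.
    return sum(agent)


def mutate(agent):
    # Small random variation so that copied agents can drift apart.
    return [gene + random.gauss(0, 0.1) for gene in agent]


# Start with 50 random agents; each is just a list of numbers here.
population = [[random.random() for _ in range(8)] for _ in range(POP_SIZE)]

for generation in range(100):
    ranked = sorted(population, key=fitness, reverse=True)
    top, middle = ranked[:10], ranked[10:40]  # the bottom 10 are dropped
    # The top 10 appear twice in the next run, the middle 30 once,
    # and everyone gets a small mutation before playing again.
    population = [mutate(agent) for agent in top + top + middle]

best = max(population, key=fitness)
print("best fitness after 100 generations:", fitness(best))
```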

What this meant is that they needed to have agents that could play each other in order to generate the massive data sets that you need to train a neural net, which led them to limit the agents to playing as Protoss against players that are playing as Protoss. Right away, this is a little unimpressive, as humans can learn to play as pretty much any combination of the races a lot faster than the agents learned to play as Protoss against Protoss. This also led me to comment on the posts that there's a risk with trying to get it to learn to play as other races or against other races because of the nature of neural nets. The big advantage of neural nets is that you don't need to program any rules or semantics into them to get them to solve problems. There aren't really any rules or semantics in a neural net. Sure, there may be some in there somewhere, and it often acts like it has rules or semantics, but internally to the system there aren't any. The system learns by semi-randomly adding and removing nodes and connections and adjusting the weights of connections, but the system doing that, at least in a pure neural net (supposedly Deep Learning systems combine the semantics of inference engines and the flexibility of neural nets, but I haven't looked at them yet), doesn't have any idea what actual rules or decisions those things are involved in. Thus, a common problem with early neural nets was that when you decided to train one to do something different or learn anything new, there was always a risk that you'd break existing behaviour unless you also trained it on the old functionality at the same time, which is not how things seem to work in humans. You can limit that by restricting how much training can change the original net, but then it has a harder time learning anything new. Make it static and the machine can't learn, but leave it free to change and it will forget lots of things it used to know.
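To make that "forgetting" worry concrete, here's a small sketch using scikit-learn and synthetic data, with a tiny linear classifier standing in for the net; none of this is Alpha Star's actual setup, it's just an assumption-laden toy. The model learns an "old" task, then gets updated on a "new" task only, and its old-task accuracy erodes; mixing the old data back into the updates (training it on the old functionality at the same time) keeps it intact.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)


def make_task(center):
    # Two blobs of points; the label says which blob a point came from.
    center = np.asarray(center)
    X = np.vstack([rng.normal(center, 1.0, (500, 2)),
                   rng.normal(-center, 1.0, (500, 2))])
    y = np.array([1] * 500 + [0] * 500)
    return X, y


X_old, y_old = make_task([3.0, 3.0])    # the "old" task
X_new, y_new = make_task([3.0, -3.0])   # the "new" task

naive = SGDClassifier(random_state=0)
replay = SGDClassifier(random_state=0)
for model in (naive, replay):
    model.partial_fit(X_old, y_old, classes=[0, 1])  # both learn the old task first

X_mix = np.vstack([X_old, X_new])
y_mix = np.concatenate([y_old, y_new])
for _ in range(20):
    naive.partial_fit(X_new, y_new)    # new task only: old behaviour erodes
    replay.partial_fit(X_mix, y_mix)   # old data replayed alongside the new

print("old-task accuracy, trained on new task only:", naive.score(X_old, y_old))
print("old-task accuracy, old data replayed:", replay.score(X_old, y_old))
```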

What this means for our agents is that teaching them to play as another race or against another race might cause them to forget important details about how to play as a Protoss against a Protoss. I opined that what they'd probably do instead is build separate agents for each case and then have a front-end — which could be an inference engine since this is all deterministic — pick which agent to use. After all, while there are nine different combinations — the AI playing each race potentially against all other races — that's set at the beginning of the game, so picking the right agent is a pretty straightforward decision, and there's no real reason to try to teach the AI to find the ideal match-up given who they're playing against. So this seems to me to be the easier way to go than trying to build a generic agent that can play all combinations, and it's actually even less artificial than some of the other things that the agents were already committed to.
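A quick sketch of what I mean by that front-end, under the assumption (mine, not DeepMind's) that you keep one separately trained agent per matchup and just look the right one up at game start:

```python
RACES = ("protoss", "terran", "zerg")


class MatchupAgent:
    """Stand-in for a neural-net agent trained for one specific matchup."""

    def __init__(self, own_race, enemy_race):
        self.own_race = own_race
        self.enemy_race = enemy_race

    def act(self, observation):
        # A real agent would run its net here; this just labels the decision.
        return f"{self.own_race}-vs-{self.enemy_race} policy acting on {observation!r}"


# One agent per combination, trained separately, so teaching one matchup
# can never overwrite what another matchup's agent has learned.
AGENTS = {(own, enemy): MatchupAgent(own, enemy)
          for own in RACES for enemy in RACES}


def pick_agent(own_race, enemy_race):
    # The deterministic "front-end": the matchup is known at game start,
    # so this is a plain lookup, no learning required.
    return AGENTS[(own_race, enemy_race)]


agent = pick_agent("protoss", "protoss")
print(agent.act("initial game state"))
```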

So, after the AI beat all the players in the first round, how did the one player get a rematch and beat it rather handily? What he did was adopt a strategy that the AI was vulnerable to: harassment. The player waited until the AI had built a big army and sent it off towards his base, and then sent a few units in to attack the AI's base. The AI turned its army around to meet the threat, and he moved the units away. After they were chased off and/or destroyed, the AI started out again … and the player repeated the harassing attack. Rinse, repeat, and eventually win the game.

One issue with neural-net-type AIs is that since they learn through repetition over massive data sets, they don't really have the ability to learn or adapt on the fly. They don't really learn much from one event or run. Inference engines actually can learn on the fly because their actions are driven by the premises and logic of their systems, and so if one event doesn't turn out right they can immediately try to reassess their inferences. In this case, for example, the AI was probably bringing the army back because it anticipated that it was a mass invasion that it needed to repel. A neural net won't store this explicitly, but an inference engine will. So there's a chance that after a few repetitions an inference engine would conclude that this doesn't indicate a mass invasion and would learn to ignore it. Which, then, would leave it vulnerable to a "Cry Wolf" strategy: harass it until it learns to ignore the harassment, and then launch a full-scale attack to catch it napping. Which it could then learn to defend against as well, and so on and so forth.

People in the comments asked if you could just teach it to ignore the harassment, but the problem with neural nets is that you can't really teach them anything, at least by explaining it or adding it as an explicit rule. Inference engines can be tweaked that way because they encode explicit rules, but neural nets don't. To add a rule to the system you have to train it on data sets aimed at establishing that rule until it learns it. There are approaches that allow for feedback and training of that sort from what I've seen (mostly through short presentations at work), but either those will establish explicit rules which the system has to follow — even if wrong — or else they can be overridden by the training and so would need to be trained and retrained. In short, you can explain things to an inference engine, but not really to a neural net. You can only either let the net learn it itself or flat-out tell it the answer.

Neural nets, I think, excite people for two reasons. First, because they don't generally have explicit rules, they can come up with unique correct answers that we, ourselves, can't figure out, or that at least are extremely difficult for us to figure out. This makes them look more intelligent than we are for coming up with answers that we couldn't see. Inference engines and expert systems can come up with novel solutions as well, but all of those systems can explain how they came to that conclusion and so seem less "mysterious", in much the same way as when we see Sherlock Holmes explain his reasoning it seems less mysterious and, often, more of a "Why didn't we see that?". We aren't that impressed by computers having access to all the data and never forgetting or forgetting to consider any of it since that's kinda what they do, but we are impressed by what seem like leaps of intuition that we can't match. The other reason is that they loosely resemble the structure of the human brain — although anyone doing AI will tell you that they aren't really that close, but as they are designed to model that in at least some ways the point still stands — and so people impressed by neuroscience will think that it's closer to what we really do. Personally, I'm more interested in the reasoning aspects of intelligence and am more interested in finding the algorithm we use rather than emulating the hardware, so I'm less impressed by them. Still, they do manage to do the pattern-matching aspects of intelligence well and far better than more reasoning-based systems, which has led me to opine that the ideal AI has an inference engine front-end and a neural net back-end. The inference engine answers what it can and passes off anything else to the neural net, assesses the answer, adopts it if it seems to work, and retrains the net if it doesn't. Again, some people commented that this seems like what Deep Learning does.
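Here's a rough sketch of that division of labour, with made-up class names and a stub in place of the net, just to show the shape of the idea: the rule-based front-end answers what it can, hands anything it can't to the pattern-matching back-end, and queues up retraining data when an answer doesn't work out.

```python
class RuleEngine:
    """The inference-engine front-end: explicit, explainable rules."""

    def __init__(self, rules):
        self.rules = rules  # list of (condition, answer) pairs

    def answer(self, query):
        for condition, answer in self.rules:
            if condition(query):
                return answer
        return None  # "I don't know": defer to the back-end


class StubNet:
    """Stand-in for a trained neural net back-end."""

    def predict(self, query):
        return f"net's best guess for {query!r}"

    def retrain(self, examples):
        print(f"retraining on {len(examples)} corrected examples")


class HybridAgent:
    def __init__(self, rule_engine, net):
        self.rules = rule_engine
        self.net = net
        self.corrections = []

    def decide(self, query):
        answer = self.rules.answer(query)
        return answer if answer is not None else self.net.predict(query)

    def feedback(self, query, answer, worked):
        # Adopt answers that worked; queue the ones that didn't for retraining.
        if not worked:
            self.corrections.append((query, answer))
            self.net.retrain(self.corrections)


agent = HybridAgent(RuleEngine([(lambda q: "2 + 2" in q, 4)]), StubNet())
print(agent.decide("what is 2 + 2?"))               # handled by the rules
print(agent.decide("is this a picture of a cat?"))  # deferred to the net
agent.feedback("is this a picture of a cat?", "net's best guess", worked=False)
```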

All of this starts to tie back into the heart of questions about AI leading all the way back to Searle: does the Alpha Star agent actually understand how to play Starcraft II? There's no semantics to a neural net. You could take those agents and hook them up to something that is, say, trying to run a factory, and if the weights were correct the system could do that just as well (and people have indeed taken neural nets trained for one specific task and had them perform entirely different tasks and noted that they can more or less work). So what does the agent actually understand about Starcraft II itself? Does it know what the units are and what they mean? It doesn't have to, as it doesn't really encode that information in the neural net itself. If you don't have the semantics, do you really understand anything at all? With Searle's Chinese Room, most will agree, at least, that the person inside the room is not doing anything intelligent by simply taking in a symbol, looking up the answer, and passing it back out. That person doesn't understand Chinese. What people say the error is with the thought experiment is assuming that the room itself can't understand, or couldn't if it had the right context and information. But all of that is semantic information about meaning. Does a neural net in and of itself ever have meanings? Does the Alpha Star agent store any semantic information at all, even to the extent that an inference engine does? Having the right output doesn't guarantee meaning, especially if the same net can be used for tasks that mean something completely different. So does it have meaning? And if it doesn't, does it really understand?

These may not be questions that the creators are worried about. They may simply want to build an AI to beat humans at a video game. But these questions will obviously be raised by these things, and the answers — and attempts to answer them — are of great interest to fields like Philosophy of Mind, Cognitive Science and Psychology.


Does Great Power Bring Great Responsibility?

February 18, 2019

So the fifth essay in “Spider-Man and Philosophy” is “Does Great Power Bring Great Responsibility?” by J. Keeping. This essay examines whether Spider-Man and Uncle Ben are right when they say that “With great power comes great responsibility” and in what sense that is true, using the example of the Good Samaritan as a framework to, essentially, ask if we are morally obliged to be the Good Samaritan if we are capable of doing so.

I'm going to start by examining why Peter Parker and Uncle Ben at least seem to think that the statement is true. Keeping points out that we already know that "Ought implies can" and so perhaps we can argue that "Can implies ought". This does seem, at least in part, to be the reasoning that the Parkers are employing: the more power you have, the more you can do, and so the more you are obligated to do. However, it seems more reasonable, given at least some interpretations of the origin story, that Uncle Ben really means that the more power you have, the more you are obligated to not misuse it, to use it to help people instead of to harm them or for your own selfish reasons. This is actually fairly well-established in the Sam Raimi Spider-Man movie, where Uncle Ben gives that advice explicitly in response to Peter getting into a fight at school, and in fact explicitly says that just because you can do something doesn't mean that you should. It can even be argued that Peter's failing in that movie and in the later movies was entirely about putting selfish interests ahead of what is right or reasonable, as he lets the criminal go because the manager screwed him over — rather than out of apathy as the original comic portrays it — and chases after Sandman and the original criminal just to get revenge. So perhaps all it really means is that we always have to use whatever power we have to help others and not just ourselves, and the more power we have the more careful we have to be to not abuse that power.

Peter Parker, however, seems to believe otherwise. He sees his refusing to stop the burglar as a moral failing in and of itself, even if he only failed to act out of the apathetic “It’s not my responsibility” motive that is more common to his origin. He definitely seems to be holding himself to the principle that if he could prevent a harm and chooses not to then he has done something wrong, and has not lived up to his genuine responsibilities. Later, Keeping talks about causes and how causation is complicated, but Peter doesn’t seem to be claiming that if he didn’t prevent the harm he’d be the cause of it, but instead that he would make himself, in some sense, at least partly responsible for that harm. This, of course, seems to follow from Peter’s interpretation of Uncle Ben’s death: he could have stopped the event by stopping the criminal, he didn’t, Uncle Ben was then killed by the criminal, and so Peter is partly responsible for his uncle being killed. The cause of his uncle’s death was always the actions of the criminal, but Peter, through his inaction, is at least partly responsible for that because it was his actions that allowed that situation to come about. Again, this is clearer in the Raimi movie because Uncle Ben was only even there because of Peter: Peter lied that he was going to the library to study when he was really going to participate in a wrestling match for money, and so when Uncle Ben gave him a ride and returned to pick him up he had to park in that area where the attack happened. Moreover, he only chose to give Peter a ride because he wanted to talk to Peter about Peter’s change in behaviour after getting his powers, including that fight. Uncle Ben was placed into that position in large part because of Peter’s selfish decisions, with disastrous, though unintended, consequences.

Which, then, leads to what I think is the driving force behind Peter's adoption of the strong form of "With great power comes great responsibility": consequentialism. He takes on an equally strong form of consequentialism that argues that you are responsible for the consequences of your actions regardless of what your intentions were. Peter didn't intend for Uncle Ben to die when he chose not to stop the criminal, but that is precisely what happened. Thus, he has to intervene to prevent harm when there's even the slightest possibility that there will be harm, because otherwise he would be partly responsible for that harm if those end up being the consequences. Sure, others might prevent those consequences or there may be other reasons why they won't happen, but Peter can't take that chance. To map that to the Good Samaritan story, if the Good Samaritan had simply walked on by as well, someone else might well have stopped and helped the victim. But if no one did, or especially if no one else came along and the victim had died, then the Good Samaritan would clearly have been at least partly responsible for his death. To Peter, the Good Samaritan would have been responsible in part, but so would the others who passed by. And so Peter refuses to ever pass by.

This fits into the main discussion underlying most of Keeping's essay: the distinction between positive and negative duties. Roughly, positive duties are duties to take specific actions, while negative duties are duties to not take specific actions. Keeping ends up arguing that we aren't morally obliged to perform our positive duties, as those are generally things that help people, while we are obligated to follow our negative duties, as the actions they forbid harm others. The problem is that, while early in the essay Keeping properly excludes duties that we have accepted, like being a parent, because they don't map to what either Peter or the Good Samaritan were doing — as neither has any particular responsibility to help those they are trying to help — those responsibilities are nevertheless positive duties, and can only be claimed to not be so if one takes the tack that Keeping later takes and argues that if not doing them would cause harm then we have to do them; but harm is what Keeping used to define negative duties. In short, taking this tack means claiming that not taking the action would cause harm, and so it would end up being a negative duty. But this move seems to prove too much; it's way too easy to redefine any positive duty as one whose neglect might cause someone some kind of harm and thus turn it into a negative duty. We can still avoid being obligated to do them by appealing to how reasonable the demand is — as Keeping does later — but at that point the distinction isn't doing much for poor Peter … especially since he would disagree that we don't have an obligation to perform our positive duties.

The issue here is that we all think that there are times when helping someone is a duty and where not harming someone is not a duty, so to divide our duties into positive and negative duties and then claim that our positive duties are optional and our negative duties mandatory seems, at a minimum, to be too shallow an analysis. What we need to do is focus in on those cases and see when we think that we are obligated to do something or not do something and when we don’t think that. This seems to have two components. The first ties directly into what Peter and Uncle Ben hold: what we are capable of doing. But the second component is the more important one, which is what it is reasonable for others to demand of us. Every time we consider what our moral duties are, we not only have to consider what we can do for others but what claims others legitimately have to our efforts. No one can demand that we sacrifice our lives for theirs, for example, unless we were directly responsible for placing them in that situation … and, perhaps, not even then. But people can indeed legitimately demand that we not kill them just because they’re in the way and we’d have to walk two more feet if we didn’t. Yes, these are extreme positions, and there’s a lot of room in-between them, but I think they get the point across: some demands are unreasonable and so can never rise to the level of duties, while some are indeed reasonable and would rise to that level.

It's pretty easy to show that directly harming someone else without exceptionally good reason falls into the "obligation" category, because it is trivial to show — or at least claim in a way that seems reasonable — that asking someone else to not harm you is a pretty reasonable request. It's also pretty easy to show that no one has an obligation to put their lives and health on the line again and again for no recompense and, to tie it back to Spider-Man, to be demonized for doing that, even if we can consider someone heroic who does so and wish that we could be that self-sacrificing. And, in fact, that it isn't really an obligation is precisely why we find it so admirable: living up to one's obligations to help others is admirable, but helping others even when one isn't obligated to do so is even more admirable.

So for the Good Samaritan, a case can be made that the demand is so minor — an inconvenience and a little bit of money — and the consequences so extreme that the Good Samaritan really should be morally obligated to help the victim. This, however, doesn’t really seem to be the case for Spider-Man. As Green Arrow commented to the original Justice League members in the Justice League cartoon, if Spider-Man wanted to quit and focus on his own life, no one could say that he hadn’t done enough to justify the move. We cannot demand that he keep sacrificing his life and his body and his relationships to help others, especially as there are other heroes out there who can take up the slack … even if we might admire him for doing so. He’s neither uniquely positioned to stop those harms nor has he taken any action that would obligate him to do so.

Peter has great power, but has merely accepted great responsibility. His great power doesn't obligate him to accept the responsibilities that he has accepted, and it is precisely his accepting them anyway that makes him the admirable superhero that he is.

Mad Genetics: The Sinister Side of Biological Mastery

February 11, 2019

So, way, way back I had skipped over “X-Istential X-Men: Jews, Supermen, and the Literature of Struggle” by Jesse Kavadlo from the “X-Men and Philosophy” collection. However, after returning to doing these I decided to revisit it and see if I couldn’t wring something out of it. On re-reading it, however, my initial decision was confirmed, for three reasons:

1) It focuses strongly on existentialist philosophy, which is not one of my interests and about which I am not an expert.

2) Its main point is to turn the X-Men into an analogy for the Jewish experience. I disagree with their interpretation of the comics and characters, but really don’t want to get into the potential morass of discussions of anti-Semitism just to disagree with them over how they interpret comic characters.

3) And outside of that, any philosophical points I was interested in making would have little if anything to do with the actual essay, making commenting on that essay quite pointless.

So, let me move on to "Mad Genetics: The Sinister Side of Biological Mastery" by Andrew Burnett. This essay uses the example of Mister Sinister to explore ideas about evolution and, particularly, ideas around eugenics and the attempt to derive values about what we should do from the impression that Darwinian evolution is about the survival of the fittest, with organisms and species in constant conflict over their survival, with life going to the victors and the losers only being rewarded with death and extinction. Burnett notes that Sinister explicitly abandons the typical moral and empathetic considerations that we normally make in service of evolving stronger mutants, and cites Herbert Spencer in saying that perhaps those things are precisely what allowed us to evolve in the first place, calling that decision into question. This is an interesting idea, but not so much one for Sinister to adopt himself, because Sinister would see himself as the uncaring and pragmatic force of evolution, but a superior one, one that chooses on rational benefit rather than simply on whatever manages to survive.

Which is where I think Burnett somewhat misses the nature of the on-again/off-again conflicts between Sinister and Apocalypse. In the X-men universe, Apocalypse represents the unthinking force of Darwinian evolution, as he pushes mutants into conflicts and declares the ones that survive the fittest and so the most deserving. Sinister, on the other hand, advocates for controlled manipulation of mutant genetics, and thus his greatest project focuses on manipulations to Scott Summers and Jean Grey. Most of the time, Sinister’s involvement with Apocalypse happens because Apocalypse’s plans risk upsetting his own, as seen in the “Age of Apocalypse” alternate reality, where he sides with and remains one of Apocalypse’s Four Horsemen and only rebels when Apocalypse’s plans threaten the Summers brothers and his work with them. In a “What If?” where Wolverine becomes Lord of the Vampires, Sinister again intervenes but only because Wolverine turning mutants into vampires upsets his plans. Sinister, then, is the careful and controlled breeder where every move is calculated to maintain and enhance the desirable qualities while minimizing and eliminating the undesirable ones, while Apocalypse is the unconcerned force of nature maneuvering the organisms into conflict and rewarding the survivors with life and the title of “The Fittest”.

So it's no wonder that they clash. Sinister would find Apocalypse's semi-random outcomes at best a waste and at worst a threat to his more considered project, while Apocalypse would argue that Sinister coddles his subjects too much and so artificially preserves those that nature would prove weaker. Ironically, each would consider the other's successes to actually be failures: Apocalypse would claim that Sinister's successes have never been properly tested, while Sinister would consider Apocalypse's successes lucky breaks that Apocalypse has no means of preserving and molding. It's no wonder, then, that the two are often in conflict.

Mind Your Ps and Qs: Power, Pleasure and the Q Continuum

February 4, 2019

So the next essay in “Star Trek and Philosophy” is “Mind Your Ps and Qs: Power, Pleasure and the Q Continuum” by Robert Arp. This is less a coherent essay and more a hodge-podge examination of issues raised by being a Q, starting from the idea that power corrupts — highlighted by the story of the Ring of Gyges — moving through to ideas of whether or not anyone could really be happy if they could easily get anything they wanted and ending on ideas of certain pleasures that are satisfying in themselves as opposed to the fleeting pleasures that not only don’t satisfy on their own but that often introduce new pains once their pleasures fade.

So the first big question to consider is whether or not power corrupts. Arp uses Q as an example of someone who has been corrupted by power, but then says that Riker, when he gets the power of the Q, wasn't corrupted, which would contradict that. The problem is that Riker had indeed started to change based on having that power, as Picard notes directly by pointing out that suddenly Riker was using his first name instead of "Captain" as would be expected of a subordinate. Picard manages to shake Riker out of that by having him try to grant his crewmates what he thought they most wanted, only to discover that they didn't really want it … or, rather, that they didn't want it that way. But for a good person there will always be the temptation to fix other people's lives just because you can see what the solution is and can give it to them, but Data most poignantly expresses the idea that it isn't worth much to have those things if you didn't achieve them on your own. Of course, there is much debate over whether saving the life of someone — the little girl, for example — when there is no other way to do so fits into the same category. As Chuck Sonnenberg opined in his review of the episode, while it might be admirable to avoid taking the easy way out, in that case there is no hard way. Is it acceptable to let someone die just because it might draw on powers that you'd rather avoid using? And it seems that, perhaps, this is how the good get corrupted by power. Either you try to fix everyone else's lives and become a tyrant or else you distance yourself from others and become completely self-interested. It's very hard to walk that tightrope when you can literally do anything and so only have the choice of fixing everything or choosing what stays wrong.

The second question is whether, if we could achieve all our desires easily, that would satisfy us. Arp missteps here by conflating achievement that is too easy with base pleasures like Romulan ale (and Arp's example from The Undiscovered Country was less about seeking pleasure and more about avoiding pain). I think there is a good point in arguing that pleasures and desires gained with no effort become unsatisfying because we lose the sense of achievement that goes along with them. That being said, I think that if we could achieve our base desires easily most of us could be content with that, as long as we could seek out other pleasures (the higher ones that Arp talks about later). And I also disagree that our base desires and pleasures are inherently unsatisfying. There may well be times when the fleeting pleasures of eating and drinking too much, even so much that we feel bad the next day, are worth the pain we experience afterwards. The issue is that if you do it too much the pleasures are no longer special, and so the pleasure we feel from them diminishes, while the pain doesn't diminish as much … or, at least, it turns misery into our base state, which makes our life overall miserable. So taking that one day every six months or a year to do the things you really shouldn't is okay and, I think, helps you have a happy life, but doing it every week makes your life miserable in the long run.

But in reading the essay, I'm not really convinced that there are, in fact, higher pleasures that are satisfying in and of themselves. Arp makes a good case that the big thing we would miss, if we could easily achieve all of our desires, is the satisfaction of working hard and achieving them ourselves. But this would seem to apply to most of the higher pleasures as well, and perhaps even more so. After all, if someone loved solving logic puzzles but could solve any one that came their way in an instant, it seems quite likely that their taste for them would fade, but as long as one does not overindulge, the taste of a tasty ice cream cone on a hot day remains for as long as we have the physical bodies that we have. All you have to do to maintain interest in the base pleasures is not overindulge so much that they become boring, but it seems that a big part of the "higher pleasures" is the pleasure of achieving them and striving to achieve them, which can be lost if one gets sufficient power to achieve them easily. This, then, might introduce another way for power to corrupt, by leaving the base pleasures as the only pleasures the powerful can still find pleasure in.

Coyne on Determinism Again

February 1, 2019

As I have commented a number of times before, Jerry Coyne appropriately can't seem to help himself wrt commenting on free will and determinism. This time he's taking on a comment by Scott Aaronson in a video, where Aaronson says that he doesn't find the determinism debate all that interesting unless it leads to us being able to predict behaviour, which Coyne disagrees with. I'm not going to touch too much on that specifically, since it's a statement about what Aaronson finds interesting, but as Coyne gets himself into some muddled philosophical positions here I'm going to focus more on them.

Starting with a comment Coyne makes talking about predictability:

Surely it will be impossible, at least in our time, to gather the requisite information to accurately predict someone’s behavior, for such a computer would have to model not just a person’s brain, but also the entire universe, for the universe impinges on a person’s brain in ways that affect their behavior.

The problem is that this is true of every single event in the universe, and in fact has to be true for Coyne's argument that our decisions and choices must be deterministic just like everything else is. And yet we don't get our computers to model the entire universe when we try to predict any other physical process, and on his view the processes in our brains are just more physical processes like those. Coyne can argue that our brain processes are so complicated as to make such prediction more difficult, but surely all that does is make predictions more complicated, not require us to track the entire universe. Predicting human behaviour would be more like predicting the weather than predicting where an object will land if you drop it off a building, but it wouldn't be something so completely different that suddenly our model would have to incorporate the entire universe to pull it off.

He also uses quantum mechanics oddly:

Further, insofar as fundamentally unpredictable quantum events may determine behavior, no machine could ever model those: at best it could give probabilities of different behaviors. But those quantum effects do not violate physical determinism, and cannot give us free will in the sense that most people think of it.

But brains and humans are not quantum phenomena, as far as we know. And the underlying premise of quantum mechanics is that it operates at the micro layer and that randomness at the micro layer can't impact events at the macro layer, which is why physics still works despite quantum indeterminacy. The best possible interpretation here is that Coyne is referencing the claim that some defenders of free will make that our free will interactions might be at the quantum level and so not determined, and so saying that if that was true it would make deterministic behaviour not predictable. But since Coyne doesn't believe that, it's a rather odd statement to make to Aaronson while trying to argue that our behaviour could be deterministic but not predictable. If Aaronson was claiming that our behaviour couldn't be deterministic because it's not predictable, it might make sense, but even in the initial comments it's clear that Coyne is aware that that's not his argument.

Coyne then moves on to reiterate some of the philosophical conclusions he thinks follow from determinism:

If we are truly biological automatons, which I think is true on first principles (viz., we are made of molecules), then that has huge implications for religious thought and dogma, which of course depend on assuming contracausal free will. You are free to choose your saviour, your faith, your actions, and, for gay Catholics, whether to commit homosexual acts. Because you make free choices, making the wrong choice will send you to perdition, and making the right one to God, Yahweh, or Allah.

The counter to this, though, is that if we don't really choose to believe in God, we also don't really choose not to believe either. So it would be inappropriate to believe that one is intellectually or morally superior for not believing in God, or to assign any actual negative properties to those who believe in God, no matter what evidence Coyne advances supporting atheism. This is one of the biggest flaws in Coyne's view: he constantly argues that we aren't responsible for our blameworthy actions but very rarely acknowledges that we aren't responsible for our praiseworthy ones either, and so just as we deserve no blame for our bad actions we deserve no praise for our good ones. We can neither change for the better nor for the worse. And if Coyne tries to argue that praise can motivate people to do better, then a) that also applies to blame and even punishments and b) he'd be arguing for maintaining those concepts and behaviours for the same reason Dennett occasionally advances for arguing for the existence of free will, which is reasoning Coyne strongly derides. So this is an implication that Coyne rarely seems to consider.

There are ramifications for the justice system. I firmly believe that if we grasped that nobody, including criminals, has a “choice” in whether or not to do something, like mugging someone, we would structure the justice system differently, concentrating less on retribution and more on keeping baddies out of society, trying to reform them, and using punishment as a deterrent to improve society.

The problem with this is that when we consider what rehabilitation would mean, we can see that there is sense in dividing things up by "choice", and specifically by appealing to the reasons people commit the crime. A kleptomaniac steals because they have an overwhelming desire to, even when they don't want to and in fact desperately want to not steal that item. You can also have someone who steals because their family is starving and will die otherwise. And you can have someone who steals because they consider it an easy way to get what they want and they don't care that it belongs to someone else. You cannot treat these three groups the same if you want to rehabilitate them. For the first, you need to cure or control that desire. For the person with the starving family, if you make it so that their family won't be starving they won't steal anymore. And for the third, you want to make stealing not be easy for them, through punishments. How do you suss this out without appealing to the reasons they have for the choices they make? But hard determinism insists that reasons don't matter, which is where the compatibilists win out by accepting that reasons do matter and do make a difference, even as they are determined.

And, of course, you can decide that we shouldn’t have a justice system based on rehabilitation without having to be a hard determinist. All you have to do is make its main purpose be protecting society, and all of Coyne’s points here follow.

There's another implication to consider as well: if no one is responsible for their actions because they don't choose them, then every action they take was determined. They never commit any crime out of malice, so what reason do we have for locking them up? Their action was determined by the specific conditions in that place and at that time — by the entire universe, remember? — and so we have no reason to think that they will ever repeat the action in different circumstances. We determine that now by appealing to their reasons, but their reasons aren't what drive their decisions on a hard deterministic viewpoint. We can, of course, identify patterns and use those to justify the action, but then what do we do if we identify a pattern that cannot be rehabilitated, someone that we want to put in jail for life without parole? If they cannot be rehabilitated, why keep them alive? Why not simply give them the death penalty instead? Sure, opponents of the death penalty can argue that if we get it wrong we can set someone free but can't unexecute someone, but surely there are some criminals where we know that they are guilty of the crimes and cannot be rehabilitated. So it seems like Coyne's hard deterministic view has to support the death penalty, which I believe Coyne does not support. Another implication that he doesn't seem to have considered.

There are ramifications for politics. Once you realize that people’s acts solely reflect the physical consequences of their genetic endowment and environment, you (or at least I!) become more sympathetic to the plight of those who drew a bad hand in the poker game of life. The notion of the “Just World”, in which people get what they deserve, depends on accepting contracausal free will. But that view must be tempered by realizing that neither the successful nor the downtrodden freely chose their paths.

Except it doesn't. The "Just World Fallacy" depends on the idea that the world itself will reward those who are good and punish those who are evil, and concludes from that that if someone is doing well then they must be good and if someone is doing poorly they must be evil. This does not seem to be true of our world, but it is clear that we don't need free will to assert that such a world exists. However, the idea that the world should be like that, and so that we should reward those who do good and punish those who are evil, does depend on free will, and more specifically on us actually deserving good or bad consequences for our good or bad actions. Coyne's view undercuts the whole idea that we ever deserve anything, and so undercuts any rationale we could have for rewarding good deeds and punishing bad ones … beyond a simple "This is the behaviour that I like to promote" idea. Again, another implication that he doesn't seem to have considered.

It’s a common flaw of hard determinists that they tend to try to maintain the concepts that follow from free will when they make sense to them but deny them when they don’t, but then argue that this follows from their view of determinism. While I don’t think that compatibilism is correct, at least they tend to try to maintain all the concepts and so end up with a more consistent viewpoint. I have yet to see a hard determinist position that was truly consistent, even as I know that it can be done. Hard determinists just don’t, in general, like the implications of a consistent hard determinist position. Jerry Coyne does not seem to be an exception.

An Aspiring Jedi’s Handbook of Virtue

January 28, 2019

The next essay in “Star Wars and Philosophy” is “An Aspiring Jedi’s Handbook of Virtue” by Judith Barad. In it, she compares the Jedi to Plato’s Guardians, and also makes references to his famous cave analogy, placing the Jedi firmly in the Platonic framework. For the most part, the comparisons work, although using Yoda training Luke to balance things is a bit shaky. The Jedi could indeed be Plato’s virtuous warriors, although to what extent they are Platonic or Stoic is an open question.

I’m going to take on her discussions of emotion, however. Plato, the Stoics and the Jedi all want to restrict emotion in ways that we would consider harsh. Barad wants to use Aristotle’s idea of balance to moderate these views a bit, to allow for things like righteous anger or compassion — as an emotion, not as a virtue — to influence our decision-making. In both cases, her overall argument is that these things can work well and are even necessary for us to act virtuously, falling back on the common arguments that we need righteous anger and compassion to motivate us to do good and act virtuously. She says that compassion is why we care about everyone and not just ourselves, and that righteous anger can lead to just action.

In the Star Wars universe, anger, righteous or not, rarely leads to good actions, and rarely does so for long. The issue when it comes to justice is that just actions are actions that are determined not by how anyone feels about them, but instead by what is truly just or not. It’s a rational determination, and neither anger nor compassion facilitates the determination of what is just. A heinous action may make you feel angry, and even justifiably so, but that doesn’t change what the actual just action is. You might feel compassion for someone who has done a wrong, but that doesn’t change what the just response to them is. This is because the just action will always take into account all relevant factors. If the factors that made you feel compassionate towards them are relevant to what the just action is, then the just action will already take them into account. So the determination of the just action should be identical if all of those factors are relevant. It’s only when the factors aren’t relevant that they would differ. It’s the same thing for righteous anger. If the factors enraging you are relevant, dispassionate justice would take them into account. It’s only when they aren’t that the assessments would differ.

And the problem with emotion is indeed made pretty clear in Star Wars: it encourages you to do things that, later, you regret doing. Anakin, in a rage, attacked Padme and gravely injured her because he thought he had been betrayed, when in reality he hadn't been at all. He also slaughtered the entire Sand People village, which he later at least somewhat regretted. His love for Padme, combined with his fear of losing her, led him to choose Palpatine over Windu. If he had paused to consider his actions and gather all the facts, he probably would have chosen something else. Emotions are quicker and easier routes to acting at least somewhat virtuously, but they are also seductive. Once you start accepting them as the judgements of what is right or wrong, then they will constantly seduce you into following their recommendations … even if those recommendations are incorrect. You can't fix that by trying to balance them, because balance is a dispassionate assessment, and righteous anger and compassion are not dispassionate.

If you're going to create warriors with the power of life and death over others, depending only on their own judgement, you don't want that judgement clouded, and righteous anger and compassion can cloud your judgement. This doesn't mean that you don't show concern for others, but that you don't let that concern override what you know is right, or encourage you to do what's wrong.

Game Association

January 23, 2019

I haven't talked about a video from Extra Credits for a while, so let me look at a recent one today. The video is about "The Catharsis of Doing", and talks about how games by their nature get us to actually do things, which then can affect us in different ways than simply watching a movie or reading a book can. This, of course, is at its base rather obvious. Chuck Sonnenberg, for example, talks about how Dragon Age Origins showing you the impacts of your choices as your army heads out for the final battle (in part 10) makes that incredibly epic, pointing out that if you see the dwarves marching that means that you chose to not save the Forge and allow the creation of golems instead, and that the mages being there instead of Templars means that you managed to save the mages. So, it is definitely the case that you doing things does make things different than just passively watching other people do it. But then that raised a question for me: to what extent are you, the player, actually doing it?

Because most of the examples in the video, and even Chuck's example, rely heavily on the player associating themselves strongly with the character they are playing, so much so that they really see themselves to be the character in the game. If a game is going to make you feel regret for the choice you made, then it's going to have to be the case that the character is you and not a character you are playing. If you feel frustrated over not being able to get over a hurdle in a game, or feel like a success because you did, then that game and game session are going to have to become a crucial part of your life. And if you feel good for making the choices that led to the army that you have, then again it's going to be you, as the player, who decided that, and not your character.

This way of thinking, I now realize, is rather foreign to me, because I tend to play not as myself but as a character. I played 9 or 10 characters in The Old Republic, none of whom were me in any way (despite a friend making that assumption when I showed him my first character, which I was trying to play as Corran Horn). Even in games where the character is closer to me — for example, where I use my own name — I'm not really me. I might try to make decisions as if I were me, but in general I'm always asking myself what my character would do, not what I would do.

Sure, when I'm just playing a game and focusing on the gameplay, then failing at it feels like my own failure, and that can impact my mood. But even then, since games are supposed to be an escape from the world, if I'm feeling frustrated I know very well to avoid playing games that will frustrate me more, and it is far more often the case that frustrations in the real world will make me less able to tolerate frustration in a game than that frustration in a game will add to my frustrations in the real world, because it's only a game, after all. And it's hard for me to feel regret in a game because it's never me doing it, but instead my character doing it. For example, one of my TOR characters was a Michael Garibaldi ex-pat — the brother of my Sith Warrior — who started out in the Empire, got drummed out of the military for drunkenness and then went to the Republic as a Smuggler. At one point he had a choice in a quest to side with either an attractive Sith or an attractive Jedi. He didn't have any real loyalty to either side, and spent his time flirting madly with both of them. When it became clear that the Jedi wasn't going to offer him a tryst as a reward but that the Sith was, he sided with the Sith and killed the Jedi. Now, this is a pretty despicable thing to do, but I didn't feel any guilt or regret over doing it, because I didn't do it. That character might have regretted it later when he reformed, but I didn't.

For me, in general, when I take an action in a game I’m either doing it as a character in a game, or am doing it as part of the game itself. I might take an evil action just to see what happens if I do — like killing a romanced Carth at the end of Knights of the Old Republic as a Dark Side character — or else to get a mechanical advantage in the game. But I don’t strongly associate myself with characters in a game, whether RPGs or other games. They are not me, and I am not them.

But I'm starting to realize that for many people this is not the case. From the complaints about not being "represented" in games to this video, it seems to me that for many people their escapism isn't into a story of another world or of something that is not them, but is in fact into themselves. They might be trying to escape from their life into a world where they can have a better life, not into a world that isn't their life. So if that life doesn't go the way they'd like it to, when they do things in that life that don't align with their view of themselves and their morality, when the character in that life simply can't be them for whatever reason, then the illusion of that being a separate and better life is shattered and their escapism and any kind of catharsis from it is lost.

The thing is, we know that we can have escapism without having to make that sort of strong association. Books, movies and TV shows, as the video points out, don’t allow for that and yet have always been excellent escapist media. By allowing the player to strongly associate themselves with the characters in the game, games allow for a different type of escapism, but I’m not sure that that sort of escapism is a good thing. It seems to me that the negatives pointed out in the video follow precisely from that sort of association, and yet if we, as they advise, try to remember that it’s just a game then the positive forms of catharsis that they talk about are likely going to be lost as well. Unless you think of your character as you, you will not get the “good” kinds of catharsis from your character doing things or achieving things, but once you do make that association you’ll also get the “bad” kinds of catharsis from your character doing things you wouldn’t or failing to achieve things. You can’t have one without the other.

I think this ties back into the "assumed empathy" that I talked about last week: people perhaps having less and less ability, or less and less desire, to associate themselves emotionally with people who are not them or not like them. This encourages them, instead of relating to the character, to make themselves the character and relate to the game and plot and emotional resonances that way. I don't think this is a good thing, because it risks taking away the fun of the game, the fun of doing things that you wouldn't do normally and in fact have little interest in doing, just for the heck of it. It also seems to me to make the outcomes of the game carry far too much importance. For me, my interest in finishing the Persona games had nothing to do with the games or my life in them, but instead came from the external commitment I had made to do so. So when I couldn't quite finish Persona, or abandoned Persona 2, that was a personal failure not because my character who is me failed, but because I didn't achieve a personal goal of mine. But I could be consoled by considering that in deciding to abandon them I had taken into account all of my desires and goals and capabilities and decided what was more important to me, and could make plans to do it later. That's because it was all me as me, and the details of the game itself were completely separate from that; the goals of the characters were not important goals for me as me because _I_ wasn't doing anything in the game itself. Only the characters were.

As I said, associating strongly with the character in a game is a foreign concept to me, so I don’t know if my impression of how those who do it do it is accurate. But if it is that way, then a failure in a game or a perma-death of a character could be devastating to people who feel that their lives are ruined because of it. That can’t be healthy.

Sympathy for the Devils: Free Will

January 21, 2019

So, here it is, Monday again, and I was pondering what to post given that Monday is actually one of my regular posting days (I’ve started adding posts Tuesday and Thursday to take care of the backlog). Sure, curling was on this weekend and so I could have cheated and made that post my Monday post, but that seemed unsatisfying for some reason. But the issue is that I’ve pretty much finished talking about Doctor Who, which is what was filling up the Monday posts before this. Before that, I was posting about the cheap horror movies, but now I post them on Tuesdays, and don’t want to mess with that because I tend to need a quick post for Tuesdays and those posts are pretty quick. I’d post about other movies or the like that I’ve been watching, but I don’t have anything new this week. So, what should I post this week?

And then I remembered my old “Philosophy in Popular Culture” tag. I had decided that I wanted to post more philosophy this year, and had decided that part of that was continuing that series, as it has lagged for quite some time now. Over three years, in fact. Those posts are philosophical and generally relatively quick, so it seemed like a good fit … especially since we got quite a bit of snow over the weekend and so on Sunday, when I was writing this, I was going to have to shovel it and probably wasn’t going to want to do anything else after that.

So, the essay I’m going to look at is from a new book “Dungeons and Dragons and Philosophy”. I had bought this a while ago — probably close to the time I stopped writing these posts — and had decided not to read the book but instead to read it essay by essay when I commented on it. That means that I’ve owned it for likely years and haven’t read anything but the first essay. I may reconsider my stance on reading it. But, anyway, I had read this essay originally and meant to talk about it, never did, and am now returning to it.

The essay is "Sympathy for the Devils: Free Will" by Greg Littmann, and it's an examination of free will in the context of D&D, especially in light of the Evil and Always Evil aligned races. Unfortunately, while this is the most interesting question — can we really say that races that are "Always Evil" really have a choice in being evil? — he addresses that specific question at the beginning and then segues into standard hard determinist arguments against free will, which are both less interesting and more problematic. So I'll leave the specific problem of "Always Evil" races to the end, and talk about his other points first.

After using the thought experiment of a cleric who slides down a trap and collides with the rogue, and having the rogue determine that the cleric had taken an "evil" action in attacking their fellow party member, attempting to show that such blame was unfair (in an attempt to get us to see that calling the evil races evil for something they also have no choice over is equally unfair), Littmann immediately segues into an argument that we are all, in fact, physically determined and so have no choice either. The immediate problem here is that saying that eliminates that entire thought experiment, as we should then consider the cleric to have acted just as wrongly by sliding into the rogue due to the laws of physics as by making an actual decision to attack the rogue. So the rogue's actions, by that logic, shouldn't be seen as unfair, and so, by extension, neither should the actions of the party in attacking evil creatures. This, then, should cause us to doubt whether we should have sympathy for the devils at all; perhaps the most reasonable response is simply to treat any action that harms others, except as direct retribution or as an attempt to prevent harm, as evil and react accordingly, even in the thought experiment with the cleric. More philosophically, we can note that the difference between the cleric falling and the cleric attacking is a fact about the internal state of the cleric, specifically about intentions. If intentions and determinism are not compatible, then his thought experiment doesn't work under the deterministic view that he purports to hold.

Which leads to where he goes next, which is to talk about compatibilism. He invents another thought experiment to attack it: imagine that a succubus has "seduced" the fighter and made him attack the party, and the party, once he is freed, decides to execute the fighter in response because the fighter attacked the party. Using the definition of compatibilism that says that it is about acting on your desires, he says that holding the fighter responsible seems unfair, but since all desires are equally determined we are all in that situation all the time, so it isn't fair to hold us responsible either. The problem is that, again, he bases this on an idea that forces us to distinguish cases by the internal motivations of the person, and even worse it is one that compatibilists — the group he is going after — have already taken into account. Most compatibilists who deal with decision-making argue that a free choice is one that is made by a person's decision-making capacity when that capacity is functioning properly. If the succubus is exerting exceptional influence over the fighter's mind, then the fighter's decision-making capacities are either not involved in the decision or are not functioning properly. Thus, he shouldn't be held accountable for attacking the party. But if he had decided to attack them based on fully functioning capacities, then he should be held responsible for his actions. Littmann's example relies on us having this distinction in mind, applying what we think we should do for what we consider "free" decisions to the case that we don't think is "free", pointing out how unfair that would be, and then trying to apply that back to the "free" choice, arguing that the "free" choice is no more free than that one. But we could just as easily decide that we were wrong to think that in the case that was not "free" it would be unfair to "punish" them. Littmann gives no reason for us to think that that solution is any less reasonable than the one he advocates.

This only gets worse when he makes the same sort of argument that Jerry Coyne does about the consequences of his arguments: we cannot justify retribution, but instead can only justify causing suffering on the basis of rehabilitation or of preventing others from causing suffering. Putting aside that changing our thoughts on the matter requires our internal thought-producing systems to not be deterministic, and so to be “free”, which is what he had to deny to make his other arguments work, the question we can ask here is: why not? Is there any meaningful concept of evil without people being able to have meaningful intentions, for good or evil? If we can’t claim that the evil races are evil because they aren’t free, then how would it be in any way evil to kill them for the harms they cause or, in fact, for any reason we want? Why should we think that we are only “allowed” to cause suffering if it will prevent suffering? Sure, that was how we defined good and evil before, but that definition, as Littmann argues, is determined by and embroiled in our idea that choices are free and that intentions matter. If our choices aren’t free and intentions don’t matter, then what we do and why we do it don’t matter either. So why stick with the outdated idea of morality that Littmann rejects or, rather, why should we reject the things he rejects and yet accept the things that he wants to accept?

I’ll skip the discussion of how things being random doesn’t save free will, but will now backtrack a bit and talk about his idea that, following on from Einstein, the future is fixed and so we can’t have any kind of meaningful free will. Compatibilists who tie free will to decision-making processes, of course, don’t have an issue here, but even without that this isn’t as clear as Littmann thinks it is. I like to use this thought experiment: imagine that I have a time machine, and I go into the future, observe the outcome of your free choice, and then return. Even if that outcome was thereby necessitated to happen, would that automatically make it not a free choice? Well, no, it would seem that by definition it would still be a free choice, because my simply coming to know what that choice was going to be had no causal impact on your decision-making process. Even under determinism, there is no way that my coming to know the outcome could causally impact that process, and so it can’t impact that process and that choice one way or the other. And this would seem to hold for free choices as well; there is no causal mechanism that could impact them. You could reply that my thought experiment isn’t possible if we have free will, as the future could never be determined beforehand, but this gets into complicated ideas about time travel that are too long to get into here. Suffice it to say, his argument isn’t as clear as he thinks it is.

It doesn’t help that he himself provides a counter-example to his own argument: that of randomness. He implies that randomness is still randomness under such a system, and that there’d still merely be a fact about what that random process spit out. Well, then why wouldn’t the same thing be true of free choices: the choices are free, but there’s a fact of the matter about what free choice was made. So that argument doesn’t really do for him what he hopes it will.

So let’s return to the more interesting example: what do we do about creatures who by their nature are evil and so don’t really get to choose whether to act evil or not? Well, there are two basic ways to look at them. First, we can see them as having evil desires by nature that predispose them to act evil. As long as it is possible for them to act on other desires and so not act evil, then we can rightly punish them for acting evil. However, if they are literally incapable of acting on any other desires or of acting in anything other than an evil fashion, then they seem more like a force of nature than like a person, and so we might be able to justify killing them simply on the basis that it is not possible for them to do otherwise and we want them to stop harming other people. So the stronger Littmann’s case is that they cannot freely choose whether to do evil, the more we should be inclined to say that we should take dramatic steps to prevent them from committing evil, which would then justify the “Detect and kill” paladin: anything that comes up as “Likely to harm others” — the only meaningful notion of “Evil” that Littmann allows — should just be killed out of hand to prevent that harm. As they are not free, there are no other options. But don’t blame us for it: we aren’t free either.

Arguments against free will tend to rely on free will presumptions to make their case, and Littmann’s essay is no exception. And one of the main issues with hard determinism, the issue that is responsible for the existence and possibility of compatibilism, is that hard determinist conclusions, if held to strictly, seem so counter-intuitive that they simply cannot be true. But if hard determinists weaken their conclusions, then they seem like compatibilists who simply don’t want to admit it, which is pretty much true of Coyne much of the time. Free will, then, seems far more complicated than most hard determinists will allow.

Pondering Normativity

January 18, 2019

So, I’ve been engaged in a long discussion with Coel about morality in the comments over at his blog. As part of that discussion, though, I’ve started pondering the concept of normativity more, mostly because Coel’s subjectivism comes at least in part from a skepticism of normativity, or at least a skepticism that we can build any kind of morality that can have that sort of normative power. That being said, he’s not really clear about what he thinks is required for normativity or, at times, even what normativity is, saying in the same comment that he has no idea what normativity is while still insisting that some views of morality can’t be properly normative.

Now, from my side, the issue with this is that I’ve never really pondered normativity in-depth. I’ve taken the loose definitions and examples of it that I came across in philosophy and just gone with them, preferring to focus more on the details of the various theories with the presumption that normativity was at least mostly understood and agreed upon, with only niggling differences to be resolved if they became relevant. And generally in philosophical circles that’s pretty accurate. This was the first time I had encountered a situation where what it meant for something to be normative was both critically important to the discussion and also the subject of a major disconnect or confusion. This led me to realize that I, in fact, didn’t really know what it meant to be normative. Sure, I had the common examples and a rough understanding, but I had never examined it in any great depth.

So, I started pondering it. And some things jumped out at me that I thought informative, and so I thought maybe I should make a post about it. Remember, these are musings, and so not completely thought out yet, but I should be able to come up with a coherent account of what I’m talking about here without too many major missteps.

Anyway, to start talking about normativity, it’s probably best to talk about the is-ought problem, which is essentially how we understand the distinction between the descriptive and the normative. And the simplest and most bare-bones way of describing it seems to me to be this: you can’t look at something — a thing, an event, a situation, an idea, a statement, etc. — and determine that the way it is now is how it should be. In morality, this is typically expressed as saying that you can’t look at how people act when they say they are acting morally, or at what people think it means to be moral, and from those facts alone conclude that that is what it means to be moral. But I submit that there’s another way to view this, through another example that I use a lot when talking about normativity: you can’t look at something and determine from the properties it has that those are the properties it is supposed to have. The example I use for this is that you can’t look at my front steps and determine from that alone that they’re supposed to be that way.

What’s interesting about this example is that it makes clear that there are two senses in which we can interpret “supposed to be that way”. The first is the most obvious one, which is that you can’t determine from how the steps ended up what my intentions were for them. I might have intended them to be far better built, for example, but my lack of skill made them, well, rather crappy. So is that loose railing intentional, or is it an accident that I couldn’t fix? You can’t answer this question in any way by saying that it is loose, so I must have intended that it be loose. To make an argument here, you’d have to use a bridging argument, by perhaps saying that I left it like that and was skilled enough to fix it, so that must have been my intention. But this goes beyond the simple facts about the steps and railing and into further facts about me, including my skills and intentions.

But there’s another sense in which you can ask if it’s the way it’s supposed to be, and that’s more of a classification issue: I call these front steps, but are they actually front steps, or should they be considered something else? If, say, the wood was so thin that no one could ever step on them, do they count as steps? If they are at the side of the house, are they still properly front steps? We don’t normally worry about this sort of thing for steps — because even if they aren’t technically steps or front steps no one really cares — but we can see how this might be important for something like a moral code, or a scientific theory. For those, we don’t really care what the intention of the person was, but instead care more about whether they’ve managed to come up with anything like a moral code or scientific theory at all.

So I think we have two related but distinct normativities to consider here. The first is Intentional or Goal-Oriented Normativity, where we judge how something ought to be by the intentions involved: what goal were you trying to achieve or what intention did you have in doing that thing? The second is Conceptual Normativity: what is the thing, really?

In my discussions with Coel, it seems that we keep tripping over this distinction without realizing it. Coel’s entire view of morality, and of normativity in general, is to relate it specifically to goals, and to insist that we can only judge a moral statement by appealing to the goals of the agents. Thus, for morality to be normative it has to relate directly to the goals and desires of the agent, and thus it is subjective, because any kind of normativity without intentions or goals is pointless. My response was about the conceptual form, talking about what morality really is, and dismissing some of his arguments on the basis that he was really talking about something other than morality. So, for the most part, he was worrying about whether the front steps were built the way the builder intended, and I was wondering whether they counted as front steps at all. He would get frustrated with me for niggling over semantics, and I’d get frustrated with him for trying to draw conclusions about things in general before establishing that his examples were even examples of the things we were talking about. (Of course, there were more disagreements, but this is probably a good summary of the issues over normativity.)

This is why he couldn’t understand why my view of morality as conceptual truth was normative, and why I kept asking him to tell me how to figure out what was or wasn’t moral. His insistence that it was all about the intentions or goals of the agent didn’t in any way help me determine whether we were even talking about morality, and my conceptual truth angle didn’t reference goals at all. In short, I was answering Goal-Oriented Normativity with Conceptual Normativity, and vice versa, and so neither of us could make any real progress.

So let me break this down further, not just into an is-ought distinction, but into an is-should-ought distinction. “Is”, of course, refers to how things are. “Should” refers to how things should be given the intentions of an agent. “Ought” refers to how things must be if the thing is going to be an actual member of the class that it purports to belong to. To use Richard Carrier’s “surgeon” example: if we look at a surgeon and note that he doesn’t disinfect his hands, we can say that he should do that to increase his patients’ survival rate if he intends to be a surgeon and/or a successful one, but if he never cuts into a patient we can say that he ought to, or else it seems ridiculous to say that he’s a surgeon at all. Disinfecting their hands is something that all good or ideal surgeons do, but you could be a surgeon who doesn’t. You can’t be a surgeon at all if, by definition, you never cut into a patient and so never actually do surgery. This allows us to say that the Repo Man in “Repo! The Genetic Opera” is a surgeon and performs surgery, while noting that he really should take more care to, you know, keep his patients alive. He fulfills all the oughts, but not all the shoulds.

This, I think, also explains both why I reject moral motivationism — the idea that once we understand what it means to be moral we will be motivated to do moral things — and why people see rejecting it as such a concern. I see the issue as one of Conceptual Normativity, but Conceptual Normativity doesn’t motivate you to act on it in any way. One can understand what it means to be moral without having any motivation to act on it. But since we consider the two normativities to be the same, the idea is that if I say “You ought to do X” with moral normativity, I am appealing both to the definition of what it means for something to be moral and to the claim that it fits your intentions or goals. But perhaps I can redefine those statements, so that Conceptual Normativity says “In order to be moral, you ought to do X” while Goal-Oriented Normativity says “If you want/intend to be moral, you should do X”. Of course, all Conceptual Normative claims have to generate true statements in the Goal-Oriented Normativity context; if you want to be moral, you’re going to have to do the things that are required to be moral. But this allows us to split out the things that would merely be desirable for you to do if you wanted to be moral, or that you need to do to develop into a moral person, and also to understand that the Goal-Oriented statements will be false for someone who does not want to be moral and has no intention of being so, while the Conceptual Normative claims about morality remain true: in order to be moral, you ought to do X, but if you don’t want to be moral then you don’t have to.

The Goal-Oriented Normative claims, then, are always personal, specific, and tied to your own situation and desires. Still, given all the facts about that situation, there are always right answers to those sorts of normative claims. The Conceptual Normative claims are always true and always normatively true … but they are true of the class or concept itself, not of any individual person. In short, all properly moral persons ought to do those things, and anyone who doesn’t is not properly moral, but whether any particular person ought to be properly moral can only be determined by determining if “You should be a properly moral person” is a true Goal-Oriented Normative claim for them.

Why I think that the Conceptual Normative claim is more important here is simply this: you can’t decide if you should be properly moral until you know what it means to be properly moral. This is what undermines both Coel’s and Richard Carrier’s views, because since they don’t recognize conceptual normativity they make an implicit claim about what it really means to be properly moral that they never demonstrate. Both Coel and Carrier use Goal-Oriented Normativity to oppose the Conceptual Normativity claims of their opponents — Coel directly, Carrier by saying that if people have something that would satisfy them more then that would be their base principle — but as we can see here that’s not a valid move. Just as you can’t get from what you want to be true to what the facts of the world are, you can’t get from what you want to do to the conceptual facts either. No matter how happy you are with them, if you didn’t build front steps you didn’t build front steps, and no matter how happy you are with your life that doesn’t mean that it’s a moral life you’re leading.

So first we need to decide what it means to be moral and then we can decide if we want to be it. And to end on a cliffhanger that I might never take up again, I admit that in the discussions with Coel I am wondering if there can be any rational reason to want to be moral beyond simply valuing the moral for its own sake.

Should I Boycott Ideological Entertainment?

January 17, 2019

So I’ve been talking a bit about ideologically infused entertainment this week, talking about Doctor Who becoming Social Justice Oriented and a bit about how the Persona games, in general, aren’t. Recently, I came across a post at Vox Populi talking about Marvel inserting a drag queen into one of its comics, with reactions, especially in the comments, calling for a boycott of Marvel. This raises a question: you’ve found that either a new work you were considering buying or an existing series is or has become ideologically infused, and in particular infused with an ideology that you don’t agree with (whether that’s Left, Right, Front, Back or whatever). What should you do? Should you boycott it?

The first thing to think about is whether or not it really is ideologically infused. If you just look at this specific Marvel example, that’s not really enough to conclude that it is. Drag queens as characters aren’t uncommon. After all, Persona 5 includes one, and we wouldn’t call that game ideologically infused. The important thing to remember is that while the notion that all media is ideologically infused (or political) is just plain wrong, creators have their own views and biases, and sometimes, no matter how careful they are, those views will bleed through. Just because a work positively expresses an idea you dislike or denigrates an idea you like doesn’t mean that it’s pushing that idea as an ideology. It might just be a creator unconsciously including an idea that they hold and you don’t. It doesn’t seem reasonable to stop consuming a piece of entertainment media because its creators happen to hold different ideas than you do, or at least that’s not reasonable for me.

Now, people will protest that in the Marvel example they’ve done plenty to prove that they are, in general, ideologically infused, which isn’t an unfair complaint. So, what do you do then? Well, what we need to consider here is that the worst ideologically infused works are essentially deliberate propaganda: they are works designed to present a specific view and encourage you to adopt it. And what I think, for me, is that I shouldn’t boycott propaganda works for being propaganda works, but instead should judge them just like I’d judge any other work of entertainment: Are they entertaining or not? If I’m being entertained by them regardless, then I don’t see any reason to stop consuming them. And if I’m not being entertained by them, then the boycott problem solves itself.

I have two main justifications for this:

1) Most works that are deliberately ideologically infused aren’t very entertaining anyway. So the very worst of the lot will take care of themselves.

2) If I recognize that something is just propaganda, it’s not likely to impact my actual thinking. In fact, once I recognize the views that it’s trying to promote, I’m actually quite likely to spend my time arguing against them rather than giving in. So there seems little risk of the propaganda having its intended effect on me, so I can indeed treat it like any other form of entertainment.

Now, the objection will arise here that if I and others still buy it, then the companies will continue to produce it. If we don’t like ideologically infused media — and I don’t — then the only way to make people stop producing it is to vote with our dollars and not support those attempts. For me, my counter is that if it’s entertaining, then it is fulfilling the purpose of entertaining, and so is worth my dollars. I don’t feel the need to vote with my dollars for things other than “entertaining” when it comes to my entertainment.

But this is one of those things that is actually subjective. If you don’t like something that a company does and want to stop giving it your money, knock yourself out. We all have our own desires and principles and lines we won’t cross. For me, though, when it comes to entertainment, my only line is whether it entertains. I don’t want to put more thought than that into my entertainment. If you do, then that’s fine, but you don’t really have an argument saying that I shouldn’t.

If I have to put too much effort into filtering my entertainment media, then all I’m going to do is retreat to the things I already have and already like. Ultimately, this is what will kill ideologically infused media. The more work buying entertainment media and being entertained becomes, the more people will find other ways to be entertained … and ideological infusion of entertainment media always adds more work, both in buying it and consuming it.