Archive for December, 2019

End of Fluid Time …

December 31, 2019

After tomorrow I’m going to at least roughly conform to the new schedule in terms of priorities, along with doing the normal things that I’d need to do anyway, so this is the end of the fluid time between Christmas and New Year’s where I had no set priorities and didn’t even have to follow the schedule I worked out for my vacation. How — or rather what — did I do?

I managed to get my Sith Marauder to the point where he acquires Jaesa Willsaam, which was the goal I had set. I had played through with her as Light Side, and so wanted to get her as a Dark Side character and romantic interest. I have more to play through with that, but that won’t happen until the New Year. That … was pretty much all I did when it came to games, which is disappointing. I did try playing Spellcasting 201 on Christmas Day but got stuck in places where I had to wait through the lectures, and while the first one was mildly entertaining the second one was boring, and I was not in the mood to do that, so I dropped the game. I’m not sure when or if I’ll pick it up.

I did get through my Star Wars marathon. I was supposed to marathon the Lord of the Rings movies today but a huge list of things I was supposed to do and some inclement weather that required clean up put that one off. I did watch “The Fellowship of the Ring” while copying CDs and am likely to watch the other two movies at times when I want to just watch a movie, so I will likely get through them this week, just not all in one day.

I also obviously managed to burn most of my CDs to my USB drive, although I still think I’m missing one or two. Still, it’s perfect for listening to at work, which was the actual point.

For reading, I’m making great progress on the X-Wing books, and am enjoying them immensely.

I did fill some of my time in with watching horror movies. I’m through the “Nightmare on Elm Street” movies and started the “Leprechaun” series, so expect more posts on horror movies over the next while.

Tomorrow, I set my priorities for the year and when I can do them. Since I’m always thinking, I have some ideas on how it will go, but things might change when I sit down and actually organize them.

“Be Mindful of the Living Force”

December 30, 2019

The next essay in “Star Wars and Philosophy” is “Be Mindful of the Living Force”: Environmental Ethics in Star Wars by Elizabeth F. Cooke. Environmental Ethics is not one of my interests or specialties, but as it turns out the most interesting part of it is what Cooke gets wrong about the Star Wars universe: how she characterizes Chewbacca and, through him, the non-human races/species of the universe.

Essentially, Cooke wants to talk about how animals are treated in the Star Wars universe, but then draws a distinction between human-like and animal-like creatures and aliens. And she categorizes Chewbacca as … animal-like. Except despite having some more animal-like tendencies — he seems to like to be petted at times, for example — Chewie is clearly a sentient being, if one who might have some primitive or savage or superstitious traits. Yes, Chewie seems overly frightened by the creature in the garbage disposal after they escape it, and yes, it is at least implied that Chewie will take the rather uncivilized tactic of losing his temper and tearing the arms off his opponents if he loses at the game (although that might be more of a threat than something he actually does), but he also flies the ship — you don’t think that Han went to get Luke to fire at the TIEs because he wanted to leave Leia piloting it, did you? — and fixes the ship, has a loyalty bond in exchange for what Han has done for him, and is able to consciously transfer it to Leia at Han’s request. Chewie is clearly sentient, even if he doesn’t act in precisely the same way as humans. So in what sense does he get the “animal-like” designation rather than “human-like”? From the perspective of ethics in general, and even of environmental ethics, animals are associated with beings that are at least not fully sentient and are incapable of becoming so.

I’ve just been re-reading the X-Wing series of novels, and in “The Krytos Trap” Borsk Fey’lya accuses the human members of the Provisional Council of human-centric thinking: defining what is normal by what humans do and then distinguishing the other races on that basis. Cooke seems to do that here, looking at the differences between Chewie’s behaviour and that of humans and using that to decide that he is therefore more animal-like than human-like, even making the association with a dog that Spaceballs made. Except even there Barf is a sentient being with some dog-like traits, which are there used more for humour but from a science fiction standpoint would be evidence of a different evolutionary path. Wookiees specifically are an interesting mix: a technologically capable sentient race that, nevertheless, is still often primitive and has primitive — and yes, potentially animal-like — traits. But they clearly aren’t animals. Chewie’s flying the ship, plotting hyperspace jumps, and fixing both it and 3PO is not merely aping Han or some kind of idiot savant behaviour. He is capable and intelligent. It’s only if you expect intelligent beings to act like humans that you can argue that he’s more animal-like than human-like.

This also carries over, it seems, into Cooke’s assessment of the Ewoks. She argues that there is no genuine evil on Endor, while excusing the fact that the Ewoks were going to cook and eat Luke, Han and Chewie. Her argument is this:

In Star Wars there’s a big difference between violence done out of duty or necessity (the Jedi and the Ewoks, respectively) and violence done out of anger or greed (Anakin slaughtering the Sand People in revenge and the bounty hunters, respectively).

But this mischaracterizes most of what goes on. While anger is seen as a bad thing for Jedi in general, what makes Anakin’s slaughter of the Sand People so morally bad is that he kills all of them, not just the ones who were actually responsible for the death of his mother. It was not only done out of anger, but was also hugely disproportionate. Additionally, the real moral turning point for Anakin is probably his killing of the younglings, and yet he would clearly argue that he did that out of duty to his Sith Master and not out of anger. He didn’t want revenge on them or hate them in any way, and yet not only was that still wrong, it was, in fact, paradigmatically wrong. Also, it can be argued that in the Star Wars universe bounty hunters aren’t actually evil or unethical at all, as some of the Expanded Universe sources explore. They can be seen as evil because they are targeting the heroes, but to them it’s clearly just another job, as Boba Fett makes clear. Even as representatives of the criminal underworld, a number of the people they capture or kill will be people who clearly deserve it, so they can be seen as people who take on a job that strikes the good and the evil alike. And, as noted earlier, the EU explored this by having bounty hunters who focused on hunting down evil people and criminals, not good people.

But there’s a huge problem with the Ewoks, as they didn’t insist on eating Luke, Han and Chewie out of necessity. It’s not the case that that was the only food they had and they would starve without it. They had food for that day and even for the final party. They are explicit that they are cooking them in preparation for a feast for their new god, Threepio. They continued in this despite it being clear that their god didn’t actually want them to do so, and more damningly despite the fact that they knew that their captives were sentient beings. So they were willing to cook and eat sentient beings simply to satisfy a religious obligation that, in fact, they also knew they didn’t really have. It’s only the threat of violence that gets them to stop. I can’t consider the Ewoks in any way ethical for being willing to do so, and anyone concerned at all about animal welfare — and Cooke gives arguments for treating animals well in the essay — should be very uncomfortable with such a situation, as most people who are willing to kill animals for food — even when it isn’t necessary — at least tend to argue that it’s okay because the animals aren’t sentient, an option not available to the Ewoks.

Cooke seems to have a hard time treating the non-human species like sentient beings. Chewie’s non-human traits get her to consider him animal-like instead of sentient, and the Ewoks not being human gets them a pass for behaviour that we wouldn’t accept from human-like creatures, even primitive and savage ones. This is the danger of defining intelligence and sentience by human behaviour alone, and not by traits that could apply across species … which is something that environmental ethics is likely to want to do.

Carrier’s Goal Theory (Part 2)

December 27, 2019

So, continuing on with my critique of Richard Carrier’s Goal Theory approach started from here. I went through seven threads last time, and here I’ll go through another ten threads that he’s identified from that debate.

We’re going to be looking at some thought experiments, though, and Carrier spends a lot of time trying to show how they don’t work against his theory. But later, he says this:

There is a similar point to make regarding the fact that “perfect knowledge entails knowing all loopholes” (comparable to the supermagical pill case I discussed in section (2b) above), i.e. someone with perfect knowledge (like a god, say) could navigate all social consequences so as to lie and cheat etc. precisely when nothing bad would result. But I would argue that anyone actually possessed of that knowledge would be morally right to exploit all those loopholes, it’s just there aren’t actually as many such loopholes to exploit as we think …

The thing is, what the thought experiments are trying to get at is just that admission: that if someone could do something that we would generally consider hugely immoral, but could do so in such a way that the downsides — including the societal ones — would be outweighed by the benefits, then that act is not only morally permissible but morally demanded. In short, someone who could take advantage of those loopholes would be morally obliged to do so, or else would be acting immorally. The best defense Carrier can muster is indeed “It doesn’t happen as often as people think”, but the objection to these sorts of Enlightened Egoist ideas is that they do indeed allow for such cases, and it seems contrary to any notion of morality to say that we can do horrifically immoral things if they benefit us enough and/or we can get away with it. This is of course the weakest sort of objection, because Carrier can indeed bite the bullet and declare that, nevertheless, that’s what is moral and that it follows from his proof of the foundational principle of morality (in his case, pursuing the most satisfying life). But Carrier spends a lot of time in the thought experiments denying this only to ultimately admit it, making the defenses not particularly useful for him.

Except for one point: Carrier needs to refute those cases because in general he’s trying to justify an overarching rule in a moral system that is aimed at maximizing the happiness and satisfaction of an individual. As such, he needs it to be the case that we don’t break those rules on general principle and don’t stop to think about whether each individual situation works or not. If we can show that there are a sufficient number of cases where we’d at least have to think about it, then there is no point in following the rule and so Carrier cannot say that his analysis shows that we would never engage in war if we were properly informed. Which, then, brings us to the first point:

(1b) Total Knowledge vs. Limited Knowledge. In our debate there was a digression on the murkiness of war and whether one could ever be in a situation in which it would be genuinely right for both sides to continue fighting, i.e. with both sides acting perfectly morally with all the same correct information. I considered that a red herring, that one should stick with the example I started with, that of slavery, because I use it as an explicit example for analysis in my chapter on GT in The End of Christianity. But war would succumb to the same analysis. The result is that two sides acting on true moral facts with the same information would stop fighting. Therefore in war, always one side is in some way wrong (or both sides are)–which means either morally wrong, or wrong about the facts.

Carrier likes using examples from fiction, so let’s explore that by using examples that you might find in the game “Civilization”. First, let’s presume that each society’s main goal is to preserve itself, not to uphold some sort of general “get along with everyone” idea. Next, imagine that you have two societies living on an island. They are expanding, and can see that as they expand they will both require more resources. At some point, they will be directly competing for the same resources, and so will either have to take them from the other civilization, or else at the very least halt their growth or perhaps even have to contract. They don’t see any technology that would allow them to colonize other areas and might not even know such areas exist. Could they not, then, be justified in going to war with each other and trying to wipe the other one out? After all, the consequences of not doing so might be their destruction anyway. You can argue that even with the war they would run out of resources eventually, but there’s far more hope for something to happen in the extra time that the extra territory will give them.

But as it turns out, it’s easier to justify a war than Carrier thinks. We can presume that any society that is being attacked by another is always morally justified in defending itself from that attack. So we don’t need to justify both sides in the war, but only the aggressor. So imagine this case: a society has just come off a long war against an aggressor. Due to that, it currently has a significant army and has had to focus its attention on martial advancements rather than on social and cultural ones. There is another society nearby that hasn’t had to face a war, and so has focused on social and cultural advancements. The second society’s head start in those areas means that over the long haul it will be able to outcompete the first society and so will eventually, through that, convert that society into its own. But it doesn’t have advanced military technology or a strong army. Over time, however, the first society’s army will degrade and the second society will gain stronger military technology and potentially even a stronger military force. Given that it is likely that in the long run the second society will impose itself on the first, and that the first currently has a huge military advantage that it could use to prevent that, wouldn’t it seem a reasonable move for it to invade? And then reasonable for the second society to defend itself? Carrier can’t reply that they can’t know that those social and cultural advantages will ultimately end in the loss of their society, because to make his view work he has to rely on us acting based on what is probable or likely, and it’s certainly possible to construct a case where it is more likely than not that the cultural or social advantages will be overwhelming.

Now, Carrier might be able to find arguments where there might be a workable compromise. That’s not the point. The point is that the only way he’s going to be able to do that is not by appealing to a general rule, but to specifics that say that, in this case, it still isn’t going to work. But we’d still have to consider it. The same thing applies to his “slavemaster” example: you only don’t keep slaves because in the long run it isn’t worth it to do so. But if it ever was, then you should. So if you could get away with it, or if it benefited you enough, you should do it. And this would have to be decided on an individual basis, not as a general rule. The only case where it would be decided generally is when you go up to the next level. For slaves, that’s the level of a society, where it decides whether or not to make keeping slaves legal. For war, that would be the level of a United Nations. But all this would do at the level below it — the individual or the societal level in these cases — is introduce extra “drags” on the decision, through either the risk of legal punishment or alliances/peacekeeping forces that will respond if an invasion is undertaken. All it does is make it more difficult to find cases where it is justified. But, ultimately, we’re still justifying it individually and not as a general rule. If you would benefit from keeping slaves or benefit from warring, that’s what you’re morally obligated to do.

In Q&A I gave an analogy from surgery and knowledge thereof, and the difference between doing the best you know while recognizing there are things you don’t know, such that if you knew them you’d be doing something different. There is a difference between the “best way to perform a surgery that I know” and the “best way to perform a surgery” full stop, which you might not yet know, but you still know you would replace the one with the other as soon as you know it’s better. Ditto the war scenario.

This raises the issue of what the moral status of an action is if it isn’t the perfect ideal but is the ideal given the information the person had at the time. Above, Carrier says that if you don’t have all the facts, what you did was immoral. Here and later, he wants to imply that you aren’t really acting immorally, to avoid morality requiring perfect knowledge. He kinda hedges on this:

In the limited knowledge situation, both sides can be warranted in believing they are right, and thus not be culpable, but still even then one of them must actually be wrong and just unaware of it, and would want to be aware of it if there were any means to be.

But the problem still remains: are they actually acting immorally or not? Carrier implies that they aren’t culpable, but that they’re still wrong. But being wrong can only have meaning here if morally we are supposed to avoid being wrong, and it can’t have that connotation if, in any case where we have limited knowledge, we wouldn’t condemn someone for actions taken in that state. So either we can reasonably condemn it or we can’t. If we can, then being properly moral is always going to require perfect knowledge; we will all be “sinners” under Carrier’s view and always incapable of being properly moral. If we can’t, then its being wrong in those cases is irrelevant to its overall moral status, and so the “total knowledge” case is not relevant to our moral reasoning, since we can’t ever be in that situation. Carrier needs the total knowledge cases to make rules, but then has to abandon them because no individual can actually work outside of the limited knowledge case. And since ought implies can, if we would need to act on total knowledge to be moral but never can, then we can’t be moral, and so no one can demand that we be, or call us immoral if we aren’t.

He also makes an error in claiming that all moral systems face the issue of our not having all the facts about what the actual outcome will be. First, this only holds for consequentialist moralities. Any intentionalist one does not have this problem, because the morality of an action is determined by the intent when it is taken, not by the ultimate outcome. So if there are facts that someone didn’t know that, had they known them, would have led them to take another action with a “better” result, that doesn’t and can’t impact the moral status of their action. The morality of their action is decided by what they intended, not by what happened. Even a consequentialist morality like Utilitarianism can dodge this by leaning on “ought implies can” and noting that the morally right action is the one that maximizes utility based on what the person actually knows at the time, instead of the one that maximizes utility generically. It’s not clear that Carrier can make that move, since in order to make some actions immoral he has to rely on the judgment that we, with perfect knowledge and perfect reasoning, would make. Utilitarianism says “Maximize utility for all”, but doesn’t have to specify that there even is a perfectly objective action in any given case that would do so. Carrier has to argue that there always is a perfectly objective moral action that maximizes the individual’s happiness, and that that action is the only one that is moral. That makes this problem much more significant for Carrier than for pretty much any other moral system.

(2b) The Magic Pill Challenge. The “magic pill” objection, ever popular in metaethical debate, also came up in Q&A, and I think my responses might not have been sufficiently clear. The gist of the question is this: what if there is a pill that, if you take it, a thousand people suffer and die horribly but you get a thousand years of a fully satisfying life and no downside, and (this is key) the pill also erases all your memory and knowledge of those other thousand people, so you never even know the cost of your good outcome. I answered, in effect, that you’d have to be evil to knowingly take such a pill, thus by definition it would always be immoral to do so. Thus the desire for greater satisfaction, in that moment, would prevent you taking the pill, because though you would forget later, you know now what the cost is, and it would not satisfy you to pursue your own happiness at that cost. Indeed, anyone who had already cultivated their compassion in order to reap the internal and external benefits to their own life satisfaction, as one ought to do (because it maximizes life satisfaction to do so, i.e. all else being equal, it is more satisfying to live in that state than not), would be incapable of choosing the pill.

Carrier here says that you would have to be evil to take the pill, and that by definition that would make it immoral. This implies that he is using evil in the moral sense. He denies this later:

Here by the term “evil” I mean simply malignant (thus I imply no circularity: see Sense and Goodness without God, V.2.2.7, pp. 337-39). Malignancy (“being evil”) is then immoral on GT because having a malignant personality does not maximize life satisfaction, but in fact greatly depresses it (in comparative terms).

But then this couldn’t be true by definition; it would instead be an empirical result. So if this is the case, then taking the magic pill can’t be immoral by definition. So we’d have to examine the “malignant personality” case, but here it is clear that you don’t have to have an overall malignant personality — presumably, one that wants to hurt others — to take the pill. All you need is a calculating one, where the only concern you have for others is precisely how they impact your own satisfaction. Given that, what you would do is look to see whether taking the magic pill will improve your satisfaction more than the downsides of taking it would depress it. And since by definition there are no downsides to taking it, it seems pretty clear that the analysis would work out in favour of taking it. So the only defense Carrier can have here is that calculating what would give you the most personal satisfaction regardless of others is not consistent with his philosophy. Since that seems to be the definition of his philosophy, that would be a hard sell.

Just ask anyone which person they would rather be right now: (A) a mass murderer without a memory of it, or (B) someone with reasonably accurate memories of who they are and what they’ve done. Would it disturb them to find out they were (A)? Yes. So why would they choose to be that person? Indeed, when would they ever?

Carrier can only make this an obvious choice by removing the reasons for committing the mass murder in the first place: the fact that doing so will guarantee you a satisfying life with no downsides. First, it is entirely reasonable to say that if the only thing preventing you from having a satisfying life is one particular memory, and the downsides of removing it are less than the downsides of keeping it, then the rational thing to do under Carrier’s view is to remove it. How can it be otherwise? Some nebulous “We must maintain our rationality!” rule cannot trump the fact that in an individual case it would work out better to remove that memory. Otherwise, Carrier would be dumping the whole empirical and reality-based aspect of his morality. Second, if they accepted Carrier’s view and so accepted that the choice was “Be someone who sacrificed a thousand people to gain personal satisfaction forever” or “Be someone who didn’t”, then the choice isn’t going to be as obvious. It’s only by casting it as “Mass murderer!” that Carrier gets the emotional push to make us eschew it. But it isn’t clear that under Carrier’s view that “mass murderer” wouldn’t be an ideally moral person. Carrier makes the common mistake of using existing morality to determine what is right or wrong instead of seeing what follows from his own view: this sort of thing seems clearly morally wrong, so Carrier contorts himself into explaining how it is still morally wrong.

That’s why the question is, what sort of person would you rather be right now? (A) or (B)? Which would it satisfy you more to be? If (B), then you will be more satisfied being (B) every single moment. Because the same question, asked at any moment, gets the same answer, thus all your moments added up gives you no greater satisfaction in (A) than in (B). And this extends even after death. By the same logic: would you rather be (A) or cease to exist? I would rather not exist (because I would not wish that “evil me” on the world).

But who says that a person who would do that is “evil”? And what do these personal emotions have to do with morality as per Carrier? Either it would work out better or it wouldn’t. Carrier appeals in a number of cases to the idea that your conscience wouldn’t tolerate it, but if the only thing stopping you from doing something is your conscience, then under Carrier’s view you should ignore your conscience and retrain it. For example, imagine a couple that wants to become polyamorous and has worked out rationally that doing so will be better for them, but every time they try they feel guilty from — erroneously — considering it adultery, and so can’t enjoy it. Will Carrier really say that being polyamorous is just plain morally wrong for them, or rather that they need to reconsider the facts and eliminate their erroneous belief? The same thing could apply here: someone could have the erroneous belief that taking the pill is “mass murder” and morally wrong, and once they had eliminated that belief, they wouldn’t have the conscience issues either. Which leads to this:

To make this dilemma more visceral: if some just persons come to kill you to avenge those thousand people you killed, would you honestly disagree with their right to kill you? You could not rationally do so (because you would agree with them–indeed, you might even ignorantly be helping them find the culprit).

Well, first, by definition those people couldn’t come to kill you for that, because that would be a negative consequence and the pill eliminates all such negative consequences. But more seriously, would a just person really be coming to kill you to avenge those people? In Carrier’s view, “just” can only be defined relative to his moral system, so if the reasoning works out, no one can object to someone taking the action that will indeed provide them with the most personal satisfaction. Killing them for it would be unjust, as we would be condemning someone for acting properly morally. So this objection only works if Carrier has already established that the act is morally wrong, but he’s using the objection to demonstrate that, which obviously isn’t going to work. See what I mean about him having to contort things to make them work?

Thus something Ben once said to me about an amped up version of the “magic pill” example was spot on–in which one massively alters every conceivable thing about yourself and the universe so as to produce a “get-out-of-every-conceivable-negative-outcome” scenario. It’s the reason why all these “magic pill” examples are lame: if you have to completely change reality to get a different result, then that actually proves my point. My conclusions follow from reality as it is, and even these “magic pill”-proposing objectors agree with me on that. In fact they agree with me so inescapably that the only way they can escape my conclusion is by completely changing reality. (And I do believe I said something to that effect in Q&A.)

As stated above, the point is to get him to accept that horrific actions would be moral if they benefited someone enough and they could avoid enough of the negative consequences. So if you had such a pill, it would be moral to take it and kill 1000 people for your personal satisfaction. To say that this isn’t realistic runs into the issue that Carrier accepts that it would be moral, and we’re just negotiating the price that would justify sacrificing 1000 people for our own happiness.

(3b) What If Rape Is Fun? One thing that came up in McKay’s responses was something like, “What if rape is fun?”

The rapist “thrill seeker” scenario is actually answered by a good episode of Law & Order: Criminal Intent (Blink): in the end a psychopath breaks down, realizing his fearless thrill-seeking was going to destroy the woman he loved, and that in consequence he couldn’t experience the joys of a real loving relationship with a woman at all, and thus was better off in jail–a tale reminding us to consider not just the more evident consequences of a behavior or disposition, but all the things it deprives you of as well.

Again, this runs into the problem of refusing to consider individual cases and instead asserting general rules derived from very specific ones. If someone liked rape enough and could eliminate enough of the negative consequences, then even under Carrier’s view they would be morally justified in doing it. That’s something Carrier cannot merely explain away. He can bite the bullet and note that it’s not going to be justified as often as we think, but instead he spends a lot of time trying to avoid these consequences of his moral view.

Ben similarly proposed a scifi example of the character named Fran in an ep of Stargate Atlantis (Be All My Sins Remember’d), in which she was engineered to be happy suiciding herself to destroy a whole population. Of course that’s a perfect reason never to make one of those. See my remark on AI in the endnote in the TEC (p. 428 n. 44). There I gave the Dark Star and 2010 films as exemplifying a similar problem.

Nevertheless, objectively speaking, Fran will be missing out on a lot (temporally and qualitatively) and thus can be objectively described as defective–e.g. if she could choose between being Fran and being Unfran, an otherwise exactly identical being who saves a civilization of people instead of killing them and is just as happy doing so, which then would she choose? This brings us to the law of unintended consequences: if she would really see no difference between those two options, then she must lack compassion, which is a state of character that has a whole galaxy of deprivation consequences on her happiness and which cannot fail to mess up her behavior in every other life endeavor. Thus Fran, as conceived in that ep, is logically impossible, unless her compassion and other virtues were intact, in which case she would prefer to be Unfran, when rational and informed (and thus to convince her of this, you need to make her rational and informed).

Carrier can justify not making a Fran here, as she would be deprived, but the question is "What should Fran do?". If that desire really is what would make her the most satisfied and she's biologically constrained to think that way, then it would seem that under Carrier's view that's what she should attempt to do. Carrier, obviously, can't have his morality ignore biological realities. Given that biological reality, Fran can never be convinced that she should be Unfran because she can't be Unfran. Carrier's view cannot demand that someone like things or be satisfied by things that they can't be satisfied by. Carrier can turn around and call her insane, but this leads to issues with differences between individuals. It cannot be the case that someone counts as insane simply because they can't be satisfied by what Carrier considers the ultimate set of values, since Carrier thinks that that set has to be determined empirically anyway. Fran might be constrained by people trying to stop her from blowing herself up and so have to be careful about it, but it can never be immoral for her to seek out the one thing that would give her life satisfaction.

(4b) Why Would Morality Evolve to Maximize Happiness? McKay asked at some point in the debate why on GT should morality just happen to optimize happiness. On GT the behavior that optimizes happiness is the definition of what is moral, so his question is analytically meaningless. Possibly here McKay was confusing different definitions of happiness (see point 6 to follow), replacing satisfaction maximization with “joy” or “pleasure” or something, even though I specifically warned against that mistake in my opening.

If happiness isn’t something like joy or pleasure for Carrier, then what is it? We tend to determine our satisfaction — at least hedonistically — by appealing to those, and sacrifice short-term ones for long-term ones when we apply rationality. If Carrier is going to argue against that, then he’ll run into the same problems that non-hedonistic philosophies — Kant, the Stoics, etc — run into: why sacrifice hedonistic values for Carrier’s? Kant and the Stoics, at least, don’t appeal to satisfaction to justify their morality but Carrier does. So how does he untie that knot?

But McKay also confused the fact that our desires are products of evolution with the claim (which I never made) that we ought to behave as we evolved to (a claim easily refuted by pointing out how self-defeating that ultimately becomes).

The problem with this is that Carrier will later talk about how rape doesn’t actually increase fitness and so was selected out, but by his own view that’s not relevant. Anything in our evolved sense of morality is something that we have to examine under his view and see if it meets his principle, especially if we want to “get ahead of evolution” as he argues later. So our evolutionary path and evolved morality is irrelevant if he really has the right view here. We need to examine based on his view, not on our evolved senses. He makes the same mistake many people who rely on evolution do: forgetting that as soon as we argue that evolved senses can be wrong, we have to judge all evolved senses against the principle, making our evolved senses irrelevant.

The proof of this is in earthquake building codes: evolution doesn’t teach animals to make houses to withstand earthquakes; we had to develop a way of thinking correctly that bypassed our errors in order to reach that incredible maximization of DRS; so, too, CPR (cardiopulmonary resuscitation); for example: you’re not likely to “evolve” a knowledge of CPR, yet a knowledge of it increases DRS. Likewise the technology of morality, which is just the same thing: a technology for maximizing our ability to exploit the best scenarios achievable according to Game Theory, which bypasses our evolved error-making. Even though evolution has moved us in that direction already, just as in all these other areas, it still has a lot farther to go, and technology is what we use to fill that gap.

The problem is that all of these things follow from general reasoning, not specific evolved senses. Building codes just are technology. So is CPR. So, then, likely is morality.

(5b) The Role of Rationality in Defining Truth. Related to that point is where again I think McKay confuses criteria of truth (what makes a proposition about the facts of the world true) with criteria of persuasion (what convinces someone that a proposition is true).

This is, of course, true, but if Carrier cannot convince people that his view of morality is correct while appealing to personal satisfaction then he might have some empirical issues to work out. More importantly, since many of the objections are going to be “This doesn’t seem like morality” he’s going to need to find a way to settle that before we can get into the details of the facts.

Hence when I say moral facts follow from what a rationally informed person would conclude, I am not saying one has to be perfectly rational to be moral, but that moral facts are what rationally follows from the facts of the world (whether we or anyone ever recognize this). That is, just as the true facts of physics are “what we would conclude if we were adequately informed and reasoned from that information with logical validity” so the true facts of morality are “what we would conclude if we were adequately informed and reasoned from that information with logical validity.” Getting people to believe such conclusions is a completely unrelated problem; likewise getting people to behave consistently with their beliefs.

But if we aren’t capable of acting on Carrier’s view, then we would void “Ought implies can” and so won’t have a morality that can make sense. Carrier, then, needs to establish that we, in general, can accept and act on his moral view, and responses like these don’t, in fact, get him at all to there.

Likewise the question “what if our evil desires are simply overwhelming,” which essentially sets up a fork between sanity and insanity. There are certainly insane people who can’t conform their desires or behavior to rational beliefs, even when they have them; or who are incapable of forming rational beliefs, regardless of what technologies and “software patches” we equip them with. Hence my point in my TEC chapter: it is no defect of any moral theory that madmen cannot be persuaded of it.

The question, though, is what happens if we are all "madmen", not whether or not there are some (and also how we would tell who the madmen are and who aren't).

Of course, even apart from that, we would know that an omniscient neighbor could and would exploit all loopholes, so our behavior would change accordingly, which would create an arms race scenario that is unlikely to go well for any libertine omniscient neighbor–which an omniscient person would in turn know. So how would such a person prevent the extreme restrictions on their liberty and happiness that would result from the extreme self-protective measures taken by everyone else? By being more consistently moral. Or ending up dead. It’s obvious which is the better choice.

But, of course, that only holds if the neighbour will take actions that are bad enough for us to justify the extreme restrictions. But an omniscient neighbour will know precisely what they can get away with to avoid that. So that will never actually happen. So this doesn't defeat the charge that he accepted: under his view, if it benefits you enough and you can avoid enough negative consequences, you can shaft other people. He needs to accept this, as he stated, instead of constantly trying to dodge around it.

(6b) But Isn’t the Pursuit of Happiness at Odds with Morality? The question kept being asked why should we want happiness above all things? Maybe there’s something we want more?

I don’t think this is the actual question. Either it’s “Maybe there’s something that we ought to want more” or else the action question in bold which is the idea that morality is there to put constraints on our seeking of personal satisfaction, so that it should never be the case that we choose to commit what we consider to be a morally horrific action because it works out to give us a better life otherwise.

Thus when we rephrase the matter, McKay’s objection dissolves: a mother aiming to sacrifice herself to save her daughter (TEC p. 350) will always choose the option that most satisfies her. The only time she will not is (1) when she does so uninformed (realizing after the fact, say, that she would have been happier the other way around), but that’s precisely why the truth follows from what would have been the informed decision (and her acknowledging this after the fact proves the point that it was indeed the correct decision and her actual decision was in error–not necessarily a moral error, but certainly at least an error of ignorance, since had she known the actual consequences she would have chosen differently, thereby demonstrating what the actually correct decision was) or (2) when she does so irrationally (deducing how to behave by a logically invalid inference from the facts), but no correct decision can ever result from an irrational decision making process (so her decision in that case is by definition incorrect, unless she lights upon the correct decision purely by accident, as when a fallacious argument just “by accident” produces a true conclusion, but then of course her decision won’t have been incorrect–she then will be more satisfied in result of it than she would have been otherwise).

The objection is not whether someone will do that, but whether under Carrier’s view anyone ought to do that. Ought that satisfy her? If Carrier can’t justify that under his view, then she is wrong and can only be acting on it on the basis of being irrational or ill-informed. And Carrier only asserts it, and never proves it.

(7b) What About Desiring Results That We Ourselves Will Never See or Enjoy? McKay gave an example of desiring his kids’ future college education and claimed this desire didn’t benefit his happiness in any way, indeed he might not even live to see the fruit of his labors toward it. But that’s not a correct analysis. In that scenario we have two desires to choose from: to desire (a) our children’s future college educations, or (b) not. Which desire will satisfy us the more to have, and to act on, and to fulfill? If we desire (a) more, then by definition (a) satisfies us more; whereas if (b) would satisfy us more, we would want (b) more, and thus even DU would entail we should want (b) and not (a); there can therefore be no logically valid objection to GT here. Any DU theorist must accept the exact same conclusion as GT produces in this case.

The reality is that the having of a desire itself brings us pleasure and satisfaction. It is not all about seeing the outcome.

The same objection applies here. Carrier appeals to us actually doing it now, but never justifies it under his view. At the extreme, this runs into an argument of “If it satisfies you, then do it!”. But since Carrier is arguing that even some things that we do feel satisfy us are wrong if we were fully-informed, then he needs to do a total analysis at the level of fully-informed and fully-rational, which is not what he does here.

(8b) What If We Only Have Dissatisfying Choices? Another issue regards outcomes that cannot be achieved, i.e. sometimes you just can’t maximize your satisfaction or must live with a dissatisfying result. But in every case the correct act is the one that produces the greater satisfaction of all available choices.

Carrier’s totally right here. It’s about best choices available, which may mean one that sets us back if that’s the one that sets us back the least.

(9b) Avoiding Circular Rebuttals. Several times McKay used the common fallacy of assuming a conclusion a priori and then judging GT by the fact that it doesn’t produce that conclusion. On why that’s a fallacy see my TEC chapter (p. 350, with pp. 343-47). There are many “question begging” objections to moral theories like this, which are perversely popular among philosophers, who of all people ought to know better. McKay repeatedly assumes a priori that a decision he describes is the correct one, and then finds GT doesn’t entail it, and then he concludes GT is false. That’s circular reasoning. If GT is true, and if it is true that GT really doesn’t entail that “expected” result, then McKay is simply wrong (that, for instance, self-sacrifice is moral). So he then can’t use his erroneous belief (e.g. that self sacrifice is moral) to demonstrate GT is false.

What McKay would be doing is taking something that seems obvious morally, arguing that Carrier's view can't come to the same conclusion, and then using that against Carrier. Again, this is a weak approach because Carrier can easily bite the bullet, but Carrier a) doesn't seem to want to do that much of the time and b) at other times does bite the bullet, making the objection pointless either way.

(10b) Wouldn’t GT Destroy Individuality? Finally, McKay argues that GT would destroy individuality by requiring everyone to behave exactly the same. But this confuses covering laws with instantiations.

In theory, there is room for individuality in Carrier's GT. The problem is with the insistence that the morally right thing is the thing done with perfect knowledge and rationality, combined with Carrier feeling free to criticize the individual preferences of others based on that, combined finally with the fact that for GT to work our desires must be rational and fully-informed as well. As I said, there's room for it, but Carrier really needs a way to shake that out that doesn't just boil down to us being immoral and insane if we like and dislike different things than Carrier does.

There’s another section here that I’ll pick up another time.

Thoughts on “The Complete Sherlock Holmes”

December 26, 2019

So, yesterday, I commented that I had finished reading “The Complete Sherlock Holmes”. I had read that collection at least once and probably twice before, but as part of reading classic works I wanted to read it again. It was also interesting because of a couple of other things that had been going on around that time, including reading a puzzle book themed on Sherlock Holmes.

And the post talking about that raised some interesting points in the comments. First, from natewinchester:

Funny, because I was just reading up on the “clueless mystery” tropes the other day:

And Holmes’ stories apparently fell under that style back in the day. Apparently the “fair play” type of mystery stories didn’t become popular until after his era.

Responded to by malcolmthecynic:

The Holmes stories are less mysteries and more adventure stories, for the most part.

After re-reading them, I have to agree with malcolmthecynic. The stories can be a bit unfair — there are times when Holmes will go off on his own to investigate something and will only tell us the results as he’s revealing the crime — but even then you could figure out what was going on as soon as Holmes reveals the details. Still, sometimes he reveals the villain beforehand. But what really makes them more mystery adventures is the fact that in many of them Holmes only identifies the basic details — who is doing it and what their very basic motivation is — and then much of the story is that person explaining the backstory of how they got there (generally only for sympathetic “villains”, of course). This will take up at least half the story and is generally the most interesting and dramatic part of the story. For the most part, the Holmes stories seem to be aiming at doing two things: showing off the deductions that Holmes is famous for (some of which might be a bit dubious) and building dramatic and adventurous stories around those deductions.

I also commented before that Holmes should deal with his issues of an idling mind by taking up philosophy. It turns out that he actually does that when he retires, combining it with beekeeping. Which is an interesting little note.

Anyway, I still did enjoy reading them, and will probably read them again.

The Tradition Still Continues …

December 25, 2019

Merry Christmas and Happy New Year to the reader of this blog.

Don’t you mean “the readers”?

Nope, WordPress still says it’s pretty much just the one.

Something Beginning with “E”

December 24, 2019

Even more thoughts on Babylon 5 the series, the movies, and Crusade.

Season 5 gave JMS the time to do a lot of things, including doing a long series of wrap-ups of all the character arcs and giving them all farewells. This is technically a good thing, but it’s also a bit of a bad thing since, well, it seems to be dragged out. And I was binging on them, and so don’t know how it would seem if you were watching one episode a week. I suspect it would be worse, but it might be better as you wouldn’t have the impression that this was what they did last week. The last scene made me regret that they didn’t continue it because it showed essentially the replacements for the originals, which might have been interesting.

Lochley wasn’t really served well by Season 5, her prominent role in the movies, and being added to Crusade. While it made sense for her to be in the movies, she didn’t really need to be a prominent character in Crusade and her interactions with Gideon relied on us caring about her from Babylon 5, which since she was only in about half the episodes — at least prominently — wasn’t likely to happen. So adding her took away from other characters that might have been more interesting, especially developing the romance plot that we probably didn’t need in Season 1 of Crusade.

As I commented before, lots of people dislike Season 1 of Babylon 5, but I'd put it up against the first seasons of pretty much any other science fiction show. I can't think of one that did it better. It's better than TNG's and DS9's, and the less said about the other Star Trek series the better. Firefly might be an example — but it's a better comparison to Crusade, and I think they line up fairly well, although Firefly probably does win by focusing on the good characters more — and maybe the first season of the modern Doctor Who. I unfortunately can't say the same about Season 5, which justifies my usual decision to skip it.

I found the movies underwhelming. I think I had seen Thirdspace at some point, but didn't remember it and fell asleep during it this time, and it didn't really impress me while I was awake. I had seen "The River of Souls" at some point, but it was only memorable for the fact that Lochley's image was being used in the holo-brothel and was popular with women and not men. The main problem with it is that it focuses on the Soul Hunters, who aren't particularly popular, and even those who wanted to learn more about them will be disappointed by how dogmatic the movie makes them: unwilling to accept that they could possibly make a mistake, and refusing even to check whether they had because of that belief. There's a reason the technomage Galen worked out so much better. And "A Call to Arms" was a decent set-up to Crusade but wasn't that thrilling in and of itself.

Now, Crusade. I binged it all in one day — at least in part to get it out of the way — and I think my main theory stands: Crusade is okay at worst when Galen is in the episode and might rise to the level of okay when it isn't … and most of the time when it is more than "Meh" it's because of B5 characters or actors. One reason for this is that Galen is probably the perfect character for a show like this. He can provide humour through arrogant superiority without annoying us that much because, well, he is superior. He can use those superior abilities to get them out of trouble, but in the second episode they make it clear that there are both practical and moral reasons that he might not be able to do so at times. If his established abilities would ruin a plot, it's easy to simply have him go away for a while since it's established that he does that frequently. He can easily provide exposition but also be able to say that he doesn't know something. And for all his superiority, he's also very flawed in a number of ways that make him interesting. Another reason is that his plots tend to tie into the exploring-oddities-and-mysteries plots — as that's what he's best suited for — and those are the most interesting plots. And, finally, Peter Woodward does an excellent job with the role.

To be honest, I’d like to see a crossover with him and, well, any of the modern Doctor Whos, since they have similar traits. Eccleston’s Doctor would be interesting since they look fairly alike and both have anger issues, but Galen would puncture Capaldi’s arrogance and find Smith’s childish wonder amusing.

Overall, Crusade is worth watching when I watch Babylon 5, and the three movies listed and Season 5 aren’t. This leads to an obvious conclusion, I think …

Vacation Status …

December 23, 2019

So, I’ve been on vacation for about three weeks and as Christmas is pretty much here things are going to change between now and New Years Day (when things will change again), so it’s a good time to stop and look at what I managed to get done in that time.

As per usual, DVDs and TV shows/movies are doing the best. In fact, they did so well that it actually caused a problem for me, as I finished Charmed and Babylon 5/Crusade so quickly that I had to try to find something else to watch, which didn’t work out that well as I tried the original Battlestar Galactica series (which petered out because the episodes are arranged eccentrically on the disk which meant that I’d have to get up to change disks at times when I wouldn’t want to) and Space: Above and Beyond (which petered out because I really wasn’t in the mood for a show that serious). Instead, I watched Avengers: Infinity War and Avengers: Endgame back-to-back, as well as Deadpool and Deadpool 2 back-to-back. I’m not sure how I’ll handle the next week with that.

Books did pretty well as well. I managed to get through the complete works of Sherlock Holmes, which I had been working on for quite a while. This has freed me up to start re-reading the X-Wing series of books and for whatever I’m going to do after New Years.

Video games didn’t do as well as I’d hoped. I did manage to play The Old Republic a bit — finishing a Smuggler character and starting a Dark Side Marauder character — but I barely played anything else (I played Steins;Gate for an hour or two before deciding that it wouldn’t really work for my vacation). I have some plans to play Spellcasting 201/301 and Elsinore before I go back.

Projects didn’t go well at all, as I didn’t do one thing on them. I’m hoping to schedule them in and work on them regularly when my vacation ends, but considering how that’s gone in the past I’m not that hopeful about it.

On the other hand, things outside of these categories worked pretty well. I had a lot of appointments early in my vacation that I did manage to get completed, and managed to get a lot of straightening up and cleaning done. I had bought a living room stand and some bookcases from Ikea, and managed to get those assembled and stocked, meaning that things are a lot less cluttered and, more importantly, that I have more places to put things so that I get less clutter. I also managed to get a fair bit of exercise — generally two walks a day, but at least one — and to keep up with cooking and the like.

We’ll see how the last two weeks go.


December 20, 2019

So, Richard Carrier is posting in multiple parts about articles in a peer-reviewed book that he's contributed a number of articles to, or at least parts of multiple articles to. The one I'm going to talk about here is about miracles, which as you might imagine Carrier addresses with his usual humility and care. But the most interesting part of it is how it relies on Hume's view of miracles and how all of it together causes problems for Bayesian/probability-based epistemologies and naturalistic attempts to disprove and discredit the supernatural.

So let’s start with Carrier’s discussion of Schnall’s article.

Schnall then spends a lot of time trying to rebut Hume, of course. But he never addresses any modern revisions of Hume. His chapter was thus already obsolete before he even submitted it to Macmillan’s editors.

So Hume is going to be important here. Carrier summarizes a very small part of the article — despite its being in a published book that is not publicly accessible and that Carrier himself admits is overly expensive — but as usual jumps right to trying to rebut it before telling us what Schnall really said. Unfortunately — and this will be important for later — Carrier's arguments all end up being attempts to show that the probability of miracles in general is so low that there is no reason to think that any new claim of them could possibly be true. This, then, leads us to Hume:

As Ahmed explains in his own chapter, presciently refuting Schnall, Hume was ultimately right: only testimony whose falsity is more improbable than the claim testified to can justify believing that claim—to any degree at all. There is no logically valid way around this (see my demonstrations in Proving History). By not addressing any of the modern revisions of Hume, Schnall completely gets wrong the actual Neohumean argument against miracles. The entire issue is one of relative frequencies: What happens more often? That when people make wildly extraordinary claims they are lying or mistaken? Or that when people make wildly extraordinary claims they are reporting the truth? The latter simply isn’t what usually happens. Human testimony therefore is not likely to ever satisfy the requirement for rationally warranting belief in the miraculously extraordinary.

The problem with this in general — whether classical Hume or modern Hume — is that if someone knows that a person is at least generally truthful and knows of no reason why that person might be lying in this case, they cannot reasonably say that the person is lying simply because they don't like the consequences if the person is telling the truth. In order for us to consider testimony a method that produces knowledge — and we generally do think that if someone who is reliable and has no reason to lie to us tells us something, then we can claim to know that it is true — we have to think that it is generally reliable under the conditions where it is reliable. Those conditions would have to include the case where the person giving the testimony is trustworthy and we have no reason to think that they're motivated to lie about it. Those conditions would certainly not include that we don't want to believe what they're saying. So Hume's move here really does boil down to "I don't want to believe what they're saying, so it's easier for me to accuse them of lying for no actual reason". The move would be the equivalent of refusing to believe your best friend when he tells you that your wife is cheating on you, because you can't bear the thought that she has been unfaithful. We may understand why someone might do that, but we usually wouldn't call it rational and certainly wouldn't consider it the only rational response.

Now, if the claim is so improbable — or, rather, so in conflict with how they think the world is — we may well stop to consider whether there might be an unstated motive for lying. So, if someone was utterly convinced that their wife was dedicated and loyal to them, they might wonder if their friend had a motive for trying to break them up, say, and so was lying to them. But this would certainly not be demanded by reason, and they certainly couldn't claim to know it. And that's what would be required to make the Humean argument come off: a knowledge claim that the witness must be lying because the probability of miracles is so low. But there is no probability so low that it can overcome the fact that the person is probably telling the truth about whatever it is they are talking about, before we consider whether or not we want to believe that what they're saying is true.

This is one of the things that undermines Bayesian/probabilistic epistemology: there are a number of cases where the purported priors aren't relevant to the consideration, as the probability of the proposition itself covers the other priors. We saw this with the classical cab problem. If the probability of the cab being a blue one is not the same as the probability that the witness identified the colour of the cab correctly, what could it be? Similarly, the probability that the event we are calling a miracle — this is an important distinction for later — happened must surely be identical to the probability that the person is telling the truth about what they experienced, which is for the most part determined by how trustworthy they are in general and how likely it is that they have a motivation to lie in this instance. So the probability argument, then, doesn't even really allow us to take the reasonable step of looking for a motivation if, by all the information we have, the probability that they have such a motivation is extremely low.
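To make the cab problem concrete, here's a quick sketch of the standard Bayesian calculation. The numbers are the usual ones from the textbook version of the puzzle, not figures from Carrier or the book, so treat this as illustration only:

```python
# The classic cab problem, run through Bayes' theorem. Illustrative
# numbers from the standard version of the puzzle: 85% of the city's
# cabs are Green, 15% are Blue, and a witness who says "the cab was
# Blue" identifies colours correctly 80% of the time.

def posterior_blue(prior_blue=0.15, witness_accuracy=0.80):
    """P(cab was Blue | witness says Blue)."""
    prior_green = 1.0 - prior_blue
    # Total probability the witness says "Blue": being right about a
    # Blue cab, or being wrong about a Green one.
    p_says_blue = (prior_blue * witness_accuracy
                   + prior_green * (1.0 - witness_accuracy))
    return (prior_blue * witness_accuracy) / p_says_blue

print(round(posterior_blue(), 3))  # 0.414
```

Even with a fairly reliable witness, the low base rate drags the posterior below one half, and this is exactly the structure the Neohumean argument gives miracle reports: the prior on miracles is set so low that no ordinary testimony can lift it. The dispute above is precisely over whether that prior belongs in the calculation at all.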

But my preferred epistemology, the Web of Belief, does allow for that, and would at least allow an individual to make that assessment for themselves. If it would be the case that they would have to abandon either too many beliefs or too many important ones, then they could on their own decide that they would rather believe that the person is lying than believe their claim, no matter how trustworthy they seemed. They couldn’t claim to know it is true, and couldn’t demand rationally that everyone else has to go along with them. But, at least for a time or until they get extra evidence, they could potentially rationally believe it themselves.

In the case of miracles, Carrier et al could indeed hold the belief — as they do — that miracles and the supernatural don’t exist, and that based on all that evidence they would rather believe that a person they thought was completely trustworthy and had no reason to lie was indeed lying about one rather than give up that belief. I, on the other hand, lacking that belief — I don’t think they absolutely exist but don’t see any reason why they couldn’t either — couldn’t do so. I might be able to claim their stance irrational, but they would not be able to claim that mine is, and that’s the claim they’re making here.

There’s an obvious additional issue with their approach here, which is another issue with setting priors: limiting them to the proper scope. Carrier tends to include all supernatural claims in his priors here, but this is the equivalent of saying that the fact that we’ve shown that snake oil salesman rarely make medicines that are in any way effective that this should impact the probability when we are deciding whether to believe that a long-used native medicine is effective. Carrier could try to argue that we’ve already tested and least some native medicines and found them more likely than snake oil, but then the probabilities work in reverse, with us having to consider the snake oil cases more likely because the native ones have worked. Or we could try to constrain our priors to as narrow a subset as possible or as is reasonable, which goes against Carrier’s general move of making them as broad as possible, mostly because narrowing them leaves few priors to consider and so ends up being mostly indistinguishable from our normal folk calculations, mostly made by what we see or can recall in the moment. The Bayesian epistemologies I have seen tend to play fast and loose with the limits of what priors to consider.

So let’s move on to the next section:

He rarely makes the crucial distinction between whether an observation is true (“The Bible says the Apostles saw Jesus fly into space” or “I saw Aleister Crowley cure a paralytic with a crystal wand”) and whether that observation’s explanation—as in, what caused it—is even supernatural, much less specifically “God.”

One has to first establish an observation is even true before one can then evaluate what its most likely cause is.

If Schnall is focusing on Hume, then yes, this distinction is going to be rare. The reason is that the Humean move is only in play if one cannot come up with a reasonable alternative explanation for what the person claims to have seen. Remember, Hume’s move is to question the veracity of the purported witness, even if there is no real reason to question their veracity other than the fact that they claim to have experienced something improbable. What would be the point in doing that if one could come up with a likely cause on which they are telling the truth about what they experienced without that really improbable proposition being true? So if you’re going to make Hume’s move, it means that you can’t come up with such an explanation. Which is incredibly important, because it puts the miracle claim on this footing: either the miracle occurred, or the person is lying. If the person is not lying, then a miracle occurred.

Why is this important? Because while Carrier might be able to claim that he can’t know that the person is telling the truth, there is one person who does know whether they are telling the truth: the person making the claim. What this means is that once the atheist is making the Humean claim against them, if the person is not lying then not only are they justified and rational in believing that a miracle occurred, they would at that point know that a miracle occurred, even if others might not be so justified. So the last thing an atheist should want is to get to the point where their only response is to say that the person claiming a miracle occurred is lying about what they experienced.

Now, one of the risks here is that Carrier et al cannot claim to hold a reasonable position on miracles and/or the supernatural if there is no reasonable approach by which someone could demonstrate, even to themselves, that miracles or the supernatural exist. The problem is that they use the dearth of evidence as such a strong indicator that it’s nearly impossible to imagine a scenario where someone could do so. They’d pretty much have to accept any natural explanation first, but since that includes the argument that the person is lying even when there’s no other indication that they are lying, how could one ever demonstrate such a thing?

Carrier also says this:

Of those two scenarios, if we’re being honest, only the latter is analogous to God. Because God doesn’t “vanish” or “fall asleep.” If he performed miracles anciently, he should be doing so presently, indeed all the more, as the population in need of them is now a thousand times in size—so miracles should be thousands of times more frequent. You can explain your way out of that with a bunch of made-up “assumptions” about how God would behave differently than any other person in the same circumstances; but such “gerrymandering” your theory would only reduce the probability of that God existing, not rescue it from disproof as you might irrationally have thought.

The problem is that Carrier — like a lot of atheists — likes to assert this, but only does so with shallow examinations of what God’s motivations might be and what the consequences of his actions would be. There are two issues with this. The first is that they talk about what they might want someone with God’s powers to do, without considering that even we would not be obligated to do it. The second, and more serious for Carrier’s argument here, is that God is differently situated than we are. For example, we might claim that if someone could relieve one person’s hunger they would be morally obligated to do so, but we would never be obligated to do that for everyone, because ought implies can and we can’t. God, however, could do that, and so would be obligated to eliminate all hunger in the world. Thus, to obligate God to do it for one person and one form of suffering obligates him to do it for all people and all forms of suffering, and thus to create a perfect world. And there is no reason to insist that God create a perfect world.

The same thing could apply to miracles. In the past, there were more miraculous and superstitious claims, so demonstrating real ones was more necessary then than it is in the modern world. Or any number of other explanations could apply. We need to consider the motivations of any intentional entity, and as long as the reasoning works as an intentional explanation it’s not “gerrymandering”; it’s merely what we would need to do for any person.

Atheists can make a good point here that needs to be addressed at some point: now that we can investigate miracle claims and are more willing to do so, those claims seem to have dropped off. This could suggest that in the past people were making them up knowing that they’d avoid detection, which they can’t do now, or that, again, for some reason God isn’t doing miracles now that we can check them out more easily. My theology is not importantly attached to miracle claims, so I won’t address the point here, but it is something that someone relying on miracles in general might want to explain.

Carrier describes how we’d go about demonstrating miracles, and runs right into the problem of it being too stringent:

In Clarke’s novel, Mr. Norrell is faced with the need of proving to important persons in British government that he is now a sorcerer who can cast spells, that “magic has returned to England.” Jonathan Strange, an intellectual among those persuaded by Norrell’s demonstrations, then starts practicing this magic himself, and soon encounters the question of which ancient tales of magic are real or false—in order to ascertain what spells he might be able to discover or recreate. This research leads him eventually to discover an enormous hidden network of paths through and to every mirror on earth, by way of another world, a parallel dimension, with anciently eroding “roads” from place to place, called “The King’s Roads,” after the legendary Raven King of yore who supposedly created that network by some since-lost fantastic magic. This all transpires in the early 19th century, overlapping the Napoleonic Wars, in which this newly discovered magic, now under royal patronage, is soon deployed as a weapon.

The records we’d have if that actually happened, especially as told in the novel, would be extensive. They would range from government documents and journalistic inquiries and published scientific papers recording carefully controlled experiments and observations, all the way to (just as for the holocaust) thousands of eyewitness testimonies. Not some claim that there “were” thousands of witnesses. Actually thousands of accounts actually written by those witnesses. These would include witnesses consulted independently and thus who couldn’t have coordinated their testimony, witnesses interrogated skillfully and thus whose accounts were vetted for their precision and credibility, and so on.

There would be soldiers from both sides of the war attesting to the magic used, and the outcome of battles would have been decided by it. There would be statesmen and spouses and people on the street who saw this stuff and wrote it down or had it written for them by journalists and historians and investigators who can verify actually having interviewed them and that what they recorded the witness approved as accurate. There would be skeptics attempting to disprove the magic and recording their failure. There would be vast numbers of hospital records of magical healings.

The probability of all those records being lies or hallucinations or mistakes would be extraordinarily small. So small it would be undeniable that some wizards really did turn the tide of history with magic, that England really did experience a few decades of genuine marvels effected by well-recorded magical procedures. Because the probability of some vast conspiracy to doctor those records, or any other incredible explanation of them, would be far less than the probability of the miracles thus attested to. Therefore the authenticity of those miracles would be as certain as nearly any other fact of history.

So, this is fine for trying to prove the general case, but before we can do that we would have to have one person come to believe it first. What if it can’t be demonstrated to everyone? What if it isn’t systematic? Consider this claim: on Wednesday night, I stayed up until about midnight. Not an extraordinary claim, Carrier might scoff? Ask my co-workers or parents about that, as I tend to get up very early and so am generally asleep by 8:30 pm. How would someone prove that claim? It leaves no observable evidence, and the only evidence one could have is my word. If you doubt my word because the claim is improbable, then how could I ever demonstrate it? A number of specific miracle claims, the sort of things we would need to aggregate to reach Carrier’s standard of evidence, would be precisely these sorts of claims: something happened that could be supernatural but need not be, and the evidence cannot settle that by itself. How could we ever find an example, using Carrier’s epistemology, that we could use as part of the aggregate experiences necessary to overturn the presumption that such things never happen?

In general, we need to consider each situation on its own basis, informed but not determined by what has happened in similar cases. Carrier’s view here uses the past to determine the present case, which just means that if we ever actually encountered a real case of such a thing we could never determine that it was real. That does not seem to be a useful epistemology.
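To put a toy number on that worry (all figures invented for illustration): if the prior for every new case is fixed by the presumption that such things essentially never happen, then even a single, otherwise credible testimony cannot move the posterior anywhere near acceptance, so no individual case can ever qualify for the aggregate that would be needed to revise the prior in the first place:

```python
# Toy numbers (invented) for the circularity above: a prior set by
# "such things never happen" swamps any one credible testimony.
prior = 1e-6                  # presumption: essentially never happens
p_testimony_if_true = 0.9     # an honest witness would report it
p_testimony_if_false = 0.05   # rate of lies/errors among credible-seeming witnesses

# Bayes' theorem for this single piece of testimony:
num = prior * p_testimony_if_true
posterior = num / (num + (1 - prior) * p_testimony_if_false)
print(posterior)   # ~1.8e-5: still dismissed, so the prior never moves
```

Each dismissed case leaves the prior where it was, which is exactly the sense in which the past determines the present case and no real instance could ever be recognized.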

Thoughts on “A Nightmare on Elm Street 2: Freddy’s Revenge”

December 19, 2019

Continuing on with this series, I watched the second movie. It does one thing really well that the first movie failed at: it doesn’t clutter itself up with a nonsensical story. The main plot is that Freddy is trying to come back by taking over the body of a teen, and while the movie handwaves at how this might be possible, for the most part it ignores it. Which is fine, because we don’t really need that kind of explanation. This is what’s happening, and it has to be stopped in some way. That’s all we need to carry the horror.

There is an issue with the entire series, however: for whatever reason, the movies are forgettable. I’m writing this a bit after watching the movie, and I noticed that even after a couple of days I had a really hard time finding anything memorable about the movies or even being able to remember my main impressions of them. Very quickly the movies became things that I had watched at some point, with nothing really standing out about them. Which makes for short posts talking about them.

The movie was okay, but as stated not very memorable. I would likely watch it again.

Some Thoughts on “The Old Republic”

December 18, 2019

The one game that I’ve managed to actually play to any extent is “The Old Republic”. I’ve finished off Ranathawn and am working on another character for a TOR-Diary-type-thing, which is a Dark Side aligned Sith Marauder. I’m also moving from a character where I stealthed through most of the missions to a character where I not only kill the things in my way, but I also often kill things that aren’t in my way, because it’s consistent with the character. Since the combat can get repetitive, I wonder if I’ll still want to do that later in the game. Maybe that can be part of the character’s progression.

Anyway, TOR still has the common MMO problem with “glowies”, which are things in the shared world that for certain quests you need to click on to complete the quest. You usually need to do this to a number of them and once activated they usually deactivate for a set time. I was on Balmorra, and there were three or so people going after maybe six of these things, needing four apiece. I spent a bit of time waiting at a couple of them for them to reactivate because running around trying to find one that hadn’t been activated yet would likely result in my missing one. That was incredibly annoying, and I needed it for an interesting quest.

I’m struggling to find interesting outfits for my characters. I might not have enough credits for the Galactic Market, and the Cartel Market doesn’t have much of interest. I bought one outfit for Vette but still have thousands of Cartel Coins — I get them as part of my subscription — that I can’t spend because there’s nothing interesting in the Market. City of Heroes is the only MMO whose market was at all interesting to me, as it had a lot of appearance changes and powersets and the like that were cool. TOR has never impressed me with that.

I’ve also found that playing as a Dark Side character can be uninteresting, because it’s difficult to find interesting Dark Side choices. This, I think, isn’t because you can’t make choices like that but because we have an odd idea of what makes something good or evil. For the most part, the evil choices involve either reveling in brutality or being rather directly self-interested. There is no notion of being an Enlightened Egoist and thinking ahead. At the end of Balmorra, you get the choice of keeping the General alive to discredit the Republic or killing him. Keeping him alive is Light Side, despite the fact that even evil characters would see the use of discrediting the Republic. And there was already a way to avoid this, as the General asks you to let the civilians and others go in exchange for his cooperation. So a Dark Side character could keep him alive without letting the others go, and rely on torture to make him talk. This would be evil, but would not require the Dark Side character to seem short-sighted. There is some handwaving that they have enough evidence against the Republic without him … but then the Light Side character can look like they’re just being good for the sake of being good, as they get nothing out of it.

Anyway, I do still enjoy the game, and am planning on finishing at least two more characters over the next few months.