So, continuing on with my critique of Richard Carrier’s Goal Theory approach, which I started here. I went through seven threads last time, and here I’ll go through another ten threads that he’s identified from that debate.
We’re going to be looking at some thought experiments, though, and Carrier spends a lot of time trying to show how they don’t work against his theory. But later, he says this:
There is a similar point to make regarding the fact that “perfect knowledge entails knowing all loopholes” (comparable to the supermagical pill case I discussed in section (2b) above), i.e. someone with perfect knowledge (like a god, say) could navigate all social consequences so as to lie and cheat etc. precisely when nothing bad would result. But I would argue that anyone actually possessed of that knowledge would be morally right to exploit all those loopholes, it’s just there aren’t actually as many such loopholes to exploit as we think …
The thing is, what the thought experiments are trying to get at is just that admission: that if someone could do something that we would generally consider hugely immoral, but could do so in such a way that the downsides — including the societal ones — would be outweighed by the benefits, then that is not only morally permissible but would be morally demanded. In short, someone who could take advantage of those loopholes would be morally obliged to do so, or else would be acting immorally. The best defense Carrier can muster is indeed “It doesn’t happen as often as people think”, but the objection to these sorts of Enlightened Egoist ideas is precisely that they do allow for such acts, and it seems contrary to any notion of morality to say that we can do horrifically immoral things if they benefit us enough and/or we can get away with them. This is, of course, the weakest sort of objection, because Carrier can indeed bite the bullet and declare that, nevertheless, that’s what is moral and that it follows from his proof of the foundational principle of morality (in his case, pursuing the most satisfying life). But Carrier spends a lot of time in the thought experiments denying this only to ultimately admit it, making the defenses not particularly useful for him.
Except for one point: Carrier needs to refute those cases because in general he’s trying to justify an overarching rule in a moral system that is aimed at maximizing the happiness and satisfaction of an individual. As such, he needs it to be the case that we don’t break those rules on general principle and don’t stop to think about whether each individual situation works or not. If we can show that there are a sufficient number of cases where we’d at least have to think about it, then there is no point in following the rule and so Carrier cannot say that his analysis shows that we would never engage in war if we were properly informed. Which, then, brings us to the first point:
(1b) Total Knowledge vs. Limited Knowledge. In our debate there was a digression on the murkiness of war and whether one could ever be in a situation in which it would be genuinely right for both sides to continue fighting, i.e. with both sides acting perfectly morally with all the same correct information. I considered that a red herring, that one should stick with the example I started with, that of slavery, because I use it as an explicit example for analysis in my chapter on GT in The End of Christianity. But war would succumb to the same analysis. The result is that two sides acting on true moral facts with the same information would stop fighting. Therefore in war, always one side is in some way wrong (or both sides are)–which means either morally wrong, or wrong about the facts.
Carrier likes using examples from fiction, so let’s explore that by using examples that you might find in the game “Civilization”. First, let’s presume that each society’s main goal is to preserve its own society and not to uphold some sort of general “get along with everyone” idea. Next, imagine that you have two societies living on an island. They are expanding, and can see that as they expand they will both require more resources. At some point, they will be directly competing for the same resources, and so will either have to take them from the other civilization, or else at the very least halt their growth or perhaps even contract. They don’t see any technology that would allow them to colonize other areas, and might not even know such areas exist. Could they not, then, be justified in going to war with each other and trying to wipe the other one out? After all, the consequence of not doing so might be their destruction anyway. You can argue that even with the war they would run out of resources eventually, but there’s far more hope for something to happen in the extra time the extra territory will give them.
But as it turns out, it’s easier to justify a war than Carrier thinks. We can presume that any society that is being attacked by another is always morally justified in defending itself from that attack. So we don’t need to justify both sides in the war, but only the aggressor. So imagine this case: a society has just come off a long war against an aggressor. Due to that, it currently has a significant army and has had to focus its attention on martial advancements rather than on social and cultural ones. There is another society nearby that hasn’t had to face a war, and so has focused on social and cultural advancements. As such, over the long haul that society’s head start in those areas means that it will be able to outcompete the first society and so will eventually, through that, absorb the first society into its own. But it doesn’t have advanced military technology or a strong army. Over time, however, the first society’s army will degrade and the other society will gain stronger military technology and potentially even a stronger military force. Given that it is likely that in the long run the other society will impose itself on the first one, and that the first one currently has a huge military advantage that it could use to prevent that, wouldn’t it seem a reasonable move for it to invade? And then reasonable for the second society to defend itself? Carrier can’t use the idea that they can’t know that those social and cultural advantages will ultimately end in the loss of their society, because to make his view work he has to rely on us reacting based on what is probable or likely, and it’s certainly reasonable to find a case where it is more likely that the cultural or social advantages will be overwhelming than that they won’t be.
Now, Carrier might be able to find arguments that there is a workable compromise. That’s not the point. The point is that the only way he’s going to be able to do that is not by appealing to a general rule, but to specifics that say that, in this case, it still isn’t going to work. But we’d still have to consider it. The same thing will apply to his “slavemaster” example: you only don’t keep slaves because in the long run it isn’t worth it to do so. But if it ever was, then you should. So if you could get away with it or if it benefited you enough, you should. And this would have to be done on an individual basis, not as a general rule. The only case where it would be done generally is when you go up to the next higher level. For slaves, that’s at the level of a society, where it decides whether to make keeping slaves legal or not. For war, that would be at the level of a United Nations. But all this would do at the level below it — the individual or the societal level in these cases — is introduce extra “drags” on the decision, through either the risk of legal punishments or alliances/peacekeeping forces that will respond if an invasion is undertaken. All it does is make it more difficult to find cases where it is justified. But, ultimately, we’re still justifying it individually and not as a general rule. If you would benefit from keeping slaves or benefit from warring, that’s what you’re morally obligated to do.
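To make the structure of that objection explicit, here is a minimal sketch of the case-by-case calculus I’m attributing to Carrier’s view (the function names and numbers are my own illustrative inventions, not anything Carrier supplies): higher-level institutions like laws or a United Nations only add penalty terms to the calculation; they never replace it with a general rule.

```python
# Illustrative sketch only: all names and values here are hypothetical,
# invented to show the shape of the case-by-case decision, not a model
# Carrier himself provides.

def net_satisfaction(benefit: float, downsides: float, drags: float) -> float:
    """Net gain from an act: its benefit minus its intrinsic downsides
    minus the institutional 'drags' (legal risk, alliances,
    peacekeeping forces) imposed from the level above."""
    return benefit - downsides - drags

def morally_demanded(benefit: float, downsides: float, drags: float) -> bool:
    """On the reading criticized above, a positive net gain doesn't
    merely permit the act; it morally demands it."""
    return net_satisfaction(benefit, downsides, drags) > 0

# Laws and alliances only raise 'drags'; the decision stays individual.
assert morally_demanded(benefit=10.0, downsides=3.0, drags=2.0)
assert not morally_demanded(benefit=10.0, downsides=3.0, drags=9.0)
```

The point of the sketch is that adding institutions changes the numbers, not the form of the decision: it only makes cases where the act comes out “demanded” rarer.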
In Q&A I gave an analogy from surgery and knowledge thereof, and the difference between doing the best you know while recognizing there are things you don’t know, such that if you knew them you’d be doing something different. There is a difference between the “best way to perform a surgery that I know” and the “best way to perform a surgery” full stop, which you might not yet know, but you still know you would replace the one with the other as soon as you know it’s better. Ditto the war scenario.
This raises the issue of what the moral status of an action is if it isn’t the perfect ideal but is the ideal given the information the person has at the time. Above, Carrier says that if you don’t have all the facts, what you did was immoral. Here and later, he wants to imply that you aren’t really acting immorally, to avoid morality requiring perfect knowledge. He kinda hedges on this:
In the limited knowledge situation, both sides can be warranted in believing they are right, and thus not be culpable, but still even then one of them must actually be wrong and just unaware of it, and would want to be aware of it if there were any means to be.
But the problem still remains: are they actually acting immorally or not? Carrier implies that they aren’t culpable, but that they’re still wrong. But being wrong can only have meaning here if morally we are supposed to avoid being wrong, and it can’t have that connotation if, in any case where we have limited knowledge, we wouldn’t condemn someone for actions taken in that state. So either we can reasonably condemn it or we can’t. If we can, then being properly moral is always going to require perfect knowledge; we will all be “sinners” under Carrier’s view and will always be incapable of being properly moral. If we can’t, then its being wrong in those cases is irrelevant to its overall moral status, and so the “total knowledge” case is not relevant to our moral reasoning, since we can’t ever be in that situation. Carrier needs the total knowledge cases to make rules, but then has to abandon them because no individual can actually operate outside of the limited knowledge case. And ought implies can: if we would need to act on total knowledge to be moral but can never work that way, then we can’t be moral, and so no one can demand that we do so, or call us immoral if we don’t.
He also errs in claiming that all moral systems face the problem of what happens when we don’t have all the facts about what the actual outcome would be. First, this only holds for consequentialist moralities. Any intentionalist one does not have this problem, because the morality of an action is determined by the intent when it is taken, not by the ultimate outcome. So if there are facts that someone didn’t know that, had they known them, would have led them to take another action with a “better” result, that doesn’t and can’t impact the moral status of their action. The morality of their action is decided by what they intended, not by what happened. Even a consequentialist morality like Utilitarianism can dodge this by leaning on “ought implies can” and holding that the morally right action is the one that maximizes utility based on what the person actually knows at the time, instead of the one that maximizes utility generically. It’s not clear that Carrier can make that move, since in order to make some actions immoral he has to rely on the idea that we judge them immoral based on the assessment we would make with perfect knowledge and perfect reasoning. Utilitarianism says “Maximize utility for all”, but doesn’t have to specify that there even is a perfectly objective action in any case that would do so. Carrier has to argue that there always is a perfectly objective moral action that maximizes the individual’s happiness and that that action is the only one that is moral. That makes this problem much more significant for Carrier than for pretty much any other moral system.
(2b) The Magic Pill Challenge. The “magic pill” objection, ever popular in metaethical debate, also came up in Q&A, and I think my responses might not have been sufficiently clear. The gist of the question is this: what if there is a pill that, if you take it, a thousand people suffer and die horribly but you get a thousand years of a fully satisfying life and no downside, and (this is key) the pill also erases all your memory and knowledge of those other thousand people, so you never even know the cost of your good outcome. I answered, in effect, that you’d have to be evil to knowingly take such a pill, thus by definition it would always be immoral to do so. Thus the desire for greater satisfaction, in that moment, would prevent you taking the pill, because though you would forget later, you know now what the cost is, and it would not satisfy you to pursue your own happiness at that cost. Indeed, anyone who had already cultivated their compassion in order to reap the internal and external benefits to their own life satisfaction, as one ought to do (because it maximizes life satisfaction to do so, i.e. all else being equal, it is more satisfying to live in that state than not), would be incapable of choosing the pill.
Carrier here says that you would have to be evil to take the pill, and that by definition that would make it immoral. This implies that he is using evil in the moral sense. He denies this later:
Here by the term “evil” I mean simply malignant (thus I imply no circularity: see Sense and Goodness without God, V.2.2.7, pp. 337-39). Malignancy (“being evil”) is then immoral on GT because having a malignant personality does not maximize life satisfaction, but in fact greatly depresses it (in comparative terms).
But then this couldn’t be true by definition; it would instead be an empirical result. So if that is the case, taking the magic pill can’t be immoral by definition. We’d therefore have to examine the “malignant personality” case, but here it is clear that you don’t have to have an overall malignant personality — presumably one that wants to hurt others — to take the pill. All you need is a calculating one, where the only concern you have for others is precisely how they impact your own satisfaction. Given that, what you would do is look to see whether taking the magic pill will improve your satisfaction more than the downsides of taking it will reduce it. And since by definition there are no downsides to taking it, it seems pretty clear that the analysis would work out in favour of taking it. So the only defense Carrier can have here is that calculating what would give you the most personal satisfaction regardless of others is not consistent with his philosophy. Since that seems to be the definition of his philosophy, that would be a hard sell.
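The “calculating personality” reading can be put as a one-line calculation (a hypothetical sketch; the stipulations come from the thought experiment itself, not from Carrier): since the pill is defined as having no downsides, any positive gain in satisfaction settles the matter.

```python
# Hypothetical sketch of the calculating agent's decision in the
# magic-pill case. The zero-downside figure is a stipulation of the
# thought experiment, not an empirical claim.

def take_pill(benefit: float, downsides: float) -> bool:
    """A purely calculating agent weighs others only by their impact on
    its own satisfaction, so it takes the pill whenever the gain to
    itself exceeds the cost to itself."""
    return benefit > downsides

# The pill erases the memory and all social consequences, so the
# downsides are zero by definition and any positive benefit wins.
assert take_pill(benefit=1000.0, downsides=0.0)
assert not take_pill(benefit=0.0, downsides=0.0)
```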
Just ask anyone which person they would rather be right now: (A) a mass murderer without a memory of it, or (B) someone with reasonably accurate memories of who they are and what they’ve done. Would it disturb them to find out they were (A)? Yes. So why would they choose to be that person? Indeed, when would they ever?
Carrier can only make this an obvious line by removing the reasons for committing the mass murder in the first place: the fact that doing so will guarantee you a satisfying life with no downsides. First, it is entirely reasonable to say that if the only thing preventing you from having a satisfying life is one particular memory, and the downsides of removing it are less than the downsides of keeping it, then the rational thing to do under Carrier’s view is to remove it. How can it be otherwise? Some nebulous “We must maintain our rationality!” rule cannot trump the fact that in an individual case it would work out better to remove that memory. Otherwise, Carrier would be dumping the whole empirical and reality-based aspect of his morality. Second, if they accepted Carrier’s view, and so accepted that the choice was “Be someone who sacrificed a thousand people to gain personal satisfaction forever” or “Be someone who didn’t”, the choice isn’t going to be as obvious. It’s only by casting it as “Mass murderer!” that Carrier can get the emotional push to make us eschew it. But it isn’t clear that under Carrier’s view that “mass murderer” wouldn’t be an ideally moral person. Carrier makes the common mistake of using existing morality to determine what is right or wrong instead of seeing what follows from his own view. This sort of thing seems clearly morally wrong, so Carrier contorts himself into explaining how it is still morally wrong under his view.
That’s why the question is, what sort of person would you rather be right now? (A) or (B)? Which would it satisfy you more to be? If (B), then you will be more satisfied being (B) every single moment. Because the same question, asked at any moment, gets the same answer, thus all your moments added up gives you no greater satisfaction in (A) than in (B). And this extends even after death. By the same logic: would you rather be (A) or cease to exist? I would rather not exist (because I would not wish that “evil me” on the world).
But who says that a person who would do that is “evil”? And what do these personal emotions have to do with morality as Carrier defines it? Either it would work out better or it wouldn’t. Carrier appeals in a number of cases to the idea that your conscience wouldn’t tolerate it, but if the only thing stopping you from doing something is your conscience, then under Carrier’s view you should ignore your conscience and retrain it. For example, imagine a couple that wants to become polyamorous and has rationally worked out that doing so will be better for them, but every time they try they feel guilty from — erroneously — considering it adultery, and so can’t enjoy it. Will Carrier really say that being polyamorous is just plain morally wrong for them, or rather that they need to reconsider the facts and eliminate the erroneous belief? The same thing could apply here: they could have the erroneous belief that taking the pill is “mass murder” and morally wrong, and, once they have eliminated that, they wouldn’t have the conscience issues either. Which leads to this:
To make this dilemma more visceral: if some just persons come to kill you to avenge those thousand people you killed, would you honestly disagree with their right to kill you? You could not rationally do so (because you would agree with them–indeed, you might even ignorantly be helping them find the culprit).
Well, first, by definition those people couldn’t come to kill you for that, because that would be a negative consequence and the pill eliminates all such negative consequences. But more seriously, would a just person really be coming to kill you to avenge those people? In Carrier’s view, “just” can only be defined relative to his moral system, so if the reasoning works out, no one can object to someone taking the action that will indeed provide them with the most personal satisfaction. Condemning them would be unjust, as we would be condemning someone for acting properly morally. So this objection only works if Carrier has already established that taking the pill is morally wrong, but he’s using the objection to demonstrate that, which obviously isn’t going to work. See what I mean about him having to contort things to make it work?
Thus something Ben once said to me about an amped up version of the “magic pill” example was spot on–in which one massively alters every conceivable thing about yourself and the universe so as to produce a “get-out-of-every-conceivable-negative-outcome” scenario. It’s the reason why all these “magic pill” examples are lame: if you have to completely change reality to get a different result, then that actually proves my point. My conclusions follow from reality as it is, and even these “magic pill”-proposing objectors agree with me on that. In fact they agree with me so inescapably that the only way they can escape my conclusion is by completely changing reality. (And I do believe I said something to that effect in Q&A.)
As stated above, the point is to get him to accept that horrific actions would be moral if they benefited someone enough and that person could avoid enough of the negative consequences. So if you had a pill that could do that, it would be moral to take it and kill 1000 people for your personal satisfaction. To say that this isn’t realistic runs into the issue that Carrier accepts that it would be moral; we’re just negotiating the price that would justify sacrificing 1000 people for our own happiness.
(3b) What If Rape Is Fun? One thing that came up in McKay’s responses was something like, “What if rape is fun?”
…
The rapist “thrill seeker” scenario is actually answered by a good episode of Law & Order: Criminal Intent (Blink): in the end a psychopath breaks down, realizing his fearless thrill-seeking was going to destroy the woman he loved, and that in consequence he couldn’t experience the joys of a real loving relationship with a woman at all, and thus was better off in jail–a tale reminding us to consider not just the more evident consequences of a behavior or disposition, but all the things it deprives you of as well.
Again, this runs into the problem of not considering individual cases and trying to assert general rules from very specific individual cases. If someone liked rape enough and could eliminate the negative consequences enough, then even under Carrier’s view they would be morally justified in doing so. That’s something that Carrier cannot merely explain away. He can bite the bullet and note that it’s not going to be justified as often as we think, but he spends a lot of time trying to avoid these consequences of his moral view.
Ben similarly proposed a scifi example of the character named Fran in an ep of Stargate Atlantis (Be All My Sins Remember’d), in which she was engineered to be happy suiciding herself to destroy a whole population. Of course that’s a perfect reason never to make one of those. See my remark on AI in the endnote in the TEC (p. 428 n. 44). There I gave the Dark Star and 2010 films as exemplifying a similar problem.
Nevertheless, objectively speaking, Fran will be missing out on a lot (temporally and qualitatively) and thus can be objectively described as defective–e.g. if she could choose between being Fran and being Unfran, an otherwise exactly identical being who saves a civilization of people instead of killing them and is just as happy doing so, which then would she choose? This brings us to the law of unintended consequences: if she would really see no difference between those two options, then she must lack compassion, which is a state of character that has a whole galaxy of deprivation consequences on her happiness and which cannot fail to mess up her behavior in every other life endeavor. Thus Fran, as conceived in that ep, is logically impossible, unless her compassion and other virtues were intact, in which case she would prefer to be Unfran, when rational and informed (and thus to convince her of this, you need to make her rational and informed).
Carrier can justify not making a Fran here on the grounds that she would be deprived, but the question is “What should Fran do?”. If that desire really is what would make her the most satisfied and she’s biologically constrained to think that way, then it would seem that under Carrier’s view that’s what she should attempt to do. Carrier, obviously, can’t have his morality ignore biological realities. Given that biological reality, Fran can never be convinced that she should be Unfran, because she can’t be Unfran. Carrier’s view cannot demand that someone like things, or be satisfied by things, that they can’t be satisfied by. Carrier can turn around and call her insane, but this leads to issues with differences in individuals. It cannot be the case that someone is insane merely because they can’t be satisfied by what Carrier considers the ultimate set of values, since Carrier thinks that set has to be determined empirically anyway. Fran might be constrained by people trying to stop her from blowing up, and so have to be careful about it, but it can never be immoral for her to seek out the one thing that would give her life satisfaction.
(4b) Why Would Morality Evolve to Maximize Happiness? McKay asked at some point in the debate why on GT should morality just happen to optimize happiness. On GT the behavior that optimizes happiness is the definition of what is moral, so his question is analytically meaningless. Possibly here McKay was confusing different definitions of happiness (see point 6 to follow), replacing satisfaction maximization with “joy” or “pleasure” or something, even though I specifically warned against that mistake in my opening.
If happiness isn’t something like joy or pleasure for Carrier, then what is it? We tend to determine our satisfaction — at least hedonistically — by appealing to those, and we sacrifice short-term pleasures for long-term ones when we apply rationality. If Carrier is going to argue against that, then he’ll run into the same problem that non-hedonistic philosophies — Kant, the Stoics, etc. — run into: why sacrifice hedonistic values for Carrier’s? Kant and the Stoics, at least, don’t appeal to satisfaction to justify their morality, but Carrier does. So how does he untie that knot?
But McKay also confused the fact that our desires are products of evolution with the claim (which I never made) that we ought to behave as we evolved to (a claim easily refuted by pointing out how self-defeating that ultimately becomes).
The problem with this is that Carrier will later talk about how rape doesn’t actually increase fitness and so was selected out, but by his own view that’s not relevant. Anything in our evolved sense of morality is something that we have to examine under his view and see if it meets his principle, especially if we want to “get ahead of evolution” as he argues later. So our evolutionary path and evolved morality is irrelevant if he really has the right view here. We need to examine based on his view, not on our evolved senses. He makes the same mistake many people who rely on evolution do: forgetting that as soon as we argue that evolved senses can be wrong, we have to judge all evolved senses against the principle, making our evolved senses irrelevant.
The proof of this is in earthquake building codes: evolution doesn’t teach animals to make houses to withstand earthquakes; we had to develop a way of thinking correctly that bypassed our errors in order to reach that incredible maximization of DRS; so, too, CPR (cardiopulmonary resuscitation); for example: you’re not likely to “evolve” a knowledge of CPR, yet a knowledge of it increases DRS. Likewise the technology of morality, which is just the same thing: a technology for maximizing our ability to exploit the best scenarios achievable according to Game Theory, which bypasses our evolved error-making. Even though evolution has moved us in that direction already, just as in all these other areas, it still has a lot farther to go, and technology is what we use to fill that gap.
The problem is that all of these things follow from general reasoning, not specific evolved senses. Building codes just are technology. So is CPR. So, then, likely is morality.
(5b) The Role of Rationality in Defining Truth. Related to that point is where again I think McKay confuses criteria of truth (what makes a proposition about the facts of the world true) with criteria of persuasion (what convinces someone that a proposition is true).
This is, of course, true, but if Carrier cannot convince people that his view of morality is correct while appealing to personal satisfaction then he might have some empirical issues to work out. More importantly, since many of the objections are going to be “This doesn’t seem like morality” he’s going to need to find a way to settle that before we can get into the details of the facts.
Hence when I say moral facts follow from what a rationally informed person would conclude, I am not saying one has to be perfectly rational to be moral, but that moral facts are what rationally follows from the facts of the world (whether we or anyone ever recognize this). That is, just as the true facts of physics are “what we would conclude if we were adequately informed and reasoned from that information with logical validity” so the true facts of morality are “what we would conclude if we were adequately informed and reasoned from that information with logical validity.” Getting people to believe such conclusions is a completely unrelated problem; likewise getting people to behave consistently with their beliefs.
But if we aren’t capable of acting on Carrier’s view, then we would void “ought implies can” and so won’t have a morality that makes sense. Carrier, then, needs to establish that we, in general, can accept and act on his moral view, and responses like these don’t, in fact, get him there at all.
Likewise the question “what if our evil desires are simply overwhelming,” which essentially sets up a fork between sanity and insanity. There are certainly insane people who can’t conform their desires or behavior to rational beliefs, even when they have them; or who are incapable of forming rational beliefs, regardless of what technologies and “software patches” we equip them with. Hence my point in my TEC chapter: it is no defect of any moral theory that madmen cannot be persuaded of it.
The question, though, is what happens if we are all “madmen”, not whether there are some (and also how we would tell who the madmen are and who aren’t).
Of course, even apart from that, we would know that an omniscient neighbor could and would exploit all loopholes, so our behavior would change accordingly, which would create an arms race scenario that is unlikely to go well for any libertine omniscient neighbor–which an omniscient person would in turn know. So how would such a person prevent the extreme restrictions on their liberty and happiness that would result from the extreme self-protective measures taken by everyone else? By being more consistently moral. Or ending up dead. It’s obvious which is the better choice.
But, of course, that only holds if the neighbour will take actions that are bad enough for us to justify the extreme restrictions. But an omniscient neighbour will know precisely what they can get away with to avoid that. So that arms race will never actually happen. So this doesn’t defeat the charge that he accepted: under his view, if it benefits you enough and you can avoid enough negative consequences, you can shaft other people. He needs to accept this, as he stated, instead of constantly trying to dodge around it.
(6b) But Isn’t the Pursuit of Happiness at Odds with Morality? The question kept being asked why should we want happiness above all things? Maybe there’s something we want more?
I don’t think this is the actual question. Either it’s “Maybe there’s something that we ought to want more”, or else the actual question is the one in the bolded heading: the idea that morality is there to put constraints on our pursuit of personal satisfaction, so that it should never be the case that we choose to commit what we consider a morally horrific action because it works out to give us a better life otherwise.
Thus when we rephrase the matter, McKay’s objection dissolves: a mother aiming to sacrifice herself to save her daughter (TEC p. 350) will always choose the option that most satisfies her. The only time she will not is (1) when she does so uninformed (realizing after the fact, say, that she would have been happier the other way around), but that’s precisely why the truth follows from what would have been the informed decision (and her acknowledging this after the fact proves the point that it was indeed the correct decision and her actual decision was in error–not necessarily a moral error, but certainly at least an error of ignorance, since had she known the actual consequences she would have chosen differently, thereby demonstrating what the actually correct decision was) or (2) when she does so irrationally (deducing how to behave by a logically invalid inference from the facts), but no correct decision can ever result from an irrational decision making process (so her decision in that case is by definition incorrect, unless she lights upon the correct decision purely by accident, as when a fallacious argument just “by accident” produces a true conclusion, but then of course her decision won’t have been incorrect–she then will be more satisfied in result of it than she would have been otherwise).
The objection is not whether someone will do that, but whether under Carrier’s view anyone ought to do that. Ought that to satisfy her? If Carrier can’t justify that under his view, then she is wrong and can only be acting on it on the basis of being irrational or ill-informed. And Carrier only asserts it; he never proves it.
(7b) What About Desiring Results That We Ourselves Will Never See or Enjoy? McKay gave an example of desiring his kids’ future college education and claimed this desire didn’t benefit his happiness in any way, indeed he might not even live to see the fruit of his labors toward it. But that’s not a correct analysis. In that scenario we have two desires to choose from: to desire (a) our children’s future college educations, or (b) not. Which desire will satisfy us the more to have, and to act on, and to fulfill? If we desire (a) more, then by definition (a) satisfies us more; whereas if (b) would satisfy us more, we would want (b) more, and thus even DU would entail we should want (b) and not (a); there can therefore be no logically valid objection to GT here. Any DU theorist must accept the exact same conclusion as GT produces in this case.
The reality is that the having of a desire itself brings us pleasure and satisfaction. It is not all about seeing the outcome.
The same objection applies here. Carrier appeals to us actually doing it now, but never justifies it under his view. At the extreme, this collapses into an argument of “If it satisfies you, then do it!”. But since Carrier is arguing that even some things that do feel satisfying to us are wrong if we were fully informed, he needs to do the entire analysis at the level of the fully informed and fully rational, which is not what he does here.
(8b) What If We Only Have Dissatisfying Choices? Another issue regards outcomes that cannot be achieved, i.e. sometimes you just can’t maximize your satisfaction or must live with a dissatisfying result. But in every case the correct act is the one that produces the greater satisfaction of all available choices.
Carrier is entirely right here. It’s about the best choice available, which may mean choosing one that sets us back if it’s the one that sets us back the least.
(9b) Avoiding Circular Rebuttals. Several times McKay used the common fallacy of assuming a conclusion a priori and then judging GT by the fact that it doesn’t produce that conclusion. On why that’s a fallacy see my TEC chapter (p. 350, with pp. 343-47). There are many “question begging” objections to moral theories like this, which are perversely popular among philosophers, who of all people ought to know better. McKay repeatedly assumes a priori that a decision he describes is the correct one, and then finds GT doesn’t entail it, and then he concludes GT is false. That’s circular reasoning. If GT is true, and if it is true that GT really doesn’t entail that “expected” result, then McKay is simply wrong (that, for instance, self-sacrifice is moral). So he then can’t use his erroneous belief (e.g. that self sacrifice is moral) to demonstrate GT is false.
What McKay would be doing is taking something that seems obviously moral, arguing that Carrier’s view can’t come to the same conclusion, and then using that against Carrier. Again, this is a weak approach because Carrier can easily bite the bullet, but Carrier a) doesn’t seem to want to much of the time and yet b) does at times bite the bullet anyway, making his complaint here pointless either way.
(10b) Wouldn’t GT Destroy Individuality? Finally, McKay argues that GT would destroy individuality by requiring everyone to behave exactly the same. But this confuses covering laws with instantiations.
In theory, there is room for individuality in Carrier’s GT. The problem is his insistence that the morally right thing is what would be done with perfect knowledge and rationality, combined with Carrier feeling free to criticize the individual preferences of others on that basis, combined finally with the fact that for GT to work our desires must themselves be rational and fully informed. As I said, there’s room for individuality, but Carrier really needs a way to cash that out that doesn’t just boil down to us being immoral and insane if we like and dislike different things than Carrier does.
There’s another section here that I’ll pick up another time.
Thoughts on “The Complete Sherlock Holmes”
December 26, 2019
So, yesterday, I commented that I had finished reading “The Complete Sherlock Holmes”. I had read that collection at least once and probably twice before, but as part of reading classic works I wanted to read it again. It was also interesting because of a couple of other things that had been going on around that time, including reading a puzzle book themed on Sherlock Holmes.
And the post talking about that raised some interesting points in the comments. First, from natewinchester:
Responded to by malcolmthecynic:
After re-reading them, I have to agree with malcolmthecynic. The stories can be a bit unfair — there are times when Holmes will go off on his own to investigate something and will only tell us the results as he’s revealing the solution — but even then you could figure out what was going on as soon as Holmes reveals the details. And sometimes he reveals the villain beforehand anyway. But what really makes them more mystery adventures is the fact that in many of them Holmes only identifies the basic details — who is doing it and what their very basic motivation is — and then much of the story is that person explaining the backstory of how they got there (generally only for sympathetic “villains”, of course). This will take up at least half the story and is generally the most interesting and dramatic part. For the most part, the Holmes stories seem to aim at doing two things: showing off the deductions that Holmes is famous for (some of which might be a bit dubious) and building dramatic and adventurous stories around those deductions.
I also commented before that Holmes should deal with his issues of an idling mind by taking up philosophy. It turns out that he actually does so when he retires, combining it with beekeeping. Which is an interesting little note.
Anyway, I still did enjoy reading them, and will probably read them again.
Tags:classics
Posted in Books, Not-So-Casual Commentary