Archive for the ‘Philosophy’ Category

The Improbability Trap

May 12, 2017

So, over at Tippling Philosopher, Jonathan MS Pearce posted about doubts about the Easter story, and specifically in this post about naturalistic explanations for the supernatural happenings around Easter. He quotes Bart Ehrman describing a completely made-up scenario in which some followers moved the body, were caught by soldiers, attacked them, and were killed. Ehrman finishes with this:

Is this scenario likely? Not at all. Am I proposing this is what really happened? Absolutely not. Is it more probable that something like this happened than that a miracle happened and Jesus left the tomb to ascend to heaven? Absolutely! From a purely historical point of view, a highly unlikely event is far more probable than a virtually impossible one…

But this is more than “highly unlikely”. There is, in fact, absolutely no reason, even historically, to think that this actually happened other than that it was possible. There are no accounts — at least as far as I know, and there is no suggestion in the quote or in the post that there were — that suggest this happened. There are no legends of this happening. There is nothing to suggest that this actually happened. In essence, Ehrman simply made this explanation up. And yet, somehow, he wants us to believe that this completely made-up explanation, with nothing to suggest it, is still more probable than what is said by the very accounts we are using to determine that there’s even an event to consider. So you can invent a story that doesn’t align with and is not informed by any of the actual accounts, and if you consider the explanation in the accounts sufficiently improbable you can declare that yours is more probable.

Pearce echoes this sort of analysis:

Now, you can claim that some of these interpretations or theories or claims are inherently improbable. They may even be utterly wildly improbable. But that still puts them in the category of being far more probable, and with higher prior probability through precedence, than a dying and rising incarnate god-figure, who prays to himself and sacrifices himself to sit on his own right hand which somehow pays for the sins of humankind, which he created and had ultimate control over, for all of time.

So, again, he can invent interpretations, theories and claims that are wildly improbable and yet are still more probable than the supernatural explanation given in the actual accounts. Given this, it’s hard to see how anyone could ever demonstrate that a “supernatural” explanation is the one that we should accept, even provisionally. After all, in response the naturalist can simply invent an explanation and declare it the winner, simply because it’s naturalistic and doesn’t contradict the evidence. Given that you can almost always come up with an alternative explanation that is consistent with the known facts, this means that the naturalist can always invent an explanation that they can use to claim that the supernatural explanation isn’t the most probable explanation.

Something has gone wrong here, and I blame David Hume.

Hume rather famously made an argument that if someone tells you that they’ve experienced a miracle, it is at least in general more reasonable to argue that the person is lying rather than accept that they really experienced a miracle, no matter how truthful you thought they were, because it was always more likely that they were lying than that they actually experienced a miracle. This was based on the idea that miracles, by definition, are wildly improbable events — that’s how we know that something ought to be called a “miracle” — and so pretty much any other explanation has to be more probable. If it isn’t, then it would be an actual miracle itself. Of course, the problem here is that while miracles are supposed to be wildly improbable for natural or even human agency, they aren’t improbable when interpreted as an act of God. So we have reason to think that God could and would do that, and no reason to think that the cause could be natural or human. Thus, if it happened, it is more probable that God did it than that anything else did. Thus, the improbability argument works against Hume: once we’ve established that the event occurred, any explanation other than God is, by definition, more improbable.
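To make the structure of that probability comparison explicit, here is a minimal sketch in rough Bayesian terms. Every number in it is a made-up placeholder of my own, not an estimate that Hume, Ehrman, or Pearce commits to; the only point is the shape of the comparison and where the disagreement actually sits.

    # Toy sketch of the structure of Hume's argument. All numbers are
    # illustrative placeholders, not estimates anyone in the post commits to.
    # Two hypotheses, given a report that a miracle occurred:
    #   H_miracle: the miracle actually happened
    #   H_lie:     the witness is lying (or otherwise mistaken)

    p_miracle = 1e-9               # prior: miracles treated as wildly improbable
    p_lie = 1e-3                   # prior: even honest-seeming witnesses sometimes lie

    p_report_given_miracle = 0.99  # a witness to a real miracle would report it
    p_report_given_lie = 0.99      # a lying witness would report it too

    # Unnormalized posteriors (the numerators of Bayes' rule)
    post_miracle = p_miracle * p_report_given_miracle
    post_lie = p_lie * p_report_given_lie

    print(post_lie > post_miracle)  # True: on these priors, "lying" always wins

    # The objection above: if the event is granted as having occurred and is read
    # as an act of God, the tiny p_miracle is no longer the relevant prior, and
    # the comparison can flip.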

So Hume takes the first step towards denying the event itself using probability. But if you have no reason to think that the person is lying to you or hallucinating, then you have no reason to posit either of those just because you don’t like or don’t want to accept that that event actually happened. So by Hume’s argument, it is more reasonable to believe that someone who you know doesn’t lie and who has no reason to lie to you about this event is lying, because if you accept that the event occurred you’d have to accept a supernatural explanation whose consequences you don’t like. It’s one thing to say that you need more than one person’s word to accept that it happened, or that you disagree with their interpretation of what that event implies, but quite another to say that you don’t think it happened simply on the basis that the consequences of it happening are things you don’t like.

And we can see this carried on in the comments in this post. They consider the supernatural explanation so unlikely that they can prefer any other explanation, but all of those explanations require first dismissing the event itself. In short, they argue that Jesus was never really resurrected, and all of their explanations are aimed at demonstrating that. Now, while it’s true that the evidence for the resurrection isn’t as solid as the account of someone who does not lie, has no reason to lie, and is in a situation where there is no reason to think they were hallucinating, that’s not the argument here. The argument isn’t that the evidence isn’t sufficient, but essentially that the event itself is so outlandish that any explanation other than that the event in question actually happened is to be preferred. And, again, since you can almost always come up with an explanation that will fit the known facts, this means that there is no way to demonstrate that the event actually happened. Just as Hume could dismiss all evidence up to and including direct testimony from an incredibly reliable source, so too can Ehrman and Pearce dismiss all possible evidence for the resurrection, based entirely on them not thinking that the resurrection happened.

This is what the “improbable” argument is hiding. It’s not an honest intellectual argument, but a way to dismiss and ignore conclusions that you don’t like. At the end of the day, they need to be able to say that there is a way that the supernatural explanation could be more probable than a naturalistic explanation. Given their reasoning in this post, it doesn’t seem like they can do so.

Medical Model vs Social Model of Disability

April 21, 2017

So, this is a post where it’s not so much that I have a strong opinion on the various topics, but that in reading on the topic I can’t help but think that using philosophy and philosophical methods — specifically wrt conceptual analysis — would really help clear things up. Here Ania Onion Bula talks about the Medical and Social models of disability, and presents them as a dichotomy: either we take the Medical model, or the Social one, and she prefers the Social one.

So, to start, what are these models? First, we need to start with the idea of an impairment:

Impairment is a loss or deviation of normative physiological, psychological, or anatomical structure or function that can be caused by injury, illness, or congenital condition. It’s the description of what is happening in your body. So for example in the case of deafness, the impairment is a loss of hearing caused by x factor. Or in the case of my arthritis, it is the physical damage to my hip and the inflammation in the joint, caused by my arthritis.

Disability in turn is the restriction or inability to perform certain tasks or activities.

Under the Medical model, people have a physical or mental impairment, and that impairment itself leads them to be unable to do certain important things, and so they have a disability. Under the Social model, they may have an impairment, but it is the way that society is structured that turns that into a disability, as society is structured under the assumption that most people do not have that impairment and so there is no real need or desire to structure society to accommodate them. To put it more simply, the Medical model talks about people being disabled, while the Social model focuses on people being differently abled. Thus, the Medical model puts more focus on removing the impairment, while the Social model puts more focus on changing society to accommodate those different abilities.

The problem is that there are a lot of impairments, like blindness and deafness, that really do seem like they themselves are the main barrier to the life satisfaction of those who have them, cases where the person’s life would definitely be greatly improved if the condition were simply eliminated. She tries to address that:

What about people who are disabled in more socially understood ways, like deafness, or blindness, or autism, who want a cure?

To start with, understand that many people who want a cure, want it because society makes it hard to exist otherwise. It would be like wanting a pill that would turn you straight because the world treat homosexuality as if it is something evil and not because it is evil in and of itself.

In order for it to be an ethical choice, one option cannot lead to punishment. If you punish someone for not taking a cure, how is it different from forcing them to take it?

This seems incredibly condescending to those who are blind or deaf, especially, as it seems to argue that they aren’t capable of determining whether or not, in general, they’d be better off not being blind or deaf, because some societal attitude is supposedly what is really driving their desire for a cure. And this becomes even more problematic when we consider that the implication of this argument is that blind and deaf people’s lives would not be richer if their conditions were cured, which seems utterly ludicrous. There’s no societal change we can make that would allow a blind person to actually, say, really see a lovely sunset, or a deaf person to be able to experience a lovely symphony. It really does seem like their conditions deprive them of some experiences that they could otherwise have, and so curing those conditions would at least give them significantly more choices of experiences and thus allow them to perhaps lead far more fulfilling lives.

It seems to me that for anything that we would generally and rightly call a “disability”, the main determining factor will be whether removing the condition, in and of itself, would greatly improve the person’s quality of life. Let me use an example of mental conditions to highlight this, and talk about depression and introversion. Given that clinical depression, in and of itself, causes reduced affect and motivation and often leaves people with a duller and less fulfilling inner life — for example, not being able to take pleasure in things that would normally give you pleasure — it would seem reasonable to suggest that simply eliminating the condition would improve their life no matter how accommodating society is to depression. Even if they could function reasonably well in society, those who have clinical depression still aren’t going to be capable of enjoying life as much as they might want. That doesn’t seem to be true for introversion, as there’s no argument you can make that converting an introvert to an extrovert would automatically make their life better regardless of how society treats introverts. If society wasn’t so extrovert-focused, there wouldn’t be any significant issues with being introverted. We can thus call clinical depression a disability, and introversion simply a personality trait that is no better and no worse than any other. Thus those who are clinically depressed are disabled, and those who are introverted are, in fact, merely different.

So if we are going to talk about disabilities and have to choose either the Medical model or the Social model, it really does seem like the Medical model is the way to go for conditions where it really is the case that simply eliminating the condition would in and of itself improve the person’s quality of life, which includes her example of someone in a wheelchair. However, the dichotomy between the Medical model and the Social model is, in fact, a false one.

What we have come to understand is that there are people who have conditions, through no fault of their own, that in and of themselves limit their abilities. That means that, in some cases, they are going to be limited in what they do and what they can do based simply on the fact that they have that condition. For example, blind people are not going to be able to experience the visual sensation of a beautiful sunset. But there are, in fact, a number of cases where their condition simply makes them different, so that they don’t fit the standard model of the majority. Now, when people simply don’t fit the norm we tend to be less willing to accommodate them, instead asking them to adapt so that the majority don’t have to make efforts to handle rare cases. But when someone has a medical condition, we are, in fact, more willing to make reasonable accommodations. That’s because simply being different is seen as a choice, while being disabled is seen as not being a choice. But the long and short of it is that we can see someone as being hampered primarily by their impairment in some cases while recognizing that in other cases they are being hampered primarily by how society is structured.

Once we understand this, we ought to see that society does need to, at least morally, accommodate these conditions. But that accommodation has to be “reasonable”. And for an accommodation to be reasonable, it seems to me that at least this condition must be met: It must be at least slightly easier for them to accommodate you than it would be for you to work around the condition.
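To state that minimum condition a little more explicitly, here is a minimal sketch; the function name and the effort values are purely illustrative stand-ins of mine for however one would actually weigh the work involved, not anything from Bula’s post.

    # A rough formalization of the minimum condition above. The "effort" numbers
    # are made-up illustrative units, not real measurements.

    def accommodation_is_reasonable(effort_for_others_to_accommodate: float,
                                    effort_for_person_to_work_around: float) -> bool:
        """The accommodation meets the minimum condition only if it is at least
        slightly easier for others to provide it than for the person to work
        around the condition themselves."""
        return effort_for_others_to_accommodate < effort_for_person_to_work_around

    # Accommodating costs others little, working around costs the person a lot:
    print(accommodation_is_reasonable(2.0, 8.0))   # True: accommodation is reasonable
    # Working around is easy for the person, accommodating is costly for others:
    print(accommodation_is_reasonable(8.0, 2.0))   # False: the condition is not met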

This does lead to something she complains about:

Acting as though chronic illnesses or even just severe temporary illnesses like cancer, are not disability further plays into the ableist idea that people have to prove they’re “disabled enough”. Many people with chronic illnesses struggle with insecurities related to their own disability. We feel as though we are doing something wrong by asking for accessibility.

But I think her interpretation here is false. The whole point is to assess whether or not the proposed accommodation is the appropriate one for the situation. Let’s look at some of the things she proposes as reasonable accommodations:

When I was in university, for example, I was frequently too sick to come to class. A part of this had to do with the fact that I took the bus to and from school. The bus route did not have consistent access to restrooms. As a result, if I was having the kind of day where I’m running to the bathroom frequently, a bus trip put me in the situation of risking having a public accident. Taking the bus is also exhausting. It takes longer, is inconsistent with whether or not I will be able to sit down, involves me having to walk to the stop, and so it uses up more spoons. Some days I had to stay home, not because I didn’t have the energy for class itself but because I didn’t have the energy for the additional tasks that physically going to class would entail.

Under the medical model, there is not much I can do except maybe take so many pain meds that anything I try to learn that day will likely be gone forever before I’m even done processing the prof’s sentence.

Under the social model, however, there would be additional options – the professor could work with me to let me watch the lecture from home via skype, or pre-recorded videos. The professor could assign assignments that would make it possible for me to learn and show the work, without having to be physically there. Perhaps the university could help arrange some sort of ride share or carpooling program so that I can spend less of my energy going to class.

Now, most universities in Canada, at least, have departments dedicated to accessibility, even for temporary conditions. And that was already the case 20 years ago, so she almost certainly had access to that if she went to a Canadian university. So there might well have been options, especially if Skype was an option. But let’s look at what options she could drive herself that might improve her situation. For example, if she needed to miss class because of her illness, there’s a time-tested method for getting what she missed, which is borrowing the notes from another student. If she made her concerns clear to the professor, that might be something the professor could help facilitate, as could the carpooling issue. As for assignments, that’s harder, but again I was accommodated for a broken wrist by having a student volunteer sit with me and write things out for me, and I once proctored an exam for a student who had a condition that required that she have extra time to write it. The only issue is where you get participation marks, but again for an actual disability — as opposed to, for example, someone who is introverted — accommodation could be made for that by, say, not counting those marks.

While her thoughts here might just be suggestions, what they do — and what her huge “Treat disabled people as people!” rhetoric pushes — is promote the idea that she has an issue and other people need to do things to accommodate it. But in any discussion of accommodation, what the person themselves can do and ought to do comes into play. For missing classes, it’s a reasonable question to ask why she can’t just get the notes from someone else. Sure, it isn’t as good, but it ought to work.

What if instead we created a world that accommodated different ways of being? Imagine a classroom where students had access to both a noise room and a quiet room to work in, and they could come and go and choose as they please? Imagine having access to different styles of chair to fit what they feel most comfortable sitting in, or have the option to stand or kneel or even pace. Imagine if schools made it possible for you to learn in different ways so that you could get information in the ways that work best for you, and if you could present your gained knowledge in the way most comfortable for you.

But again here we see the very self-centered view of accommodation. There’s little thought here put into how this would impact the teacher or, well, everyone else. Being able to come and go as you please doesn’t work if the teacher wants to do a group lesson, and there are too many students in classes to do everything individually. And this also applies to, well, everything else on the list. I like to pace while I think, too, but I don’t do it in meetings because it would bother everyone else. We can’t really accommodate what everyone would want or what works best for them, as that would be too chaotic. That’s one of the reasons why having separate classrooms where things can be done more individually is a benefit, and then students can be integrated for classes and conditions where accommodation isn’t as required. But the entire attitude here is indeed an “I want this, so restructure society for me!”, and while that might happen to work and benefit society in general, it might not.

Here’s the most egregious one:

It is about teaching hearing children and adults the appropriate sign language for the region they’re in so that it isn’t up the Deaf person to have to undergo either painful treatments or go through speech therapy if they can’t or don’t want to.

So, the “reasonable accommodation” is for everyone in the world to learn sign language because they might come across someone who is deaf who can’t or won’t do things to make that less necessary, and thus might need to communicate with them in sign language. Also, since the region matters, they might have to relearn it just so that they can communicate with someone there who, again, happens to be unable to communicate any other way. That’s … kinda ridiculous. She tries later to argue that this could be a benefit for people who are not deaf, but shouldn’t that be their choice? Are we all going to have to learn to write in braille too? Ultimately, part of agency is accepting the consequences of your choices. If this deaf person chooses not to learn things that could help them, we aren’t obligated to bend over backwards to accommodate them. If we are going to have to do things and put in the effort, they are going to have to, too.

This is why my minimum condition compares the effort that the disabled person would have to make to the effort that the others would have to make. If it would be easier for the disabled person to work around their condition than it would be for us to accommodate it, it seems the height of selfishness to insist that we need to do the accommodation. But on the other hand, if us accommodating it would be easier than their working around it, what excuse do we have for not accommodating it? And all of this comes from rejecting the false dichotomy of the “Medical vs Social model”.

Diversity in Comics …

April 19, 2017

So, comic book sales aren’t going all that well. And so the question has arisen of whether that decline is being caused or exacerbated by diversity, or whether diversity is the way to reverse that decline. Alex Brown at Tor.com is arguing that diversity is not, in fact, the problem. She’s responding specifically to comments from David Gabriel:

Later, Gabriel gave another interview that, in part, rehashed that hoary old proverb that diversity doesn’t sell: “What we heard was that people didn’t want any more diversity. They didn’t want female characters out there. That’s what we heard, whether we believe that or not. I don’t know that that’s really true, but that’s what we saw in sales. We saw the sales of any character that was diverse, any character that was new, our female characters, anything that was not a core Marvel character, people were turning their nose up against.”

As I’ve already said, Brown thinks he’s wrong. I’ll get into her arguments later, but I think it will best frame the discussion if I give my opinion first:

Diversity doesn’t sell.

Now, when I say that, I don’t mean that a diverse cast of characters won’t sell, or that a female or black main character won’t sell, or anything like that. For the most part, if the characters and book are well-written and get noticed by readers/consumers, they’ll sell. What I mean by that is that using a claim of “This is diverse!” will not, in and of itself, drive sales, at least beyond the short term, especially in a field that hasn’t actually been diverse. The problem is that, from the start, you are going to have some fans who are deeply resistant to anything that might be considered diverse or a deviation from the norm. Maybe those fans are indeed racist and/or sexist, or maybe they just see it as too deep an intrusion of politics into their media. These people, as soon as they hear “It’s diverse!” as a selling point, are automatically going to avoid consuming that product. Now, the argument is that those fans will be balanced out by more “diverse” fans who would buy it for the diversity, but the problem is that if that’s not a form of media that they would normally buy, they aren’t likely to stay with it or even pick it up in the first place, because, well, they likely don’t really like that media, and not everyone — yes, not even all nerds or geeks — likes every type of “nerdy” media. So the hoped-for balance between those who hear the word “diverse” and spit and those who hear the word “diverse” and have their ears perk up probably isn’t going to happen.

But even if it would, trying to sell on the basis of diversity has an impact on “middle-of-the-road” consumers like myself. I’m probably as middle-of-the-road as you can get here, and when the main selling point of a work is “Look how wonderfully diverse it is!” my immediate reaction is “… Really? That’s the best you can say about it?” How about talking about how great the story is? Or the characterization? But simply saying “It’s diverse!” leads me to think that that diversity is the main point of the work, and not the story or characters or whatever. And I get very skeptical about a work when the best people can say about it is that it has a diverse cast. That skepticism will get me to avoid spending my money on it, and instead to buy things that are “safer”, where I know — presumably — what I’m going to get. So trying to sell it on diversity is going to push away people who don’t care whether it is diverse or not, but are worried that diversity is the only thing it has going for it.

So, while I say that a work being diverse isn’t going to hurt its sales, promoting a work for its diversity will. Now let’s look at Brown’s view on diversity and how it isn’t the problem:

Disregarding the sugarcoated PR update Marvel made praising diverse fan favorites, Gabriel’s comments are so patently false that, without even thinking about it, I could name a dozen current titles across mediums that instantly disprove his reasoning. With its $150 million and counting in domestic earnings, Get Out is now the highest grossing original screenplay by a debut writer/director in history; meanwhile, The Great Wall, Ghost in the Shell, Gods of Egypt, and nearly every other recent whitewashed Hollywood blockbuster has tanked.

But are these really good examples? Get Out is a fairly unique take on horror, and benefited from that. Ghost in the Shell is the best-known name out of the other examples, and was likely going to be a hard sell given that it is based on an anime that a lot of mainstream audiences have never heard of (as an example, I, who am more tuned in to these things than the average person, had heard of the anime but never watched it). She’s trying to do the comparison based on a movie that had some racial implications vs some movies that she calls “whitewashed”, but she doesn’t consider genre and quality and what impact those might have on their sales. So it’s hard to say that it’s just “patently false” when her examples aren’t ones that would, well, prove her point.

So let’s look at comics specifically. Maybe those examples will be better:

Even sticking strictly to comics, Black Panther #1 was Marvel’s highest selling solo comic of 2016. Before Civil War II, Marvel held seven of the top ten bestselling titles, three of which (Gwenpool, Black Panther, and Poe Dameron) were “diverse.” Take that, diversity naysayers.

Black Panther #1, boosted by the movie tie-in and starring an established Marvel character, did well, certainly. That being said, it would be a bit odd to challenge Gabriel using that as an example, since he talked about returning to core characters instead of promoting diversity specifically and, well, Black Panther, as I just said, is a core Marvel character. So let’s look deeper at the monthly numbers, starting in April, when Black Panther, Gwenpool and Poe Dameron were all in the top ten. The thing to note here is that those were all #1s, and Marvel had another #1 in that top ten, which was C3P0, which she ignores (droids obviously not being “diverse”). #1s always get a bump due to being the first issue, and all of these had ties to other things that would get them noticed. As I’ve already mentioned, Black Panther got a boost from the publicity from Civil War. Poe Dameron was linked to “The Force Awakens”. And Gwenpool was linked to both Deadpool and Spider-man, and was such an odd concept that people might well be interested in checking it out just to see what the heck was going on with it. Obviously C3P0 got the same boost.

So let’s look at what happened the next month, which had Civil War II #0 and maybe some other Civil War II crossovers. Black Panther #2 fell to 9, Poe Dameron fell to 12, and Gwenpool collapsed to 45. But that could be the influence of Civil War II, right? Not likely. Amazing Spider-Man #12 didn’t move at all compared to #10 and was only around 10,000 higher in sales than #11. Spider-Man Deadpool #5 sold basically the same as the previous issue. Star Wars and Star Wars Darth Vader didn’t lose any ground at all (Darth Vader actually sold more issues in May than in April, Star Wars had a slight decline). And Deadpool, despite releasing two issues that month (11 and 12), stayed roughly the same as well. So it’s far more reasonable that the decline came from the issues no longer getting the #1 boost than from Civil War II.

In June, more #1s flooded the top ten, and so they lost even more ground (Black Panther came in at 27, Poe Dameron at 43, and Gwenpool at 76), but Black Panther’s sales were mostly flat while both Poe Dameron and Gwenpool lost sales. For comparison, Star Wars stayed flat, Darth Vader lost some ground — but also had two issues in the month — Amazing Spider-Man lost ground but had three books in that month, including the Civil War II tie-in — which didn’t lose when compared to Amazing Spider-Man in May — and Spider-Man Deadpool’s sales were flat, as were Deadpool’s.
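To pull the chart placements cited above into one place, here is a small summary of the figures as given in the post; the dictionary layout is just my own way of presenting them, and the April entries reflect only the statement that all three debuted in the top ten.

    # Monthly sales-chart placements for the three "diverse" #1s discussed above,
    # as cited in the post (April to June 2016). Ranks only; unit sales are not
    # reproduced here.
    placements = {
        "Black Panther": {"April": "top ten (#1 issue)", "May": 9,  "June": 27},
        "Poe Dameron":   {"April": "top ten (#1 issue)", "May": 12, "June": 43},
        "Gwenpool":      {"April": "top ten (#1 issue)", "May": 45, "June": 76},
    }

    for title, ranks in placements.items():
        print(title, ranks)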

So, given these numbers … I’m not sure what “that” the diversity naysayers are supposed to “take”. It doesn’t really look like the new, diverse comics outperformed those focusing on core characters after the glow from their first issues faded, and most of them had influence from core characters or other media buttressing them in the first place. This is not a good argument that the idea that diversity doesn’t sell is just patently false.

Brown then moves on to differentiate the old school comic fans from the modern comic fans:

Comic book fans generally come in two flavors: the old school and the new. The hardcore traditionalist dudes (and they’re almost always white cishet men) are whinging in comic shops saying things like, “I don’t want you guys doing that stuff…One of my customers even said…he wants to get stories and doesn’t mind a message, but he doesn’t want to be beaten over the head with these things.” Then there are the modern geeks, the ones happy to take the classics alongside the contemporary and ready to welcome newbies into the fold.

So, technically, by this I’m both? My subscriptions included — when I still had them in force — Deadpool, Darth Vader, and Agents of Shield (with the latter clearly being “contemporary”). So I like my classics and I like my contemporary, and don’t care one way or the other about “newbies”. However, I am indeed one of those customers who says that I like stories and I don’t mind a message, but I don’t want to get beaten over the head with it. And, to be honest, I can’t see what’s wrong with that. Is Brown going to suggest that being beaten over the head with a message is a good thing? She could be trying to argue that what they see as “being beaten over the head with a message” is nothing more than being diverse, period, but she’d need to a) demonstrate that and b) well, actually say that. Which she doesn’t, as she moves on:

This gets to the point made by a woman retailer at the summit: “I think the mega question is, what customer do you want. Because your customer may be very different from my customer, and that’s the biggest problem in the industry is getting the balance of keeping the people who’ve been there for 40 years, and then getting new people in who have completely different ideas.” I’d argue there’s a customer between those extremes, one who follows beloved writers and artists across series and publishers and who places as much worth on who is telling the story as who the story is about. This is where I live, and there are plenty of other people here with me.

So, Brown is promoting customers who don’t care about the specific characters, and don’t care about the specific stories, but care about who is telling the story? I mean, okay, there are writers and artists that I might choose to follow to books that I might not otherwise buy, like Peter David or JMS, because I like what they do. But even then I’m not likely to pick up a work with a character that doesn’t interest me. And for artists, that’s more likely to be an exclusion list than an “Oh, I like their art but hate the character and story, so I’m going to buy it!” So … where do I fit in this paradigm? And where do the “old school” customers who do follow writers and artists around fit?

Or, does Brown really mean that she cares not about their skill, but about who they are? Does she follow them because she likes their work … or because they are themselves “diverse”? This would indeed be a difference, but I’m not sure that it’s one that we should promote as being a good way to approach comics, or that comics should try to appeal to these customers who don’t seem to care about the actual product.

Blaming readers for not buying diverse comics despite the clamor for more is a false narrative. Many of the fans attracted to “diverse” titles are newbies and engage in comics very differently from longtime fans. For a variety of reasons, they tend to wait for the trades or buy digital issues rather than print. The latter is especially true for young adults who generally share digital (and yes, often pirated) issues. Yet the comics industry derives all of its value from how many print issues Diamond Distributors shipped to stores, not from how many issues, trades, or digital copies were actually purchased by readers. Every comics publisher is struggling to walk that customer-centric tightrope, but only Marvel is dumb enough to shoot themselves in the foot, then blame the rope for their fall.

I have to agree with her, in some sense, on this. As I’ve said before, the subscription model is terrible, which stops me from subscribing. This is at least in part because they keep cancelling and rebooting books, and because they keep driving events that would require me to buy far more than I’d like just to get the entire story. Brown says more about this in the post and all of those points are reasonable. I do agree that this is probably causing more of the problems than “diversity”. But as I said above, the solution to that is not going to be promoting diversity, because that doesn’t help.

When you look at the sales figures, the only way to claim diversity doesn’t sell is to have a skewed interpretation of “diversity.” Out of Marvel’s current twenty female-led series, four series—America, Ms. Marvel, Silk, and Moon Girl—star women of color, and only America has an openly queer lead character. Only America, Gamora, Hawkeye, Hulk, Ms. Marvel, and Patsy Walker, A.K.A. Hellcat! (cancelled), are written by women. That’s not exactly a bountiful harvest of diversity.

But Brown thinks that, indeed, that’s the solution. On what evidence? What evidence does she have that ramping up the diversity is going to improve their numbers? None of her examples demonstrated that at all, and those examples weren’t bountifully diverse themselves. And then she says this:

Plenty of comics starring or written by cishet white men get the axe over low sales, but when diversity titles are cancelled people come crawling out of the woodwork to blame diverse readers for not buying a million issues. First, we are buying titles, just usually not by the issue. Second, why should we bear the full responsibility for keeping diverse titles afloat? Non-diverse/old school fans could stand to look up from their longboxes of straight white male superheroes and subscribe to Moon Girl. Allyship is meaningless without action.

So, those who are diverse and thus would be the intended audience can’t be expected to, you know, actually buy comics in the way that keeps them afloat. Instead, those who are not the intended audience, many of whom have no interest in being an “ally” in the first place, need to belly up to the bar and buy those comics for … reasons. Riiiiiiiiight. Or, you know, they can keep buying the comics that they, you know, actually like and let you buy the ones you like and keep them going. If you can.

Really, this is just ridiculous. If the comics can’t appeal to their own intended audience enough to get enough sales to avoid cancellation, then they should be cancelled, and appealing to those outside of that audience to save them is just … well, doomed to failure, and utterly entitled.

“Diversity” as a concept is a useful tool, but it can’t be the goal or the final product. It assumes whiteness (and/or maleness and/or heteronormitivity) as the default and everything else as a deviation from that. This is why diversity initiatives so often end up being quantitative—focused on the number of “diverse” individuals—rather than qualitative, committed to positive representation and active inclusion in all levels of creation and production. This kind of in-name-only diversity thinking is why Mayonnaise McWhitefeminism got cast as Major Motoko Kusanagi while actual Japanese person Rila Fukushima was used as nothing but a face mold for robot geishas.

So, know who “Mayonnaise McWhitefeminism” is? Scarlett Johansson. That’s a great way to encourage “allyship”: tossing an ally under the bus, and then driving the bus forwards and backwards a number of times just to really drive home how loyal you are to your allies.

Also, I agree that making diversity the goal is not a good idea, because it leads to simply counting diverse characters/writers/artists instead of making sure that, for example, things are actually done with those characters and their diversity, or that you are getting interesting, quality, and genuinely different narratives. So, given that … how come her examples above are all about counting the numbers? She just counts the numbers across more fields than simply the characters in the books themselves. Kinda hypocritical.

At the end of the day, using diversity as a main selling point doesn’t work. Diverse audiences won’t flock to media they don’t care for just because it happens to be diverse, those who hate diversity will avoid the titles like the plague, and everyone in between will just throw up their hands in frustration and retreat to those boxes of comics they have in their basement because, hey, at least they know what they’re getting. Brown’s claims in favour of more diversity aren’t demonstrated, and Gabriel’s comments ignore the real structural problems in comics that have nothing to do with diversity. Until people can figure out what’s really going on, comics are not likely to recover.

In Defense of Thought Experiments …

April 14, 2017

So, a common refrain I hear when people who are not philosophers talk about philosophy is disdain for thought experiments: “What can we possibly learn from such artificial examples, that are so disconnected from real life?” This is one of the things usually used to argue that philosophers are ivory tower intellectuals, more concerned with intellectual wanking than with solving real-world problems, which is then used to justify ignoring philosophy and focusing on more “realistic” approaches, like science. The problem is that this is, in fact, based on a complete misunderstanding of what thought experiments are meant to do and how they work. I’m going to try to correct this misunderstanding by focusing on how important thought experiments are for determining the morality that we ought to use to guide our everyday actions.

The first thing to note is that thought experiments are, in fact, very similar to scientific experiments, in both purpose and in how they work. Both aim to test out various theories, and have to do so by focusing on specific elements of the theories and filtering out all of the confounds. As such, lab scientific experiments are themselves incredibly artificial. You don’t test, for example, the ideal gas law by going out and experimenting in the middle of the park. No, you do it in a sealed container or lab where you can control the temperature, pressure and volume of the gas as much as you can. This holds for all scientific experiments, and is particularly important in psychological experiments. And yet people rarely complain — well, except sometimes for psychology — that these experiments are too “artificial” and so don’t really reflect “real life”. In general, the experiments are designed to isolate the specific parts of real life that we want to study without invalidating what really happens in real life.

Thought experiments work the same way. What we want to do with a thought experiment is isolate the particular notions that we want to examine (or argue for) without the confounds that real life might introduce. Thus, in the trolley thought experiment, we wanted to isolate the “5 will die if you don’t act, and 1 will die if you do” aspect in order to test whether or not our intuitions lean more Utilitarian. The experiment is also designed to avoid the confound of the impact of taking a direct action to kill someone, which might have a moral status; a Stoic or Kantian, for example, won’t be allowed to take a direct action to kill someone, so if a person decides to pull the switch they would definitely be using Utilitarian reasoning. And as it turns out we still missed a confound, which we discovered when we changed it to the “push a person in front of the train” example and found that many people change their minds on the permissibility of the action. So the examples need to be simplified and therefore “artificial” to allow us to test what needs to be tested.

Also, it is important to note that we want to get at what people really, at heart, intuitively — or even through explicit reason — think is the case. What we don’t want is them merely regurgitating an answer that their culture has specifically drilled into them for those specific cases from childhood. So, we can expect that everyone will have a ready answer for any normal, everyday situation, but that answer might be one that was generated for them from what they learned about morality in their childhood, or is a conclusion that they generated from a moral viewpoint that they no longer hold. Thus, we want to give them a situation that they don’t have a ready answer for, one that they will have to think about and engage their moral reasoning or moral intuitions about. That means, then, making an “artificial” example.

Now, the objection is constantly raised that a moral system can’t be expected to handle situations outside of the experience of the person and/or of human society in general. It is, they assert, an ad-hoc system cobbled together by evolution or something to handle interacting in society, and so isn’t designed to handle things too far out of normal experience. Thus, the answers that are given when we create these artificial experiments just aren’t valid; moral systems can’t and don’t need to handle such outlandish cases.

The problem is that we, in our everyday lives, may well come across cases that our moral system wasn’t originally designed to handle. Taking the “evolution” example, if we go back even 100 years — let alone the thousands or millions that evolution would cover — people then couldn’t have conceived, outside of science fiction works, that we’d have to deal with the morality of stem cell research. They couldn’t have conceived of such a thing being possible, and it certainly wasn’t something their experiences in everyday life could have prepared them for. If we are going to rely on a system that was developed or strongly influenced by factors so far in the past that people couldn’t have conceived of most of the moral dilemmas that we are facing today, we had better have some confidence that it can actually handle those moral dilemmas. If we abandon it because it “wasn’t designed for those questions”, then what are we going to use to settle them? If we limit our moral systems to handling only those cases that we already know, understand, and have ready answers for, then what happens when we end up in a situation where we don’t? Are we just going to muddle through without using any moral system at all and hope we get the right answer? Better hope we get it right the first time, then.

This approach would make moral systems meaningless. All we would be able to do is regurgitate the answers we picked up from … somewhere, and any time we end up in a sufficiently new situation we, if we were being honest, would have to declare our moral intuitions and moral reasoning suspect. There’d be no point in talking about even an evolved moral compass or moral system, because we could never trust it to be right except for those cases where we at least have declared that it worked right in the past. But, of course, even then we’d have no idea if it really worked right in the past, because those would have been new situations, too. Ultimately, to make this work means utter confidence in some overarching moral principle — like increasing happiness — that we can use to assess the results of an action to determine if it was the morally right one or not. Of course, we can then use that same principle to assess future actions as well, and from there even the artificial “thought experiment” ones, to see if they work out.

So this leads to another common protest, which is essentially that the thought experiments are designed artificially in such a way as to invalidly generate an “immoral” result using the basic principles or system that the person is using. Since the deck is stacked against a specific view in the first place, the fact that the view “fails” the experiment doesn’t say anything about the moral view itself. So we can’t use these sorts of experiments for the purpose for which they are most commonly used, which is to support one moral view over another.

Here, the issue is that, in general, this objection is raised about cases where the person who holds that moral view concedes that they applied their view to the example and came up with an answer that they themselves consider immoral. The person who holds that moral view is always able to respond by “biting the bullet” and simply stating that, despite the intuitions of the person proposing the experiment, or even despite their own intuitions that the answer is morally wrong, it really is the morally right thing to do. So if the person instead retreats to this objection, we can see that what they have is a contradiction in their moral system: when they apply their moral system, they come up with an answer, but then when they assess that answer against their moral intuitions, they believe strongly, nonetheless, that the answer is immoral. A moral system cannot survive having such contradictions, because that would mean that someone trying to follow it risks taking what they think are moral actions that, after they act, they consider horrifically immoral. Thus, any such case reveals a contradiction in their view that they need to resolve, and so cannot be dismissed so blithely.

Ultimately, thought experiments are designed for and perform a very important task: testing moral systems. As such, they need to a) engage the moral systems directly and b) challenge them. While some experiments might be too contrived or artificial to work, if you find yourself protesting that a thought experiment that challenges your own view is too artificial you really should consider whether it is the challenge that is the problem, not the experiment.

Jerry Coyne’s Sense of Superiority is Tingling …

April 7, 2017

So, Jerry Coyne recently pondered the question of whether religious people are stupid (or “a bit thick”). Or perhaps, to be totally charitable, the question of whether someone can be religious and still be called “smart”, as that’s really the question he talks about, though his summary talks about them being in some sense stupid or a bit thick. His answer is, of course, that the religious are at least not smart, are at least partially stupid, and are a bit thick. And as you might imagine, the comments on that post are full of atheists basking in their presumed intellectual and mental superiority, talking about how flawed the reasoning of religious people is and even bringing up the old “It’s a mental illness” and “brainwashed” canards … all of which is based on, at best, them happening to get one answer — albeit potentially an important one — right. Sure, some are pointing out that everyone has potential irrational blind spots, but that hasn’t really made an impact on the atheists there.

So, why does Jerry Coyne think that theists are “a bit thick”?

And many public intellectuals—and virtually all accomplished scientists—are atheists. Why? Because there’s no credible evidence for God. It’s palpably and painfully obvious that religion is a human construct and that the tenets of different faiths are not reconcilable. The things that the faithful say they believe are simply ludicrous. I cringe, for example, when I hear a “smart” person like Rabbi Sacks or the Archbishop of Canterbury profess such stuff.

To me, this means that someone, regardless of how “smart” they seem, is at the very least irrational if they believe in God or the attendant superstitions. It is as if their brain is a jigsaw puzzle with one crucial piece missing: the piece that accepts important propositions in proportion to the evidence supporting them. And to me that kind of irrationality is a form of stupidity, which the Oxford English Dictionary defines as “dullness or slowness of apprehension; gross want of intelligence.” It’s not that they’re totally stupid; just partially stupid.

Hoo boy.

1) That public intellectuals and scientists — presumably, people that Coyne thinks are smart — are atheists is not evidence that smart people are or tend to be atheists, and that that fact is related to their intelligence. There might be something specific in the nature of science that makes those people tend towards atheism, something that has no relation to the purported evidence or lack thereof. Like, I don’t know, naturalism? Coyne’s comment here is like saying that if we discovered that most computer scientists were introverts, and computer scientists are smart people, then extroverts aren’t or can’t be smart people. So that first sentence is at best meaningless, and at worst should lead us to wonder if the scientific worldview might be unduly biasing people against theism. So we’d need to look at the evidence.

2) That there’s “no credible evidence” is a claim Coyne needs to support, starting by pointing out what counts as “credible evidence”. And since theism just means “belief in god(s)”, he can’t appeal to the idea that it isn’t convincing to him; that would require it to be a knowledge claim, which I at least don’t make.

3) To say that it is obvious that religion is a human construct implies that he has sufficient evidence of that to demonstrate it to the point of knowledge. But since atheists have made a virtue over never having to prove that God doesn’t exist, colour me skeptical that he actually has that evidence. I certainly haven’t seen evidence that rises to that level.

4) He also — as is his wont — misses that just because it is obvious to him doesn’t mean that it is obvious to everyone. If someone merely doesn’t accept that naturalism is true then they are not going to find it “obvious” that a supernatural entity like a god can’t exist, while a naturalist will.

5) Since most religious people don’t hold that the tenets of faiths other than their own are true, that you can’t reconcile different faiths is not a problem because they aren’t trying to. They think theirs is right and the others are wrong. This might get into an argument over whether the evidence is stronger for their faith than for others, but the idea that the different faiths are reconcilable is utterly irrelevant to that claim.

6) In order to claim that theists are irrational, Coyne would have to know and examine why they maintain that belief, understand their reasons, and address their reasons. Since Coyne has spent an entire hours-long car ride with Dan Dennett, and has had compatibilists like Coel constantly try to correct him about what compatibilism wrt free will means, and still can’t get it right, this does not seem like something Coyne is capable of doing.

7) Coyne concludes that because theists don’t agree with him on one proposition, they must therefore be missing the entire faculty that proportions beliefs to evidence … despite the fact that, psychologically, it would be more likely in that case (if Coyne is right) that they have some sort of cognitive bias interfering here. This would especially be the case if, in other areas, they seem to have no problem apportioning belief to evidence … which would obviously be true for those that Coyne thinks might be smart but who have this odd attachment to religion or theism.

Look at it this way: if someone spent much of their lives worshiping Santa, elves, fairies, or even Zeus, and maintained in all seriousness that Santa delivers presents to Western children at nearly the speed of light each Christmas, you’d think they weren’t playing with a full deck. But somehow it’s okay if they do the same with Allah, Jesus, Muhammad, God, Vishnu, and the like. They can profess such stuff and still be considered “smart.”

But, again, most people don’t believe that, and their cultures consider those things to be false, and so it’s not reasonable to believe them unless you have good reason to. When your culture believes that religion is true, then it’s certainly more reasonable to believe that it is true unless you have good reason not to. Can Coyne offer good reasons not to? The idea that atheists have no burden of proof suggests “No”.

Coyne, essentially, considers theists to be in some sense stupid because we, at best, are wrong about something that Coyne thinks is obviously true. Well, then, if we take any position that Coyne holds that others disagree with — free will, scientism, etc. — they would be justified in believing that about him. Thus, we can all get a nice sense of superiority from considering that everyone who does not believe precisely as we do is “a little thick” while we, of course, are not. Alternatively, we can instead consider that maybe they’re wrong, and try to find out where they went wrong, and work with that. We might discover that we are, in fact, wrong. Or they might learn that they are. Or we might discover that the answers aren’t as obvious as we thought they were. At the end, someone might well learn something.

And we obviously can’t have that, right?

Bad Defenses of Bad Atheist Arguments: “History Is Unreliable”

April 5, 2017

So, here we are, finally, at the last chapter in Bannister’s book, and so the last post Seidensticker will make about the book. As the posts went on, it seemed to me that Seidensticker felt more and more frustrated with Bannister’s book and that it wasn’t providing him with any real arguments to address. This is surprising, since presumably before doing this he would have read most of the book to see if it was worth doing chapter by chapter. I mean, I went chapter by chapter with Philipse’s book (which I haven’t finished either commenting on or even reading yet), but there I only started it when I knew there were things that I really needed to talk about, and it was one of the books that Coyne insisted all theists had to read, so there was a built-in reason for me to take it somewhat seriously. Here, it really looks like Seidensticker picked up the book, thought it might say things worth addressing, and then started posting on it without checking that, or checking whether he’d feel that each chapter needed to be looked at, and so ended up very frustrated.

On my side, I knew going in that the posts would be bad, and couldn’t read all of them ahead of time to see how he’d end. My main frustration here is that Seidensticker doesn’t ever seem to actually defend the purported bad arguments … but, at least, my title is completely accurate (and that inability and/or unwillingness to defend the arguments was kinda the point of my writing these posts).

Anyway, it seems to me that the main issue here is how far one can push the line that the historical evidence we have is insufficient to claim that Christianity is true or that Jesus even existed without risking making all ancient history equally unreliable. The question, then, should turn on whether we have more evidence for other ancient historical figures or stories that we at least consider reliable enough to believe than we do for Jesus and what he did. This is, of course, only going to be a very minor part of this last post.

His complaint about Islam is different: “Muslim theology is exceedingly clear that Muhammed was just an ordinary human being.” Yeah, and Mark, the first gospel, makes clear that Jesus was, too. It opens with Jesus being baptized. There’s nothing about Jesus being part of the Trinity or having existed forever. Avoid the Christian dogma, and a plain reading of Mark likewise tells of Jesus as an ordinary human being.

So, let’s go look at the opening to Mark, shall we?

1 The beginning of the good news about Jesus the Messiah,[a] the Son of God,[b] 2 as it is written in Isaiah the prophet:

“I will send my messenger ahead of you,
who will prepare your way”[c]—
3
“a voice of one calling in the wilderness,
‘Prepare the way for the Lord,
make straight paths for him.’”[d]

4 And so John the Baptist appeared in the wilderness, preaching a baptism of repentance for the forgiveness of sins. 5 The whole Judean countryside and all the people of Jerusalem went out to him. Confessing their sins, they were baptized by him in the Jordan River. 6 John wore clothing made of camel’s hair, with a leather belt around his waist, and he ate locusts and wild honey. 7 And this was his message: “After me comes the one more powerful than I, the straps of whose sandals I am not worthy to stoop down and untie. 8 I baptize you with[e] water, but he will baptize you with[f] the Holy Spirit.”
The Baptism and Testing of Jesus

9 At that time Jesus came from Nazareth in Galilee and was baptized by John in the Jordan. 10 Just as Jesus was coming up out of the water, he saw heaven being torn open and the Spirit descending on him like a dove. 11 And a voice came from heaven: “You are my Son, whom I love; with you I am well pleased.”

So, the opening explicitly states that he was the Messiah, that John the Baptist was preparing the way for him, and that Jesus was the Son of God. I’m not sure how you get “an ordinary human being” out of that on any reading, plain or otherwise, beyond the interpretation — perfectly compatible with Christian dogma — that Jesus became Man. In a post where we’re talking about history, it’s not a good idea to start by pointing to a source and badly missing what it actually said.

Bannister declares that to defeat Christianity, you must address Jesus and his claims. He ignores that Jesus didn’t make claims; the gospels say that he made claims. How reliable is that record? And if history is that big a deal, you must acknowledge that historians scrub out the supernatural. Sorry, historians aren’t your friend.

Well, how reliable is that record? If you’re defending a claim that they aren’t reliable enough, you might have wanted to start with or stick to that instead of adding the sidebar of the supernatural. And if historians “scrub out” the supernatural, on what grounds do they do that? If they do it simply because it is supernatural, then historians may not be Bannister’s friend, but they would be letting philosophical views dictate their interpretations of history, which is bad for history. About the only real argument that can be made here is that, historically speaking, we’ve found that these supernatural elements are ones that tend to get added to these sorts of stories, so we ought to be skeptical of them. Sure, but that’s a) not what Seidensticker says here and b) not an answer to Bannister’s argument anyway.

Dawkins uses the game of telephone (“Chinese whispers” in British parlance) to show how the Jesus story is unreliable, but Bannister isn’t buying it. He mocks this approach:

We mustn’t think of Thucydides, or Josephus, or Tacitus, or St Luke as carefully interviewing eyewitnesses, reading sources, and weighing the evidence—goodness, no, they were ignorant ancient yokels, relying on what they half-heard, whispered into their ears, after the stories had made their way through a long line of pre-school children, high on sugar and gullibility.

Seidensticker’s initial reply?

Where do you start with someone so afraid of honest skepticism that he hides behind straw man arguments like this? Josephus said nothing about Jesus, and Tacitus wrote in the early second century. Thucydides died in about 400 BCE and so is irrelevant; presumably, Bannister uses him to say that the period produced well-respected historians. So therefore all ancient documents are reliable? Nope, that doesn’t follow.

But it does imply that you can’t merely look at assembled oral histories or ancient histories and declare them unreliable. You need to do something more than merely suggest that ancient historical works are formed by the game of telephone and are therefore so distorted as to be useless.

Seidensticker, shockingly, actually tries to do that:

Let’s review some of the historical weaknesses of the Jesus story that follow from Dawkins’ example of the game of telephone.

  • There were decades of oral history from event to documentation in the gospels.
  • There is a centuries-long period of Dark Ages from the New Testament originals to our best copies (more here and here). We can’t be certain what was modified during that period.
  • Much of Christianity comes from Paul, who never saw Jesus in person (more).
  • We don’t even know who wrote the gospels (more).
  • The gospel of Luke promises that the author is giving a good historical analysis, but why is that believable? You wouldn’t believe an earnest supernatural account from me, so why is it more believable if it’s clouded by the mists of time?
  • Matthew and Luke copy much of Mark, something that an eyewitness would never do.

I’m not sure how these can be said to follow from Dawkins’ assertion, as these seem to be facts that Seidensticker is mustering to show that the gospels are unreliable. But let’s take these in order:

1) Sure, but that would only mean that there’s a risk that it was overly corrupted through Chinese Whispers. And I’m not sure history says that decades of oral tradition mean that the traditions ought to be considered invalid when determining what historically happened.

2) Yes, that might cause some issues, but again I don’t think enough to invalidate them as historical sources.

3) I’m again not sure why that matters that much, since while he was getting things second-hand presumably he got enough of it from eyewitnesses (I’m not a Biblical scholar and so don’t know how much Paul interacted with the disciples).

4) If Luke says that he is going to make this a historical account, then we ought to at least consider that that is what he’s trying to do, and treat his work as such. Whether we accept what it says or not, we have to treat it like a historical account or an attempted historical account until we have real reason to think otherwise.

5) But neither of them is claiming that. Seidensticker explicitly says in the previous point that Luke is giving a historical analysis, which means that he’s going to gather up various sources — including eyewitnesses if he can get them — and use them to build his account. Mark was one of those sources. Hardly surprising. And the only gospel that even remotely claims to be an eyewitness account is John’s. So this is an utterly irrelevant point.

So, sure, there are some issues, but they seem hardly damning, and hardly enough to get us to treat them as nothing more than “Chinese Whispers”. And in fact Seidensticker falls into the trap of assuming that these can’t be historical analyses with his fourth point, so he in fact uses the argument Bannister thinks is bad without ever defending it.

He declares that the gospels are biographies. Wrong again—they’re better described as ancient biography, which is a quite different genre. An ancient biography isn’t overly concerned about giving accurate facts but with making a point. (More: Charles Talbert, What is a Gospel? p. 93–98.)

And how do we know that these are that sort of ancient biography? Again, recall that Seidensticker points out that Luke claims to be making a detailed account. In fact, let’s look at what Luke says:

1 Many have undertaken to draw up an account of the things that have been fulfilled[a] among us, 2 just as they were handed down to us by those who from the first were eyewitnesses and servants of the word. 3 With this in mind, since I myself have carefully investigated everything from the beginning, I too decided to write an orderly account for you, most excellent Theophilus, 4 so that you may know the certainty of the things you have been taught.

Luke’s point, in his own words, is to write it all down in as accurate a way as possible so that the person reading it can be certain that the things that he has been taught are actually true. Seidensticker’s weak “why should we believe him?” counter would indeed be a universal acid of history, because it can arguably be said about any work: any author could be either intentionally or unintentionally shading the truth to comport with what they already believe. Sure, we could find inaccuracies and changes that cast doubt on the account, but Seidensticker has no reason to attack the intention, and that intention alone seems to put it more in the range of “history” than of “ancient biography”.

Seidensticker then does his last set of questions and answers:

Jesus really existed; don’t believe Jesus mythicists! I don’t make that argument. I don’t care whether he was a myth or not. My point is that you have no reason to accept the supernatural claims in the gospels.

And again, Seidensticker refuses to defend the actual bad argument, and instead insists that he wouldn’t make that argument. Which pretty much means that he thinks the argument is indeed a bad one. But since some atheists make that argument, all Seidensticker is doing is agreeing with Bannister — without ever saying so — that it is a bad argument that atheists shouldn’t make, meaning that people can still deny that it’s a bad argument. Which, again, is not a defense of the argument in any way.

The gospel story isn’t fiction. If it were fiction, why invent these impossible-to-follow moral rules like looking at someone with lust equals adultery? Right—I never said the gospels were fiction. (Though fiction is still probably easier to defend than the supernatural.)

Look, the book is not called “Bad Arguments Bob Seidensticker Makes”. Thus, the book is not all about you. If you don’t want to or can’t defend the arguments, then don’t try. Instead, you damn them with the faint praise of “I don’t make that argument” when you know good and well that some atheists do.

The gospels weren’t myth. Right—they’re closer to legend. (Jesus probably a legend here; the differences between myth and legend here.)

So you’ll accept that any atheist who says it’s a myth is wrong? That would be very charitable of you … if you, you know, actually said that.

He says that the gospels have lots of place names with details about each, which refutes their being fiction. Right—I don’t say that it’s fiction. This is the Argument from Accurate Place Names fallacy.

Um, that fallacy is very often used to defend the idea that the work can still be fiction even if it includes real places, so that’s hardly something you’d want to mention right after pretty much saying that it’s not fiction. Of course legends and even myths include real places, but the — admittedly bad — argument is that works that put an emphasis on real places ought to be taken more seriously as historically inspired than a simple work of fiction. Which isn’t true. Now, if you find too many fictional places in a work, that’s a good sign that the work is a fiction, but the opposite is not true.

He marvels at the fluency of Jesus’s rebuttals to the bad guys. The story was honed over decades—I should hope that some compelling anecdotes would come out the other end. The stories that flopped didn’t make the cut.

This is a fair point. I’d need to see the original argument — and Seidensticker is lax in quoting or even summarizing arguments — but that the answers were good doesn’t mean that they are necessarily true. Again, fictional characters can make really good arguments, too.

He appeals to the Criterion of Embarrassment (the more embarrassing a story, the likelier it’s true) and gives as an example a passage from Mark in which a man calls Jesus “good teacher.” Jesus responds, “Why do you call me good?” Yeah, that’s embarrassing, and you’ve undercut your claims of deity. And just how is this supposed to give me confidence in the supernatural parts? He notes that Jesus died when he should’ve been a conquering hero. So much for him fulfilling the prophecy of the Messiah, eh?

For the first one, that indeed doesn’t sound all that embarrassing, so I’d go after Bannister on that tack, instead of the rather ridiculous “Jesus can’t really be a deity then!” claim, which seems to me to miss the entire point of Christianity. As for the second one, fulfilling the prophecy in a unique way suggests that it wasn’t simply manipulated to get the right answers, because if it had been, the story would have made a more direct link. Thus, this at least implies that there was, or was believed to be, a real person who died that way, at least necessitating a change in the interpretation of the prophecy. But, yes, it could still be a legend.

“If we were dealing with theological fiction, one would expect the edges to be straighter, the language more doctrinally polished.” More to the point, we’d expect that if we were dealing with the words of the omniscient creator of the universe. You’ve nicely shown that it doesn’t hang together and could never have been inspired by a perfect being.

But the gospels aren’t the words of the omniscient creator of the universe, as you yourself pointed out in your discussion of Luke. So they could be inspired by a real Jesus who was the Son of God — in the sense that his existence triggered the accounts — without it having to be the case that God wrote the words for them, which, again, is exactly the approach Luke makes very clear he is taking.

He gives Lewis’s (false) trilemma—the only possible bins to put Jesus in are Liar, Lunatic, or Lord. Wrong again. Unsurprisingly, he doesn’t address the obvious genre: not fiction but legend.

Fine. I’ll grant that it could be legend. Any real reason for us to accept that? I mean, we don’t have an account of King Arthur written by someone roughly contemporary who says he is trying to make a good historical account, whereas for Jesus we do, which tends to work against the gospels being mere legend.

Anyway, that’s all I’m going to look at here. In summary of the entire series of posts, Seidensticker is generally pretty consistent in not actually defending any of the atheist arguments that he is supposed to be defending. He consistently, in fact, ends up implying that he thinks they’re bad too. That’s … not the way to defend arguments.

Bad Defenses of Bad Atheist Arguments: “Atheists Have No Use for Faith”

March 29, 2017

So, we’re at the second last chapter in Bannister’s book, and this time the topic is faith, and whether or not atheists need it or rely on it. The underlying argument that I think Bannister is going after here — remember, I’m not reading Bannister’s book, and so have to rely on Seidensticker’s summaries of what Bannister is saying — is the idea that the only rational beliefs — religious or no — are those that conform precisely to the evidence. And, if we accept that, then there is no room for faith.

Bannister’s example this time is this:

In today’s episode, our hero is about to enjoy a quiet lunch when he spots Fred, who looks shockingly thin. When offered some lunch, Fred not only rejects the idea but knocks our hero’s sandwich onto the ground. “Haven’t you heard of the Panini poisoner of Pimlico?” Fred asks. It turns out that Fred is terrified of eating a randomly poisoned sandwich. He refuses to put his faith in the government’s health and safety agency and won’t eat anything that’s not proven safe, though he’s starving himself by playing it safe.

Seidensticker quotes Bannister’s summary later:

“Faith is the opposite of reason!” may make a great bumper sticker or tweetable moment, but when it bangs into reality—the small matter of how each and every one of us lives, every day, in the real world—it fails spectacularly. Try if you wish to live a totally faith-free existence, but that will require doing nothing, going nowhere, and trusting no one. . . . Faith is part of the bedrock of human experience and one on which we rely in a million different ways every day.

Seidensticker summarizes Bannister’s position as demanding certainty, and from the quote that does seem like a fair criticism. If we look at the example story, it seems that the person refuses to eat because they see a possibility that the food might be poisoned and that they can’t be certain that it isn’t. Thus, Bannister seems to be arguing that unless we have certainty of something, believing it to be true requires an act of faith. This does seem to be incorrect, as it is reasonable to say that if we know — or are justified in believing that we know — that something is true, then it isn’t an act of faith to act on it, and knowledge — or at least justification — doesn’t and can’t rely on certainty. And so, it seems, if Seidensticker wanted to go after Bannister here, he’d make a move along those lines: the things that atheists rely on that are not certain are, nonetheless, things we know, and so atheists don’t rely on faith.

Of course, that’s not what Seidensticker will do, or at least not to that extent. Instead of trying to defend the atheist argument, he’ll go on the offensive, trying to argue that Bannister is equivocating — and Seidensticker implies that it’s deliberate — on the meaning of the word “faith”:

Predictably, he’s determined to obfuscate the word “faith.” In fact, it can mean two different things:

  • Faith can be belief that follows from the evidence. This belief would change if presented with compelling contrary evidence, and it is often called “trust.”
  • Or, faith can be belief not held primarily because of evidence and little shaken in the face of contrary evidence; that is, belief neither supported nor undercut by evidence. “Blind faith” is in this category, though it needn’t be as extreme as that.

Acknowledging these two categories, assigning different words to them (may I suggest “trust” and “faith”?), and exploring the different areas where humans use them isn’t where apologists want to go. In my experience, they benefit from the confusion. They want to say that faith can be misused, but we’re stuck with it, which allows them to bolster the reputation of faith while it opens the door to the supernatural.

The problem with this is that, if we reference my above summary of the atheist argument, “trust” doesn’t seem to fit it very well at all. It’s hard to imagine that someone could be claimed to really trust someone if they only trusted or believed them precisely as far as the evidence they had suggested. We do seem to argue that to really trust someone means trusting them in cases where there isn’t sufficient evidence to know whether or not they are going to do a certain thing, and in fact even when the evidence suggests that they might violate our trust. If we only trusted someone not to violate our trust when we knew that they wouldn’t or couldn’t, it wouldn’t seem like we actually trust them. You could hardly be said to trust your spouse not to cheat on you if, for example, you stopped trusting them any time there was any indication that they might, or even that they might be in a position to do so. So at a minimum, even “trust” seems to involve trusting someone beyond what the evidence strictly says, a fact that Seidensticker acknowledges by having to add on “… if presented with compelling contrary evidence (emphasis added)”.

But this gives the game away, because adding that last part on gives theists a way out, by arguing that the counter evidence offered by atheists is not compelling. A good many theists make this exact claim, and I have to admit that I’m on their side; the evidence offered by atheists is not compelling. Seidensticker’s position is further undermined by his earlier entry in this series of posts where he argued against atheists having the burden of proof. If atheists really had compelling counter evidence, then there would be no argument over the burden of proof; they’d be able to meet any reasonable burden of proof and so would have their conclusions proven. So on what grounds can Seidensticker claim that the typical theist is acting on what he calls “blind faith” rather than on what he calls “trust”?

The only move he can make here is to argue that all of the examples of what we’d call “trust” are cases where we are making inferences from previous evidence, and thus using induction to get knowledge. This risks turning trust into knowledge, but it isn’t even a good counter to theists, given the arguments that theists often make. Inferring a God from our observations of the world is just as much induction as what atheists would be doing, and so again he’d face the need for compelling counter evidence. The most he can do is try to argue that those who believe based only on the Bible don’t have that sort of reason … but then he’d have to get into a deep analysis of when it’s okay to believe based only on a purportedly historical document, which we’ll touch on in the last post. Suffice it to say, things aren’t as simple as Seidensticker seems to believe.

Seidensticker can also claim that theists are actually immune to counter evidence, but he’d have to establish this in principle and not just based on what evidence atheists typically try to muster against theists. For example, he can trot out the quotes where some theists say that, if a scientific claim and a religious claim clashed, they’d trust their religion over science, but this doesn’t work because a) that’s just a clash over what methods they most trust and b) most theists will actually try to reconcile the two so that they don’t have to choose between them. And, at the end of the day, Seidensticker would have to argue that theists are ignoring compelling evidence to maintain their belief, which again he has not been willing to do.

Seidensticker’s final move would be to claim that theists have far too much confidence in their belief given the evidence they have. Sure, it might be okay to believe in God based on the evidence they have, but the notion of faith is to raise their confidence in that belief to the level of knowledge, if not to the level of certainty. While atheists may still be more confident in their “trusting” beliefs than the evidence would strictly permit, they also have a lot more evidence for those beliefs. Even in cases where they might seem to be holding an irrational belief in the face of evidence, they still base it on, at least, a long-standing experience with the person and a feeling that they know them well, and in the case of science with a past history of it working out. Thus, the theistic “faith” is more problematic because that extra confidence on less evidence also makes it more resistant to change than it ought to be.

If Seidensticker had actually made that argument, he might have a point. But it would still be problematic, because at that point the difference is not one of kind, as Seidensticker asserts, but of degree. Thus, we might very well be able to find cases where the “trust” of the atheists is just as much “faith” as that of the theists, and that possibility destroys Seidensticker’s argument. It may be the case that the theists’ faith is a problematic case of faith, a case where their faith is misplaced or misused, but that faith is still not unreasonable merely because it is faith, and so Bannister’s point holds: atheists do rely on faith, and the fact that faith is sometimes misused or misplaced does not make faith invalid. Thus, from there Seidensticker would have to focus on demonstrating that in that specific case faith is being misused, but not only is that not what the original atheist argument argues, that conclusion would also go against what Seidensticker himself says in the quote above.

Bannister moves on to Christian applications of faith. He imagines falling down a cliff and reaching for a branch to save himself. “What I know [about trees] can’t save me; rather, I have to put my facts to the test and exercise my faith. Now what goes for the tree goes for everything else in life. Facts without faith are causally effete, simply trivia, mere intellectual stamp-collecting.”

Here again, the comparison fails. Botanists are in agreement on the basic facts about trees, but not even Christians agree among themselves about the basic facts about God. First let’s get a reasonably objective factual foundation for your hypothesis and then we can worry about accepting it. You haven’t gotten off the ground.

So, as far as I can see it, Bannister’s point here is that nothing that he knows about trees will let him know that grabbing that branch while he’s falling will save him. The branch might not be strong enough. He might be falling too quickly for the branch to hold. The branch might have been weakened by something. So what Bannister suggests is that we need to act on our beliefs — i.e. “exercise our faith” — and then see what happens. To me, this is the heart of what reasonable “everyday reasoning” implies: form beliefs, act in the world as if they are true, and if contradictions occur adjust accordingly. And such an approach seems to be the best we can do; for everyday reasoning, and thus the majority of our beliefs, we don’t have the time and resources to test them out entirely before acting on them, and acting on them is usually pretty good at weeding out the ones that are false. I don’t think this requires faith, though, because obviously we want to act on our beliefs in accordance with the confidence we have in them and the potential consequences of being wrong. If there’s an action that I’m not certain of, and the consequence of my being wrong would be my death, I think the only rational move would be to go and check first. But in Bannister’s example we don’t have the time to check, and the consequences of being wrong aren’t any worse than the consequences of not trying, so we just go ahead and act. Seems reasonable, and not really faith, to me.

But note Seidensticker’s reply, or rather non-reply. He argues that Christians don’t agree on all of the facts about God. So? I see Bannister as advocating that each Christian act on their specific beliefs and see if it works out. Seidensticker would be insisting that Christians have to test out all of these beliefs and settle on the “right” facts before acting on them. Seidensticker also ignores Bannister’s point that none of those biological facts can justify the action here, beyond that sometimes branches are strong enough to save someone falling off a cliff. So, sure, we can all agree on those facts, but those facts aren’t going to justify the action that we’d be taking there. This, then, is a complete non sequitur, and nothing more than Seidensticker trotting out his own favourite canard. But again it doesn’t do anything to defend the original contention, or to refute Bannister’s argument here.

Seidensticker has two sets of questions and italicized answers here, but I’m going to skip the first set and focus on the second. Here is the preamble:

Bannister proposes that we consider different factors to see if they argue for God, against God, or neither. He gets us started with a few examples.

From this, it’s clear that Bannister is going to try to argue that at least some of the examples purported to argue against God don’t do so strongly, and by implication we can argue that they won’t provide compelling evidence against the existence of God, which would then mean that we might have “trust” and not “faith”. Remember that.

Evolution. He uses the Hypothetical God Fallacy (let’s assume God first and select facts to support this conclusion) to say that this fits in the Neither bin. Who’s to say that God couldn’t use evolution? Nope: evolution doesn’t prove God, but it explains a tough puzzle, why life is the way it is. This is a vote against God.

Well, putting aside the fact that we still have puzzles … why does this still count as a vote against God, just because it solves — in Seidensticker’s mind — the “puzzle” better? Again, the counter is that God could very well choose to use evolution to achieve his goals. If this is not inconsistent with God, then evolution does not provide compelling counter evidence against the existence of God. At least, accepting that God could have done that and remained consistent with our idea of God weakens that potential counter argument, and Seidensticker never actually addresses that counter argument. This is putting aside the fact that I’ve already addressed the Hypothetical God Fallacy and found it wanting. For evolution to count as a vote against God, it has to be the case that us having evolved is some kind of contradiction — even of expectations — of our idea of God. To assess this, we have to ask the question “If God existed, what would human development look like?”. If evolution is consistent with that, then Seidensticker has no point … which is probably why he wants to avoid allowing theists to ask that question despite the atheist argument depending on doing that first.

Evil. He concedes that this may be a vote against God, though he falls back on the “How can an atheist say anything is objectively wrong?” fallacy. Atheists don’t make that claim. Atheists are waiting impatiently for evidence that objective morality exists.

Okay, first, there are some atheists who make that claim, such as Sam Harris, who was mentioned earlier in the post. Second, and more damningly, if objective morality doesn’t exist, then how do you get the “Problem of Evil” off the ground? Even the weaker versions of the Problem still rely on the purported contradiction being a good and moral God allowing so much suffering to exist in the world when God could clearly stop it. This also relies on us judging God by our moral standards and claiming that we understand morality well enough to know that God ought to be morally obliged to do so. If morality is not objective and is instead subjective, then a) our moral standards can’t be directly applied to God and b) we have no case to make any claim about moral standards at all. The best the atheist could do, then, is say that they wouldn’t like a God whose moral position would allow that much suffering, which is hardly a contradiction or any evidence that God doesn’t exist. So Seidensticker’s reply that atheists don’t believe in objective morality or that anything is objectively wrong actually makes the entire “Problem of Evil” meaningless and irrelevant. That’s hardly the way to defend it as being a “vote against God”, which from the first point we are led to believe was Seidensticker’s goal here in addressing the examples.

Reason. How can there be reason without God?? This is a vote for God. Nope. Reason is an emergent phenomenon. If you’re saying that science has unanswered questions about how human consciousness works, that’s true, but Christianity doesn’t win by default. Christianity has never answered any scientific question, so there’s no reason to imagine it will this time. This topic is related to Alvin Plantinga’s Evolutionary Argument Against Naturalism, to which I respond here.

I’m … not sure how this is a response here. At best, Seidensticker provides an alternative explanation that is compatible with naturalism, but simply saying “It’s emergent” is not a proof of that, and the link to the argument against Plantinga — which I’ve provided here for convenience — is simply the old argument that natural selection would weed out such beliefs. So at best Seidensticker argues this to a neutral position, which is surely not what he wants to do. And as he provides no proof or evidence for this position, and seems instead to be relying on the old canard about Christianity not providing scientific answers, I suspect that he does that because he doesn’t actually have an argument for it, and likely has no idea what it would mean for a phenomenon to be emergent … or what the consequences of reason being one would be.

Next time is the final post, talking about history in general.

Pay Gap Myths

March 24, 2017

Stephanie Zvan is talking about what she calls the “Myth of the Pay Gap Myth”. Essentially, a number of people have argued that when we actually run the numbers, the long-standing feminist talking point of the “Pay Gap” is revealed to be a myth. The purpose of Zvan’s post is to argue that the stance that the Pay Gap is a myth is, in fact, a myth itself, and thus that the Pay Gap is real.

In order to assess this, I think we need to untangle the various positions wrt the Pay Gap. The classic Pay Gap is the idea that women are paid less for doing the same work as men. Which has thus led to the common slogan of “Equal pay for equal work”. This implies — and many of the personal anecdotes have specifically claimed — that if you have a man and a woman working the same job and the same hours with the same experience, the woman will be paid dramatically less. Thus, when we get claims like “Women only make 77 cents on the dollar compared to men!” the context implies that this is true for that case; a woman can be doing exactly what a man can be doing and be paid 23 cents on the dollar less, on average, than him.

This was always a suspect notion, as many pay equity efforts, in order to make their case and attempt to bridge the Pay Gap, had to do so not through anti-discrimination measures, but through reclassifying fields into “equivalent” fields, where arguably the female-dominated field was the same as the male-dominated field but was paid less only because it was female-dominated. This immediately raises suspicions that if you have a man and a woman working the exact same job the Pay Gap isn’t all that significant. And the latest charges — as seen in the quotes in Zvan’s article — are attacking this notion, pointing out that when you do compare men and women working the same job, the same hours, and with the same experience and performance, the difference shrinks to almost nothing.

Zvan concedes this part, and this to me seems sufficient to make a claim that the Pay Gap, as outlined above, is indeed a myth. What Zvan is going after is a possible implication of that, which is that therefore the main reason for the gap in salaries is due to the choices of men and women, and that therefore there is no systemic discrimination to deal with. But at least here we can conclude that any solution that is based on assuming that companies pay men and women different amounts just because they are men and women is a non-starter, because that’s not the problem. As we shall see, the big difference here is going to be over social expectations, not over explicitly sexist policies.

While Zvan lists a number of things that impede the progress of women in the workplace at the end of the post, the two big things she will focus on are the impact of a lack of flexible schedules and of rewarding working overtime. Thus, to make her case, Zvan needs to show both that women have no real choice with regards to those aspects and that those aspects aren’t legitimately better for business, which she will somewhat attempt to do, in a bit of a haphazard manner, which makes it really difficult to organize my response. I’ll start with the idea that this isn’t a choice for women, and then work into whether these things are a legitimate business requirement after that. This means I’ll likely jump around a bit in her post, which hopefully won’t be too confusing.

Zvan, as it turns out, won’t really make a case that allowing flexible work schedules is a huge boon to businesses, but she will focus on demonstrating that requiring a more flexible work schedule is a need and not a choice for women:

First, even though women work fewer paid hours than men, they work the same number of hours overall. The reason women more frequently require constrained work weeks and more flexibility in their schedules is that they do the bulk of the unpaid work that makes our society run, particularly caregiving, both for children and for other adults.

Zvan uses a study of parental leave — mostly leave after childbirth — and the differences in its pay impacts across countries, summarizing it this way:

At first glance, European results would seem to suggest a preference for childcare over other types of work in women. These subsidies sometimes worsen wage disparity by increasing the amount of time women spend outside the labor market (pdf). In Sweden, however, some of that subsidy can only be received as paternal leave. This helps men overcome the stigma of taking time off work, childcare labor is more evenly shared, and the contribution of childcare to pay inequities is eliminated.

This seems to overstate the case a bit, as while pushing for quotas on parental leave that has to be taken by the father seems to have had an impact, it hasn’t been equal enough or gone on long enough to indicate that the childcare part of pay inequities has been eliminated (even in the study, there’s a lot of “may” language around there). But despite the rather underwhelming empirical evidence — again, even the study doesn’t want to say that this is really the case — the underlying argument is sound: if being on call all hours of the day and working more hours is seen as a benefit to an employer, women who face more pressure to take care of children and of the family as a whole are not going to be able to do that. Thus, they will at least be seen as less valuable to their employer and won’t get the raises assigned to those who show greater loyalty or who can put in the hours to develop themselves or jump on big opportunities that require the extra time. And it is certainly reasonable to say that women feel far greater pressure to look after the personal family matters than men do. Some of this is due to societal pressure based on the old patriarchal expectations, and some of it is just a result of the fact that women tend to be the person in the relationship with the lower salary, and so it is more reasonable for them to risk their job or salary advancement than it is for the man, which comes both from the “Pay Gap” and also from the social tendency that finds it more acceptable for a woman to marry a man who makes more than she does than the inverse.

But as we saw last week, these social pressures also have an impact on men. While women will feel more pressure to pick up that unpaid labour, men will feel more pressure to maximize the worth of their paid labour. Thus, if a situation comes up where there is a choice between, say, putting one’s family responsibilities aside for something that will improve their perception — and thus future pay — at work, women will feel strong social pressure to focus on the family responsibilities and pass on the employment opportunities, while men will feel strong social pressure to take up the employment opportunity at the expense of their family responsibilities. In short, we allow the excuse of “I had to work” more for men than we do for women, but it’s also seen as more acceptable if a woman says “I had to look after the children” than it is for men (although that is changing).

Thus, when it comes to choice, neither men nor women really have choices here. Well, of course, in a sense they do, but they both face strong and diametrically opposed social pressures wrt them. Men, as the presumptive primary providers, will always face pressure to take a job that maximizes their earning potential, which means that they will always tend to put earning potential ahead of any other factor. Thus, as long as they are capable of doing it, men will make choices to maximize that potential no matter how many hours they have to work or how crappy the job is. Yes, there are differing levels of motivation and cost/benefit associations, but in general men are socially conditioned to lean to the side of making more money and getting a better and more stable job. Even with the feminist influence on society, the same is not true for women. The strongest feminist motivation for higher wages and higher paying jobs is essentially an “I’ll show them!” motivation, proving that she is as good as or better than the men she works with. This isn’t a motivation that, I think, can motivate most people; most people just want a good life and don’t care that much about proving themselves to others except when it comes down to direct confrontation. The other motivation is for a fulfilling job, but for that the qualities of the job beyond simple pay are a more important factor. If the job is too demanding, then it isn’t fulfilling, and women have little reason to accept an unfulfilling job just because it happens to pay more.

Ironically, this distinction might mean that the constant discussion of the points Zvan makes at the end actually makes things worse for the pay gap. Men are more likely to accept worse consequences — even to the point of having to fight discrimination — in order to get more pay, while women are less likely to do so. So men, arguably, are more willing than women to fight through discrimination to get a higher paying job, as long as they believe that they can succeed. So if one constantly says that there is a terrible amount of sexism and harassment in a field, this will discourage women from going into that field even if they, in actuality, could easily handle that level of sexism and harassment. They have less of an external motive than men do for going into it anyway. When it comes to racial discrimination, it seems to me that a big factor there is that many people who might face racial discrimination in certain fields think that it will be so strong that they simply won’t be able to succeed, and so they settle for the highest paying job they think they can get. But if those men thought they could achieve it, they would be willing to face more problems in order to do so.

Thus, the social pressures push men and women apart on the overall average pay scale, as men feel social pressure to maximize pay while women feel social pressure to, at least, minimize the impact their job has on their family responsibilities. Both need to be addressed, and while arguably forcing men to take parental leave can work to break that up, that can have other issues, including ones of practicality. But it is clear that if women are to be said to not really have choices in that regard, neither do men.

Okay, so finally let’s look at whether these companies are, in fact, really reasonable in asking for the main things Zvan focuses on and thus rewarding people who are willing to do it over those who aren’t, because if they are being reasonable then one of the big thrusts of her post is lost. Again, she doesn’t really argue that for flexible work schedules, but she does try to argue that overtime isn’t actually a benefit. She starts by characterizing why employers are pushing for overtime more lately:

Let’s look at the math. If you’re an employer who offers decent benefits, those benefits typically cost roughly the same as your direct payment for labor. In other words, a $20 hourly pay rate actually costs you $40 an hour. But benefit costs don’t grow much with additional time worked. Hours of time-and-a-half overtime at $30 look like a steal when you compare them to hiring another employee at $40 for each regular-time hour.

The first thing to note about this is that it has the implication that one of the best ways to eliminate this part of the gap is to look at how much benefits cost. If we could reduce the cost of benefits — or even offer less — then overtime wouldn’t be seen as cost-effective anymore, and employers would just hire more people. So perhaps the real problem is that benefits are so generous that they cost too much, encouraging employers to find ways around that cost, including paying overtime, which starts at time and a half.
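
To make the arithmetic in Zvan’s quoted comparison concrete, here is a minimal sketch in Python, purely for illustration, using only the figures assumed in her example: a $20/hour wage, benefits costing roughly as much as the wage, and time-and-a-half overtime that adds no extra benefit cost.

    # Cost comparison from Zvan's example (assumed figures, illustration only):
    # $20/hour wage, benefits costing roughly the same as the wage, and
    # time-and-a-half overtime that adds no extra benefit cost.

    BASE_WAGE = 20.0           # dollars per hour
    BENEFITS_PER_HOUR = 20.0   # benefits assumed to cost about as much as the wage
    OVERTIME_MULTIPLIER = 1.5

    def overtime_hour_cost(wage=BASE_WAGE):
        # An existing employee's benefits are already paid for, so an extra
        # hour only costs the overtime wage.
        return wage * OVERTIME_MULTIPLIER

    def new_hire_hour_cost(wage=BASE_WAGE, benefits=BENEFITS_PER_HOUR):
        # A new employee costs the wage plus the per-hour cost of benefits
        # for every regular hour worked.
        return wage + benefits

    print(f"Overtime hour: ${overtime_hour_cost():.2f}")   # $30.00
    print(f"New-hire hour: ${new_hire_hour_cost():.2f}")   # $40.00

On those assumed figures an overtime hour looks ten dollars cheaper than a new-hire hour, which is the incentive Zvan is describing.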

The second thing to note is that this ignores the previously stated point that workers who work more overtime and are willing to work less flexible schedules get paid more in terms of base salary than those who won’t. Her own source insists that for salaried employees this can be in the range of twice as much. At that rate, Zvan’s argument that they are trying to save money by not hiring someone seems a little shaky. And this is the key to her argument, as she concludes:

If long hours happen often enough in your business to treat working them as critical, it’s time to hire more employees.

So we need to examine whether the solution to most overtime really is, in fact, simply to hire more people.

So let’s start with manufacturing jobs. Most of them are shift work — and unionized — and so both flexible schedules and overtime pay get complicated. Since much of the work is dependent on the operations of the entire factory, it’s not possible for someone to, say, show up at 5 and leave at 2. At 5, they’d either be joining the previous shift or, if it is the downtime between shifts, standing around doing very little. This also holds if they want to work a little overtime, as coming in at 5 and then trying to work until 7 to get 4 hours of overtime in a day isn’t going to be, at least in general, very cost-effective for the company. So the most cost-effective way for a company to use overtime to replace hiring another employee would be to have them work two shifts in a day instead of just one. But to do that, you either need to have someone who can do that constantly over the long haul or you need to try to do that for an entire shift. Neither really works. So instead, manufacturing overtime, in my experience, has been either for jobs that are mostly independent — where you have two or so people who can work on their own without relying on anyone else and both are willing to work overtime — or as temporary replacements on later shifts — when people can’t come in or someone suddenly quits — or for things that need to be done but that can’t be done while things are running, like maintenance. None of these are things that you can easily hire someone else to do, since they won’t be full-time positions or will be only limited positions … or both.

But what about service jobs, like wait staff, fast food, or department stores? Well, the good news here is that these jobs tend to have more “shifts” available, and so tend to have more flexible hours. You can’t come in to work too long before the store or restaurant opens, but the shifts don’t tend to be as long, and so someone can work as long as the place is open. The problem for Zvan’s argument here is that that flexibility lends itself greatly to part-time work, and as far as I know both in the United States and in Canada — as well as in a number of places around the world — the benefit requirements are lower for part-time workers than they are for full-time workers. Thus — and we’ve seen this in these sorts of jobs over and over again — the most cost-effective way is to replace full-time workers with part-time workers, not demand overtime. Thus, the only time a company will push for overtime in these cases is for particularly important workers or particularly important times, such as having your experienced person in the department around longer so that they can answer the questions of customers and tell each employee as they come on shift what needs to be done, or having someone work a few extra hours because the busy time constantly runs a couple of hours past what would be a reasonable leaving time. Again, neither of these cases is one where you can simply hire someone else to step in when the other person has to leave.

Also, it is interesting to note that in my experience, at least in Canada, companies don’t seem to be doing this. Instead, they tend to be simply not having people on staff in the off-hours. As someone who tends to try to arrive for opening almost everywhere I go, I tend to see departments or cashes having no one working at them, even when it would be useful for them to have someone there in order to make sales. “Just hire someone” doesn’t seem to be workable and they, at least, don’t seem to feel that they lose enough business to bother staffing those areas, with overtime employees or not.

So, what about salaried employees? That’s the focus of Zvan’s source here, but it doesn’t seem to work either. The problem is that salaried employees tend to be judged on productivity rather than on hours worked. In general, there are a number of things that need to get done by a certain time, and they don’t really care how many hours you work as long as those things get done. In software design, this is always a number of “features” that the company has either promised to customers or that they feel they need to make sales. If you can get them done without working overtime, great, but if you need overtime to get them done and working with few enough bugs then that’s what’s “expected”. In Canada, they aren’t actually allowed to ask you to work overtime — since you don’t get paid for it — but they are allowed to note that you didn’t get your work done on your performance review. And a lot of the time this overtime is pushed either by market pressures or by things just not working out the way you’d expect. But hiring someone else isn’t always an answer. You can’t claim that adding one person to a feature will reduce the time it takes to complete it by that person’s person hours because software design doesn’t work that way. And sometimes you don’t need another full-time person, but you just need a few more hours a week to catch up on it. Hiring a full-time person for that job doesn’t work. In addition, it may be the case that there is specific knowledge required to do those things effectively, knowledge that a new person won’t have.

It seems to me that most of the salaried positions are like that, but there are exceptions. The one I constantly hear about is nursing, where hospitals and the like are understaffed. But in these cases, the issue is not a calculation about whether overtime is cheaper than adding a salary and benefits, but that there is no room in the budget to add another salary at all, making Zvan’s argument irrelevant to them.

While there are likely some cases where businesses say that they can save on the benefits by getting someone to work overtime rather than hiring someone else, I can’t see that as being the major driving factor behind the increase in overtime, and Zvan provides no evidence beyond a shaky argument that this is indeed the main factor. But even if we accept all of Zvan’s comments that it is this cost analysis that is driving this, and that it is wrong because it doesn’t include the loss of productivity of workers working overtime, we can still ask whether an employee who is willing to do this when necessary is more valuable than an employee who isn’t, and thus should be paid accordingly. After all, even if we accept Zvan’s reasoning, there will always be situations where overtime would be necessary or beneficial to the company, and so should we reward employees whose schedules are more flexible to the company’s needs and who can work more hours when required more than those who can’t? Are these employees really more valuable to the company?

Well, given that, it seems obvious that they are. Even if they aren’t regularly working more hours, and even if they regularly take advantage of flexible work schedules, an employee who can shift their schedule when required or who can work more hours when required is a more valuable employee, all other things being equal. They can fill in when someone gets sick or can’t come in. They can get more things done. They have an easier time arranging things so that they can attend important meetings or meet with important customers or fix something to get a customer up and running at a critical time. Yes, flexible working hours are a benefit for employees, but a flexible employee is a benefit for customers. So while we can argue over specific uses/demands for inflexible work schedules and overtime, in general an employee willing to work inflexible work schedules and overtime when required is the more valuable employee. So companies, it seems to me, are doing right to reward those employees who are willing and able to do that; the only debate here is over whether companies ought to be encouraging/asking for it as often as they are, which is another discussion.

So, in summary, the idea that men and women working the same job with the same experience get paid significantly differently is indeed a myth. However, there are a host of social pressures working on men and women that encourage men to put in the time and effort to maximize their pay while encouraging women to minimize the impact their jobs have on their family life. These social pressures are, indeed, probably the biggest factor driving the overall difference in take-home pay between men and women, and neither men nor women have any greater choice due to those social factors. So it is indeed far too simplistic to ascribe this gap to simple “choice”, but also too simplistic to ascribe it to simple “sexism”, where that only looks at the responsibilities of women. In order to solve this, we need to break down the idea that the main contribution of men to the household is their salary and that the woman’s salary is secondary to her family responsibilities. Until we do that, the choices will still be made the same way and the gap will never close.

Bad Defenses of Bad Atheist Arguments: “Atheists Don’t Need God for Meaningful Lives”

March 22, 2017

So, the next chapter in Bannister’s book that Seidensticker is going to look at revolves around meaning. Seidensticker starts with an argument that should be familiar to us:

Why is this hard? I say that my life has meaning, and that’s it. That’s not a grand platform, but it’s all I’ve got. And it’s all I need. I make no claim for absolute or objective meaning, just my own meaning. Like so many before him, Bannister seems to think that the only meaning is an objective meaning. For this, I point him to the definition of “meaning” in a dictionary.

Again, just as when he talked about morality, Seidensticker doesn’t give the dictionary definition that he thinks makes his case, or demonstrate that it does make his case. And here it is even harder for him to argue that meaning can’t be absolute or objective, and so here he almost fudges on the question by implying the minimum argument he can make: meaning doesn’t have to be absolute or objective. The problem is that, at a minimum, this is what would be up for debate here. If Seidensticker means to argue that thinking that meaning is objective and absolute is wrong, he needs to provide evidence and argue for that to demonstrate that Bannister is wrong. If he merely wants to claim that it might be possible to have meaning that isn’t absolute and objective, and that therefore Bannister is wrong to assume that it must be objective and absolute, then the immediate response has to be “So what?”. Saying that maybe Bannister is wrong to think we need an absolute and objective meaning in no way addresses his contention that we can’t have meaning without God — or, at least, that atheists are wrong to assume that we can. He might be wrong. And so might atheists. Why should anyone think that the atheist move to personal meaning works at all?

Seidensticker next tries to address this example from Bannister:

In today’s opening episode, our hero dreams that he’s wandering through a penguin colony. He muses that penguins have meaningless lives, but one penguin speaks up and says that, on the contrary, his life has plenty of meaning. He makes his own meaning. And then he gets eaten by a sea lion.

So, let me shake out what I think this example is aiming for. The penguin can insist that he makes his own meaning, but having that sort of meaning has to, in fact, link to goals and purposes and things to achieve in accordance with that meaning. But this assumes that what one chooses for that meaning can lead to goals that are in principle achievable. It would be a very odd meaning of life that sets goals and purposes that the person cannot, in fact, achieve. But it is clear that the penguin’s meaning of life didn’t include getting eaten at that time, and getting eaten meant that any goals or purposes would now never be achieved. Thus, the penguin would not have fulfilled the meaning of his life. And this is because the universe does not care at all about those personal meanings, and so will provide no help in achieving them. Thus, at a minimum, the meaning of our lives is greatly constrained by the universe and what we can do in it. Which also means that our self-selected “meaning” may well have to change repeatedly as we discover that whatever that self-selected meaning is is just not achievable by us in this universe.

But if we have a meaning determined for us by the force that created the universe, then these problems go away. The universe will be set up for us to achieve the purposes implied by that meaning, and all we have to do is figure out how to actually do it. And if the penguin gets eaten, then that event itself would further the purpose of that penguin’s existence, and so would fulfill the meaning of the penguin’s life rather than frustrate it. This, then, would be very comforting, as we’d have a set meaning — in Bannister’s case, it’s “Find out the purpose God intends for us” — that would never change, and that the universe and pretty much every action we take and everything that happens to us works to fulfill.

If Seidensticker was paying attention, he’d see the flaw in this idea of meaning: it proves too much. What reason do we have to actively pursue the purposes and goals that follow from our idea of meaning? Surely even refusing to do certain actions fits into that purpose? Yes, we have free will, but surely God won’t let us simply frustrate his overall Grand Design (which has to be the case if bad things happening to us are to have a purpose). So, then, how do we determine which negative actions are punishments aimed at guiding us back to the right path and which ones are God’s purpose working through us? So Bannister would be stuck between things being so determined by God’s Plan that we need do nothing, and our being able to frustrate God’s Plan while being unable to understand something so complex well enough to figure out what to do.

And from this we can get the mirroring problems with each side. Bannister gives us a set purpose supported by the universe, at the cost of that purpose being too large for us to actually interact with. Seidensticker’s personal view of meaning is understandable, but is so personal that we might find ourselves changing it constantly. Both run into the issue that their views, to work, can’t follow from our personal worldviews: Bannister’s has to come from what God wants, and if Seidensticker’s follows from our worldview we’d be stuck if our personal circumstances and our worldview produce a meaning that cannot be achieved, as the only way to resolve the issue would be to change our worldviews. But surely our meaning of life has to be tied to our personal worldviews in some way. If it isn’t, then for Bannister we again wonder how that can be my meaning, and for Seidensticker it becomes simply random selection, and so doesn’t have the importance required to satisfy a desire for a meaningful life.

Unlike morality, I haven’t spent a lot of time thinking about the meaning of life. However, let me take a stab at a non-God and arguably non-objective meaning of life. I propose that the meaning of life for a human being is to live the best possible life you can, where the best possible life is determined by your worldview. This allows us to change our approaches and even lets the universe cause us to fail without our having to change our meaning. To return to the example, our penguin certainly didn’t expect to get eaten at that point, but he still would have achieved the meaning of his life: to live as good a life as possible. Getting eaten at that point doesn’t change that; it’s just what happened, and the assessment of whether the penguin achieved the meaning of his life is judged by what happened up to the point where he died.

Sure, there are problems with this idea, but it’s at least a credible example of a meaning that can work. Let’s look at Seidensticker’s direct response to this:

Next, he considers the fate of the penguin—eaten just as he was pontificating about the meaning he had for his life.

Yeah. Shit happens. It could’ve been our hero who got eaten instead. What’s your point?

Yeah … that’s not a reply, and there’s clearly more of a point there than Seidensticker recognizes.

And another familiar argument:

For one of his “problems,” he contrasts meaning in a book, where we can ask the author to resolve differences in interpretation, with an authorless universe where we’re on our own for finding meaning. “Claiming that we have found the meaning is utter nonsense.”

Right—that’s not my claim. But Bannister is living in a glass house. He does claim to know the meaning of life, but his source is the Bible, a book for which there is no the meaning because Christians themselves can’t interpret it unambiguously.

Just because people don’t, at least currently, agree on the answer doesn’t mean there isn’t one. Moreover, it should be clear from the quote that Bannister’s reply to this will be “Which is why we need to ask the author … in this case, God”. Seidensticker will of course make great hay over us not really having a way to ask the author, since the author — God — doesn’t really answer directly and stays hidden, but this attack on the Bible is misplaced and completely misses the point.

I’ll skip the rest of Seidensticker’s ranty replies, as he continually refuses to give any notion of meaning and just rants about how Bannister is wrong. Let me address, then, the cases Bannister gives where he says that if you put your idea of meaning in God, you have a better idea of meaning. The first one:

Who am I? You aren’t an accident but were fashioned by God. I was fashioned by God to burn forever in hell? That’s what your book says is the fate of most of us. Jesus said, “Small is the gate and narrow the road that leads to life, and only a few find it” (Matthew 7:14). Thanks, God.

Objection! Non-responsive! This has nothing to do with meaning. If your meaning is to end up in Hell, then it is. But the quote and the implication are that we have a choice in that, and Bannister would argue that we need to live up to the purpose God has for us if we want to avoid that fate, which gets back to the free will issue raised above. This is taking a shot at Christianity, not showing that putting your meaning in God fails to make for a more satisfying idea of meaning overall.

Do I matter? “God was willing to pay an incredible price for each one of us.” An incredible price? Nonsense. Jesus popped back into existence a day and a half after “dying.” The sacrifice narrative is incoherent and embarrassing (more here and here).

Um, willingly dying and suffering just to redeem us, I think, counts as “an incredible price”, even if he only stayed dead for a day and a half — and, if Christianity is right, any similar sacrifice we might make might mean that we don’t stay dead any longer. Again, the idea is indeed that if Christianity is right God cares about us, and so we matter. Seidensticker needs to demonstrate that he can find a meaning that everyone will accept that also means that we matter. He doesn’t even try.

Why am I here? Our purpose “is to know God and enjoy him forever.” Seriously? Yeah, that’s a purpose that will put a spring in my step. Not to help other people, not to make the world a better place, not to eliminate smallpox, but to enjoy God, who won’t get off the couch to make his mere existence obvious.

It’s a set purpose that can be true. What do you have to offer? Why should anyone accept a purpose of helping other people or making the world a better place? And under your view, how are we to come to the conclusion of what our purpose should be?

Can I make a difference? We can be part of God’s greater purpose. That atheism thing is sounding better all the time. Instead of brainlessly showing up to get an assignment from the foreman, we’re on our own. We are empowered to find our purpose rather than have it forced upon us. Yes, that can be daunting. Yes, we might get halfway through life and realize that we’d squandered much of it. But the upsides are so much greater because there’s a downside. Because we can screw up, it makes the successes that much more significant. And we have ourselves to congratulate for our success.

In order to find a purpose, there has to be one to find, which makes it objective: the idea is that there has to be at least a right answer for us. And why should we admit that we’ve squandered our life, instead of simply redefining our purpose to match what we did do, if meaning is to be left up to us? Seidensticker, as usual, contradicts himself by assuming there is a right answer and that we can’t come to any conclusion we want while insisting that meaning is left entirely up to us and isn’t objective. It is at least very difficult to find a way to make those two views consistent with each other.

Seidensticker then turns to the question of nihilism:

Of course not. Citing his oft-mentioned but ill-supported claim that the only meaning is objective meaning, he calls atheism, not cake, but “the soggy digestive biscuit of grim nihilistic despair.”

Wrong again. You can try to find someone to impose this on, but that’s not me. Ah, well—so much for the possibility of evidence

But perhaps Seidensticker only avoids that by deluding himself about what atheism implies about meaning. He certainly has given us no real reason and no real method to determine meaning for ourselves, so we have to wonder if he has really achieved meaning at all.

Again, as we saw last time, Seidensticker’s defense of the arguments is simply to say that the arguments are right and Bannister is wrong, with lots of shots at Bannister tossed in. That is not the way to defend arguments, especially if you want to insist that your ideas are true and reflect reality and are evidence-based, as Seidensticker does. So, another case where Seidensticker doesn’t even defend the arguments he is purportedly defending.

Female Privilege …

March 17, 2017

So, let me shift for a bit to discussions of feminism, specifically by looking at this post from Everyday Feminism about female privilege by Nikita Redkar. As you might have guessed, the author is going to try to argue against the idea of female privilege by listing seven claimed examples of female privilege and showing that they aren’t really privileges. But let’s start with what she thinks is the key thing to consider when determining if something counts as privilege:

Yet unlike male privilege, “female privilege” corners women into benefiting from a much smaller, domestic sphere, rather than the system at large.

When people refer to “female privilege,” they’re likely referring to the positive counterpart of a male non-privilege. It’s definitely true that men experience social injustices – nobody’s lives are perfect. But a lot of these non-privileges – such as expecting men to stifle emotions or providing for families – aren’t indicative of female privilege because women are not inherently benefiting from what men are disadvantaged by.

The problem is that when you try to apply that definition to examples of “male privilege”, it doesn’t seem to hold water either … or, at least, it will only work in a way that applies to her examples of female privilege, too. From my understanding, for something to count as a social privilege it has to be systemic, certainly. But being “systemic” doesn’t mean that the benefit applies in all parts of the system, but merely that it is created and enforced by the system itself. After all, benefits like the presumption of being career-minded or of being the provider actually do disadvantage men in the domestic sphere — as that is what the arguments for “female privilege” explicitly assert — which is surely part of a patriarchal system. Additionally, the presumption that a man is going to be the breadwinner, which would give him advantages in getting a job, doesn’t necessarily hurt women as a group; in fact, if that man is married then it will certainly benefit his wife if she would rather stay at home with the children and not be the breadwinner. Moreover, a privilege can easily be seen as something that men or women get simply because they are men or women and that is not available to the other gender, which would remove the requirement that the privilege must disadvantage men specifically at all. A privilege can be a benefit that one gender gets that the other doesn’t, or freedom from a disadvantage that the other gender has to face. The symmetry proposed here doesn’t seem valid.

And the main issue is that I think the very idea of “privilege” causes problems and doesn’t make sense in a social context, because those who talk about privilege in reference to patriarchy are incorrect about what patriarchy actually is. Patriarchy was not a system where men subjugated women, but instead a system where men and women were pushed into strictly defined roles based on their gender. If your natural personality and talents lent themselves to being good at and happy in that role, then things were great. If they didn’t, then things were terrible. Most of the privileges — both “male” and “female” — tend to work out precisely that way: if that’s what you want, it’s great, but if it isn’t, then it is very hard to do anything else.

So let’s see how this plays out in the seven examples:

1. Women Receive Chivalry – And Therefore, Free Dinners, Open Doors, and More

The author concedes that these things are benefits, but doesn’t agree that this means that this is female privilege.

But are free drinks and open doors benefitting women in society, as real privileges? They’re not hurting, but they’re not helping either.

The pampering part of chivalry can verge on being unsolicited, which actually means the social constructs women supposedly enjoy are really just positive encouragement for men.

It views women as unequal – either as weaker or placed too high on a pedestal – and men who treat them as such might be expecting to be rewarded for their gentleman-like manners.

I’m not sure how a societal expectation that men provide these benefits unsolicited can mean that it’s not privilege. Presumably a benefit conferred upon you unasked, as if it were simply your due, is more of a privilege than one that you have to ask for. The best Redkar can do here is argue that if women don’t have to ask for it, then they may not want it, and so it wouldn’t be a benefit to that woman. But if a man has no interest in being ambitious and is offered a position in some school club on the presumption that he’d use it to pad his resume, that would also not be a benefit to that man, and yet it would still be considered an example of male privilege.

As for the reward, this ties into the overall idea of dating as a whole. Men are expected to prove their worth to women with things like dinners, arranging an interesting date, and so on and so forth. Based on this, the woman selects the man who can provide her with the material goods she wants and who can also give her an interesting life. This is crucial under patriarchy because women cannot get those things for themselves. So this sort of structure is required to allow men and women to fulfill their specific roles: men are encouraged to provide for themselves, but are then required to provide for women, while women are constrained from providing for themselves, so if a man wants to fulfill his requirement he has to demonstrate to a woman that he can, indeed, provide what she wants or needs. Sure, in practice things didn’t work out this smoothly, but the rough edges fell on both sides of the ledger, with women having little choice in provider and some men having little choice but to take positions that didn’t make them happy or get them what they wanted. But again this is a reflection of men and women being boxed into constraining gender roles.

Also, it is interesting that Redkar leaves out one of the more prevalent examples of chivalry: the idea that men should risk their lives to protect the lives of women. The earliest chivalric romances have men taking on evil knights that have killed or maimed many other men in order to free a woman from captivity and thus win her hand. Even today, if a man and a woman are walking and are attacked, the man is expected to at least stay and fight the attackers off long enough for her to get away and — hopefully — get help. Being able to expect the members of the other gender to risk their lives for yours seems like a pretty strong benefit to me, and it is clearly enforced through the underlying social mechanisms of patriarchy. This expectation is, therefore, just as systemic as the expectation that women don’t care about or need jobs as much as men do.

2. Women Are Under No Pressure to Provide for the Family – Unlike Men

So are women who aren’t under pressure to provide benefiting at the expense of men? Nope, still no dice.

It turns out the very “privilege” of being apathetic about a career is what hurts career-driven women. The patriarchal expectation of men providing for the family is reciprocated by women caring for the children and household.

I don’t see how her comment means that this doesn’t count as “privilege”. As pointed out above, this is just the dual nature of the patriarchal gender roles. Men are presumed to be the providers, and so are expected to provide, and because they are expected to do that they are given preference in the areas they need to fulfill that role. On the flip side, women are presumed to be caring for the children and the household, and so get preference in those areas. Redkar’s sixth point is about women getting preference in custody of their children in the case of a divorce, a preference that follows precisely from women being seen as the ones caring for the children. If a man would rather raise the children than provide, he faces social pressure, and if a woman wants to be the provider rather than raise the children, she faces social pressure. So the two definitely seem pretty complementary to me, so much so that I can’t see how to argue that this is not female privilege while still maintaining that the equivalent is male privilege.

The influence of feminism, however, adds another wrinkle to this, in that a woman can choose to focus on her career without also taking on the expectation of being the main provider. Feminism has long advocated for women to care more about their careers because it is better for the women if they do — it will leave them better off financially in the case of a divorce and can provide fulfillment — but hasn’t advocated for women to take on or even share the burden of being the provider. Thus, if a woman’s career stagnates, or she decides she hates it and wants to take on something else, she doesn’t face any stigma of risking her family for those choices like a man would. If both are working and both lose their jobs, that will be seen as a failure for him and not for her. While it may be a struggle, women at least have the benefit of being able to aim for what they want to do without facing social stigma over it, while men are constantly challenged to take the jobs that best provide for their family, even if they don’t want those jobs.

3. On That Note, If Women Don’t Feel Like Working, They Can Just Marry Rich

Assuming a woman can throw in the towel at a moment’s notice and marry a rich partner is an incredibly sexist assumption.

Not only does it endorse an odd reality in which rich men are available in endless quantities and for marriage on-demand, but it also caters to politics of desire, something not all women can benefit from.

So no, the answer to workplace discrimination or unequal pay isn’t to marry a richer spouse.

But that’s not what the privilege is claimed to be. It is essentially the same as the one above: a woman who aspires to be the wife of someone who can provide for her, without her providing any direct income to the relationship, does not face as much social disapproval as a man in the same situation would. That doesn’t require her to find a very rich man, but only someone who makes enough to support the family without her having to work. Since the expectation under patriarchy is that men will strive to be able to do exactly that, there are far more choices out there than Redkar accepts.

Redkar’s response here strikes me as unresponsive. It’s too shallow to work as an argument that women don’t actually have that benefit, but doesn’t address the underlying argument for this being a benefit women get due to their gender.

4. Women Are Accepted as Emotional Beings

This instance is yet another example of how the patriarchy chastises men for showing signs of weakness – or, in other words, acting like a woman.

The very phrases of “man up” and “take it like a man” may as well just say, “Don’t be like a woman!”

Men are taught from an early age that women are weaker and emotional, and that so much as a teardrop will chip away at masculinity. It’s an unfair burden for men to cage emotions, but it’s also done at the expense of women.

By viewing an open acceptance of women’s emotions as a “privilege,” it only reinforces women as being a lesser gender and placing an inhuman hardship on a very fragile male ego.

This point would work if it wasn’t the case that women are also chastised for being too much like men under patriarchy. While comments like “the weaker sex” permeated patriarchy, underneath it all men were not supposed to act like women and women were not supposed to act like men. Sticking things like ambition into the male side restricts women who are ambitious, but sticking emotion into the female side restricts men who need to show emotion. And arguably the latter is worse because psychologically men are forced to address emotional issues in very unhealthy ways. That women are indeed allowed and even encouraged to show emotion benefits them in the situations where that is a good thing just as men being allowed and encouraged to be ambitious benefits them in those circumstances. Again, it is hard to see how to deny emotion as a female privilege without also denying ambition as a male privilege.

5. Women Have a Higher Chance of Getting Accepted into College

But are women getting accepted into colleges at the expense of men? Not necessarily.

In the past fifty years, women have begun to take over jobs traditionally held by men: doctors, lawyers, engineers, and other specialized career paths that require the successful attainment of a college degree.

At the same time, women are also dominating the fields of jobs traditionally considered “female”: teachers, nurses, administrative assistants, and so on.

Elisa Olivieri, PhD, concluded this notion of why women outnumber men in colleges: Jobs seen as “manly” – namely, manual labor jobs – don’t require college degrees. “Feminine” jobs like nursing and teaching, on the other hand, do.

Olivieri calculated that the biggest obstacle keeping men out of college may just be society’s stigma against gendered jobs.

This is actually a privilege added directly by feminism. Feminism has pushed for and made it more acceptable for women to enter traditionally male fields. However, it has done little to make it acceptable for men to enter traditionally female fields, and has possibly even indirectly increased the stigma towards them by pushing women to enter the male jobs that were considered “superior”, maintaining the “superior/inferior” divide between male and female jobs. As such, women are free to select from all of the college offerings without excessive stigma, while men are not. As our economy shifts towards skilled and educated labour as opposed to manual labour, this hurts the economic prospects of men … while they are still expected to be the main providers for their families. The ability to enter any career that strikes your fancy, no matter whether it is seen as traditional for your gender or not, is clearly a benefit, and follows from the old patriarchal divisions that feminism has removed for women — or, at least, has worked hard to remove — but hasn’t removed for men.

6. Women Are More Likely to Win Child Custody Battles

One of the biggest myths against marriage equality is the same underlying notion behind the myth of women being more likely to win child custody battles: that mothers are absolutely necessary in a child’s development.

Statistics show that women are far more likely to win custody of children in a divorce, yes. But they are also far more likely to ask for it.

One of the main reasons for this is that men don’t ask for custody unless they have really strong reasons for it, because they are told that they will not win. To use that in any way as an argument that this isn’t female privilege is like dismissing the underrepresentation of women in science by pointing out that fewer women apply for science programs in universities. No one would buy it in that case, and we shouldn’t buy it here either, because in both cases the underlying issue is the social pressure that says that they aren’t good at it, can’t do it, and shouldn’t do it.

7. Men Are More Likely to Die of Suicide

Although it’s still unclear as to why men use more deadlier methods to end their lives, it is drastically different to the traditional approaches of women who are suicidal. The culture of toxic masculinity and expectations to preserve characteristics of socially prescribed manliness could be partly to blame.

Asserting that this statistic is evidence of female privilege is false. Because women are not gaining advantage from the higher suicide rates of men – no one is.

When I’ve seen this used, it’s used less as an example of direct privilege and more as an argument based around a couple of points:

1) Women die less often because they use it as a cry for help, and in our society women who cry for help get it. Men die more often because they don’t try to use it as a cry for help, feeling, at least, that they wouldn’t get it.

2) More men die from suicide, but we aren’t doing more to relieve depression and suicidal thoughts in them, and are instead focusing on women, even though fewer women die from suicide.

If there’s a privilege here, it’s one of the oft-cited ones: society considers women’s lives more valuable than men’s. That’s why that higher death rate doesn’t trigger the expected response; we care less when men die than when women do. On its own, this isn’t a particularly good example of female privilege, as you need to unpack a lot to make it fit the context. Redkar, of course, addresses it literally and does none of that unpacking, even though if she had she could have raised real questions about even the points I listed above.

At any rate, overall I don’t see how Redkar’s arguments work to refute the idea of female privilege without also weakening the idea of male privilege. It seems that she starts with the presumption that the system oppresses women to the benefit of men, and then if she can find any way to claim that something is still part of that oppressive system then she concludes that it can’t be an example of female privilege. But this again amounts to taking the two sides of patriarchy, defining one side as superior to the other, and then using that definition to argue that that side really is better off. Which is exactly the sort of move that even Redkar has to admit is what patriarchy gets wrong.

So, sure, we can nitpick over what really counts as “privilege”, but that ends up being nothing more than, well, nitpicking. Women get benefits and detriments simply for being women, and those benefits and detriments are the complete inverse of the ones men get simply for being men. That’s what patriarchy is. The sooner we realize this and stop trying to declare one side better off than the other, the sooner we can eliminate the incorrect presumptions that drove the system in the first place.