Archive for the ‘Philosophy’ Category

Higher Taxes Aren’t Good In and Of Themselves …

August 14, 2014

So, Ophelia Benson at Butterflies and Wheels liked part of an interview with a Swedish actor:

Mr. Skarsgård, where do you live?

I live in Sweden because the taxes are higher, nobody is starving, good health care, free schools and universities. It’s a civilized country and I like that.

You prefer paying higher taxes?

Of course. If you make a lot of money like I do you should pay higher taxes. Everybody should have the possibility to go to school, and university, and have good healthcare.

She comments on it this way:

Goodness. How reasonable, and how rare.

Well, it’s a good thing that it’s rare, because it isn’t that reasonable. The reason is that his living there isn’t, in fact, because he pays higher taxes in Sweden … or, if it is, then he’s actually being quite unreasonable. Either way, something’s missing here.

Now, I’m sure some will comment that I only say this because I’m caught up in some kind of uber-capitalist notions about what society should do and that taxes are evil, and if I only could see the light that the people in these nations have already seen this wouldn’t seem so odd or unreasonable to me. I’ll counter that with this little thought experiment:

Imagine that you have two countries. Both countries have exactly the same quality of social programs: they have excellent schools, no one is starving, they have free health care, and so on and so forth. However, country A has lower taxes than country B. Which country would you rather live in?

If you say country B, then, well, I’d like to see a good argument for why, because rationally country A is the better place to live. You get all of the social program goodies of country B, and get to keep more of your own income to pursue your own interests. How could it not be better?

The reason someone could think that Skarsgård’s comment about preferring Sweden because he pays higher taxes isn’t just an utterly irrational statement is the correlation between tax rates and social programs. We all understand — or at least strongly believe — that, in general, if you pay higher taxes you have more social programs, and if you pay lower taxes you have fewer. And this is generally the case, because if a government is getting the resources it needs to provide social programs from taxes, then the higher its taxes, the more social programs it can provide, and the lower its taxes, the fewer it can provide. Thus, we expect a range between 100% taxation, with the government providing everything for you, and 0% taxation, with nothing that looks like a government at all.

While many people will easily see the latter case as bad, the former isn’t good either. Lower taxes provide people with discretionary spending, in the sense that it’s money they can use to get what they personally want or need. If the only way to get anything is to get it free from the government, then the only things you will be able to get are the things the government provides. You’d better hope that you want what it wants to provide, because if you have non-standard desires you may find yourself out of luck. On the other hand, if the government provides nothing, then you can only get what you can afford to pay for yourself, no matter how badly you need it, or how badly everyone needs it. A capitalist system may provide, but your only guarantee will be what you can afford; if you can’t afford it, that’s likely tough luck. And the instant people get together to pool their resources to build what they need, you start to get something that looks a lot like a government … and start introducing things that look a lot like taxes to fund it. Even user fees would quickly grow into general taxation, since it would be difficult and expensive to track who’s using what, how often, and at what value.

Neither extreme is actually good, so what we really want is a mix of the two: taxes to fund the social programs that pretty much everyone agrees they want and/or need, but taxes kept low enough that people have room to spend their own money on the things they need and, also, so that there aren’t too many cases where money is taken from them and spent on things that they’ll never want, never need, or that they disapprove of. Taxes, in general, are used to provide the things that we really need provided to maintain the social contract and a functioning society; people will always take it badly — and rightly so — if taxes are used to fund the personal ideals of politicians, or of the majority, or even of a minority of people.

A lot of complaints about taxes and tax increases, in my opinion, come from a cynicism or distrust: a suspicion that the government doesn’t need the money to provide necessary services, and is instead using it to promote itself or specific causes that it favours (and many taxpayers don’t). A lot of the complaints about foreign aid, for example, are complaints that the government is spending a lot of money in foreign countries while not providing sufficient social programs in its own country … you know, the one that contains the people who are paying the taxes. And a lot of the complaints about funding special interest groups are the same: the causes that the government likes get money, but things that impact more people or are supported by more people don’t, and surely there are better things to do with that money than to promote a cause that, if it was worth anything at all, could be funded by the people who support it instead of with general taxpayer dollars.

Ultimately, it is critical in a democratic society — or, probably, any society with taxpayers for that matter — that people feel that the money they pay in taxes is not wasted or spent on things that a government shouldn’t be providing. If the government didn’t need that money to provide the essential services that we need a government to provide, then it should return that money to the taxpayers to let them support causes they want to support and get things that they want to get. Because of this, it’s critical that taxpayers feel that they have a say in where their tax dollars go, to ensure that they aren’t giving money to a government that is using it to gain more power for itself, enrich itself, or support causes that it favours but won’t fund itself. Hence the importance of a democracy, which does provide that feeling. But note that the more taxes someone pays, the more they’re going to want to make sure that that money is used properly, because everyone feels that if the government isn’t going to use their money wisely, they themselves have uses for it that would certainly, at least, benefit them more than whatever the government is going to do with it.

Which, then, reveals a major problem with having the wealthy pay more taxes than everyone else. If they note that they fund the government’s activities to a disproportionate degree, they’ll want a say in how that funding is used — and therefore, in that government’s activities — to a degree that matches the funding they’re putting into it. This is a perfectly natural response, as in most things we think it fair that someone who is footing most of the bills gets a bigger say in what is getting bought, at least to avoid people racking up the bills because they don’t actually have to pay them. But the wealthy can’t politically get more of a say in a democracy, and so they’ll instead demand that they only pay as much as everyone else does. The choices, then, are to lower their taxes, or to let them have more influence over what programs get funded.

Now, one way around this is to argue that there are certain things that need to be funded, that the government needs a certain amount of funding to do that, and that taking equal proportions from everyone would unduly burden people with lower incomes while being less of a burden for those with higher incomes, and so we can have higher taxes on people with higher incomes. And this works perfectly well … as long as the government is using that money to a) provide those essentials and b) provide only those essentials. So the deal works as long as those who have higher incomes believe that, and the government can show that it is doing that. As soon as that is no longer the case … the deal breaks, and people with higher incomes can again rightly complain about being taken advantage of: the government justifies taking more of their earnings on the basis that it needs the money to provide services that it isn’t providing, while it provides services that it doesn’t need to provide. And we’re right back where we started.

Ultimately, at the end of the day, all groups, no matter what their income, need a proper and fair balance of taxation versus social programs. Where this line is drawn will depend on the society, of course, but in general the government has to offer the social programs the people need, not offer the programs that people don’t want or would rather fund themselves, and then set tax rates specifically to fund the programs that the society agrees it needs. Anything beyond that breaks the agreement about what taxes are for, and shouldn’t be seen in a free and democratic society.

The Sound of Logic …

July 31, 2014

As the regulars on FTB are taking Dawkins to task over his discussions of logic and emotion, there’s a lot of talk about logic on those sites. P.Z. Myers is the latest to get into the fray, claiming that pure logic can or does lead to things like the firing of flechette bombs at Palestinian children:

We can stand aloof from the events and carry out thought exercises, and we can carefully weigh the pros and cons of war—this side did this horrible thing, that side did that horrible thing, this side has this worthy cause, that side has that worthy cause—and we can attempt to calculate who is slightly better and who is slightly worse, although even there it’s striking how often different people seem to come up with completely different sums, as if maybe, somehow human lives resist being reduced to simple numbers. Let us reason together, you say; if only we could get everyone to look at the situation logically, if only everyone would be a dispassionate observer like me, if only everyone would sit back and coldly analyze all possible actions to arrive at an optimal conclusion that maximizes idealized outcomes…

…and then we arrive at this moment where all the brilliant science and technology of our civilization culminates in this beautifully intricate weapon, designed, machined and assembled by highly educated teams of engineers and executives and politicians, aimed at a small child. One human being, persuaded by the moral calculus of their side that this action is a logical necessity, pushes a button and turns another innocent human being into shredded meat.

We don’t need any more logic. What we need now is more appreciation for the value of life.

As you read through the comments, the justification for claiming that pure logic leads to this is one that I find disturbingly common: commenters say that they can make a logically valid argument with whatever horrible or insane proposition as its conclusion, and claim that therefore “pure logic” validates thinking that the conclusion is true. And while formal logic classes do spend a lot of time pointing out the importance of logical validity, and how logical validity does not depend on the truth of the premises or even the conclusion, if you actually learn what logic is you’ll understand that that isn’t where logic stops.

So, then, what does it mean to say that a logical argument is valid? Simply this: if the premises are all true, then the conclusion cannot be false (i.e., it must be true). Now, what we want from a logical argument is to know which propositions are true and which are false. So, given a logically valid argument, can I say anything about the actual truth of the conclusion just by knowing that the argument is logically valid? No. All I know is that the conclusion is true if the premises are true (it’s not even an if and only if, because it is possible in a valid argument for at least one of the premises to be false and the conclusion to still be true, on the basis of some other argument), but without knowing whether the premises are all true I can’t say whether the conclusion itself is true or false.

Thus, the criterion of a logical argument that everyone keeps forgetting when they talk about “pure logic”: the soundness of the argument. An argument is sound, roughly, if the argument is valid and the premises are all true. If you have an argument that is valid and sound (the conclusion is true if the premises are all true, and the premises are all true), then you know that the conclusion is true. If it isn’t, then you don’t, at least not from that argument.
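The validity/soundness distinction is mechanical enough to check by brute force, at least for simple propositional arguments. Here’s a minimal sketch in Python (the helper names are my own, purely illustrative, not from any logic library): validity is tested by enumerating every truth assignment and hunting for a counterexample, while soundness is exactly the part no truth table can settle, since it depends on whether the premises are actually true of the world.

```python
from itertools import product

def implies(a, b):
    # Material conditional: "a -> b" is false only when a is true and b is false.
    return (not a) or b

def is_valid(premises, conclusion, n_vars):
    """An argument is valid iff NO assignment of truth values makes
    every premise true while making the conclusion false."""
    for row in product([False, True], repeat=n_vars):
        if all(prem(*row) for prem in premises) and not conclusion(*row):
            return False  # found a counterexample
    return True

# Modus ponens: P -> Q, P, therefore Q — a valid form.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q, 2))   # True

# Affirming the consequent: P -> Q, Q, therefore P — an invalid form.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p, 2))   # False
```

Note that the second argument fails validity outright, while a modus ponens with a false premise ("if the moon is cheese, pigs fly; the moon is cheese; therefore pigs fly") would pass this check and still be unsound — which is the whole point: the truth table certifies the form, not the world.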

If I had a pure and fully logical argument that the moral thing to do was to use flechette bombs (that is, the argument’s conclusion was “The moral thing to do is to use flechette bombs”, the argument followed from premises such that if all of the premises were true the conclusion had to be true, and the premises were indeed known to be true), and if I were the sort of person who would indeed choose to act morally, then I would dearly hope that no pictures of dead children would sway me from accepting that “pure logic” argument and, in fact, using them. I can’t think of a valid and sound argument for that, because I don’t think there is one once you include the unstated moral premises in the debate. And thank God for that. But contrary to Myers’ assertions, we don’t need less logic; we need more logic. We need to remind people that simply making a logically valid argument doesn’t mean that you are reasonable in thinking the conclusion true, which is one of the first things formal logic classes teach you about logical validity. And we need to remind people that they have to include all of their hidden premises, especially when dealing with morality. It’s the failure of people to use pure logic that causes problems like the ones Myers talks about, not the fact that people use it too much instead of their own emotional reactions.

If you can make a valid and sound argument for a conclusion, then no amount of emotion, concern, or care ought to convince anyone that that conclusion is actually false. If that happens, you are being irrational, and dangerously so. But you cannot forget that soundness part; validity is not enough.

Reason, Emotion, Experience and the Right Answer

July 30, 2014

So, after things had settled down a very small bit between Richard Dawkins and the Gnu Atheist/FTB/Atheism+ group, it all started again over Dawkins using an example where he argued that date rape was not as serious as stranger rape, in an attempt to provide an example of how saying that X > Y (i.e., more serious, worse, etc.) doesn’t mean that that person thinks that Y is a good thing. When he did that, people jumped on him for claiming that date rape was not as serious as stranger rape — with some justification — and Dawkins replied with, essentially, a sigh and a “That wasn’t my point” type of response.

And then he made a post about it, which Ophelia Benson replied to, which asks this question that Dawkins invites people to consider and that Benson is considering:

Are there kingdoms of emotion where logic is taboo, dare not show its face, zones where reason is too intimidated to speak?

After eliminating interpersonal cases, Benson goes on to bring it down to discourse:

Discourse by definition rests on at least minimal reason and logic. But does that mean emotion must be banished?

Being Stoic-leaning, I’m inclined to answer that with a simple “Yes”. But even I realize that that’s far too trite an answer. And yet, the answer depends a lot on what is meant by saying that emotion must be “banished”. A reasoned and logical decision shouldn’t be settled by emotion or an appeal to emotion, but that doesn’t mean that the facts of people’s emotional states are always going to be irrelevant. For morality, I think they are, but others disagree. It is reasonable, however, to think that when you want to predict or influence the behaviour of people who are not Stoics, you will need to consider their emotions. So the key, I think, is this: the facts of emotions — what emotional states people are in, what emotional states they will be in, and so on — will be relevant to logical and rational discourse as facts, as states of the world. But emotions are not arguments, nor should they attempt to stand in for them. So, in a rational discourse, an emotion is on the same level as, say, a colour or a solid object or a conceptual truth: something that is true and is assessed for its relevance to the argument. But an emotional state does not make an argument true in and of itself; emotional states only affect the truth of an argument if joined to it by solid logic and reasoning.

Benson goes on to talk about how she thinks emotion can impact discourse:

But more to the point, it isn’t just random daft meaningless “emotion” that make people wary of discussions of, say, abortion. It’s emotion about things like consequences and experience and the difference between being someone vulnerable to the harm under discussion and being someone who is not vulnerable to it.
So we could have another discussion about the morality of trying to discuss moral issues that have huge impacts on one kind of people but no impact on you. Does that make a difference? Should it make a difference? Is it possible that, for instance, a very rich person who has always been very rich and has no personal experience at all of what it’s like to be poor – that such a person would have a shallow understanding of the consequences of, say, a wage cut for bottom-tier workers in a company? Should very rich people be the only people deciding what wages get paid? Is that a question about reason and logic, or emotion, or both?

They might, indeed, have a shallow view of the experiences of the very poor, and thus might be missing some of the relevant facts … particularly, those facts of the experiences of the very poor. And assuming that those experiences are relevant, then they might be missing important facets of the situation that will make for a bad logical argument. However, the flip side is this: if those facts are relevant to the argument at hand, someone ought to be able to convey them in a way that doesn’t require that very rich person to have had those experiences. The key is this: if you are making an argument that is objectively true, then it has to depend on facts that are objective as well, meaning that anyone can get access to them. If you are relying on subjective facts, then only those who have access to those facts can see the truth of the argument, and so it has now become a subjective argument. And a logical and rational argument is objective, at least for any argument that you want other people to accept using logic and reason.

So if you end up arguing that people have to have experienced what you’ve experienced to see the truth of your argument, at that point you have to consider that you have either made a bad argument, or an irrational/illogical one. (Note, sometimes you want irrational/illogical arguments, or at least subjective ones. But the case listed here is not one of those; we should be able to logically assess whether the wage cut is the right thing to do by means that are accessible to everyone.)

For an example: suppose you get a group of prosperous comfortable well-fed men having a rational logical discussion of rape. Is it excessively emotional to point out that a group like that would be simply talking over the heads of the people most vulnerable to rape? I don’t think it is. I don’t think it’s excessively emotional to point out that there’s something blood-chilling about seeing people who are safe talk calmly about the risks or tragedies faced by people who aren’t like them.

Would those people be talking over the heads of the people most vulnerable to rape … or simply telling them what they don’t want to hear? And I personally think that people should be able to, in general, talk calmly about issues like this without it being considered “blood-chilling” or in any way wrong. It is indeed the rational ideal that such arguments should be made calmly and, more importantly, without bias. Those who are most impacted by the choices are clearly biased, and it’s perfectly natural that they would have a bias. But if the people who have a vested interest in the outcome of a discussion are the only or main ones who can argue over it, they risk introducing bias into the arguments, even unconsciously. And a rational, logical and, dare I say, scientific argument wants to remove bias as much as is humanly possible. Again, it comes back to what I said above: if the argument isn’t one that’s accessible and demonstrable to everyone, then it’s at best subjective and at worst illogical/irrational.

Note again that this doesn’t mean that their concerns are irrelevant; they should be demonstrable facts. But the fact that people who are in those situations have those concerns says nothing, in and of itself, about the truth of the argument. It seems to me that hinting that there’s something wrong or something missing if a bunch of uninvolved people had a dispassionate discussion of an issue is advocating for a subjective argument: you can’t see the truth of the argument unless you’re involved. And that’s wrong. It begs for an answer of “Demonstrate these facts objectively and then we’ll incorporate them into the argument, and see where they take us.”

Also note two things:

1) Dawkins’ actual complaint was about shutting down the argument, or any consideration of it at all, for emotional reasons, which is an even stronger stance than the one I’m talking about here.

2) My take here is a bit stronger than Benson’s phrasing would insist on; she could say that she just wants the situations considered, not to have them trump discussion.

In summary, emotion is not useful as part of the method of rational discourse, and only introduces bias and gets in the way. However, facts about emotions may be necessary to produce the right argument, and so should be limited to that role in rational discourse, not banished entirely. Rational discourse has to depend on things that can be demonstrated to everyone, and emotions and personal experiences can’t. Thus, rational discourse should follow the “Just the facts” model, where sometimes the facts include personal experiences and emotional states.

God in the Age of Science, Part 1

July 26, 2014

So, I did receive “God in the Age of Science?” on July 16th, and immediately read four chapters. And then I got busy and haven’t read any more since. But I’d like to comment briefly on my very first impressions of the book and on the first couple of chapters or so. I have a more detailed commentary planned for chapters 3 and 4 … whenever I get around to it [grin].

First, the tone isn’t particularly aggressive, which is a plus, in my opinion, for a book like this. A few stylistic notes: he doesn’t seem, at least not yet, to share Prinz’s obsession with presenting all of the counter-arguments in as detailed and precise a manner as possible, although he does indeed address counter-arguments, which is nice. He also seems to be fond of Derek Parfit’s style — which, to be fair, is fairly popular in philosophy in general — of stating what he sees as the obvious conclusion to an argument as if it were, well, obvious, except that Philipse does it through rhetorical questions while Parfit uses out-and-out statements. Unfortunately, both of them have a tendency to do it without giving any further explanation, and in cases where the conclusion is not, or at least may not yet be, clear, which always chafes me a bit. Philipse’s approach is a bit less annoying, leaving you wanting more rather than automatically wanting to challenge him, but still it would be better if these things were presented as the results of full arguments rather than as asides.

But that’s not all that important. The big goal in the first section is to demonstrate that you can’t have or rely on revealed knowledge, or perhaps revelation in general, to justify a belief in God; instead, even for revelation you will have to rely on either empirical evidence or reasoning to justify your case, or else you’ll be irrational. This last part is actually pretty controversial, because it risks conflating rational belief with being justified in claiming to know something, and a lot of the arguments, particularly those against reformed revelation (in Chapters 3 and 4), do rely on that. But remember that theism is a belief in the existence of one or more theistic gods, not necessarily a knowledge claim. As someone who doesn’t make the knowledge claim, it’s going to be easy for me to say “Can we believe that rationally, though?” in response to most of this. More on that when I look at chapters 3 and 4 specifically.

The biggest flaw in the first two chapters, though, is probably in his discussion of contradictions in the Bible and how they can’t be resolved through revelation, which for him seems to mean “reading the Bible and thinking really hard about it”, which may include noting the contradictions and resolving them through the text. While few people will deny that there are at least some difficult things to resolve in the Bible, for this point to work they have to be virtually impossible to resolve, and his examples aren’t that hard to resolve. For example, he cites a contradiction between Jesus and Paul over whether works or following the Laws are what is required to get to heaven, and claims that this one is unresolvable. Except it’s very easy to resolve for most Christians: if there’s a contradiction between something that Paul said and something that Jesus himself said, you go with Jesus, and Paul either got it wrong or should be taken another way. Philipse could have sought a similar contradiction between the purported words of Jesus himself, but I suspect that those would be easier to reconcile on interpretation. Now, for Philipse’s purposes, disowning Paul’s revelations might support not trusting revelation, since there are times when it gets things wrong and you don’t really know when it’s getting things wrong or not; but that isn’t the tack he’s taking in those chapters, which is about the contradictions being unresolvable … and so he leaves himself open to the counter of “Says who?”.

If he took revelation as a method that had to reveal itself directly to the person in full form, then he’d have a point. But as soon as he allows for reflection, any philosopher should know that there are many, many ways to resolve seeming contradictions in a work (seeing how that’s done for Kant, for example), and so his comment that taking a revelatory approach to the Bible leads to unresolvable contradictions is weaker than it needs to be to make his point. And if he takes that approach as requiring reason, and so doing natural theology, then he seems to contradict his original discussions and, in fact, the reformed approach to revelation that he discusses in chapters 3 and 4 as if it really could save revelation. So contradictions, though a popular argument, don’t seem to support his case as well as he’d like them to. But this shouldn’t be a big issue for his overall thesis, and so probably isn’t worth worrying about.

It’s in …

July 16, 2014

So, Amazon is telling me that my copy of “God in the Age of Science? A Critique of Religious Reason.” is waiting for me at the post office, so I should get it — and start reading it — tonight. I’ll try to do my “one chapter a night” thing, although I might do a bit more than that tonight since I ought to have time. I’m considering posting thoughts on each chapter as I think of it, but I hate doing that since I think that for most works it’s important to understand the whole point before breaking it down into its parts, although for a lot of books each point is independent and so can be addressed that way. So we’ll see.

But my biggest question before starting is: is this book, how can I put this, aggressive? Coyne has a strong tendency to like and recommend books and articles that utilize snark and mockery a lot, as much if not more than they utilize arguments. The initial description didn’t sound too snarky, but is this going to be a case where Coyne’s recommendation of a good book is one that doesn’t, in fact, mock the arguments but instead focuses more on addressing them? Only time will tell.

Challenge Accepted …

July 10, 2014

There has long been a line of argumentation that many atheists like to use that relates to the traditional Courtier’s Reply, which goes something like this: you keep telling us atheists that we’re ignorant of theism and can’t dismiss it until we’ve considered all of the best arguments for theism. But what about the best arguments for atheism? Can we list off a set of books and arguments that you have to read before you can be considered credible in criticizing atheism?

Now, the theistic point isn’t usually just “You need to read all of these authors”. Most of the initial replies are people pointing out that the atheists tend to talk about particular arguments for or conceptions of God, get it completely wrong, and so really should try to understand the arguments before criticizing and, especially, before mocking the arguments. In other cases, it’s just people pushing their own preferred arguments and conceptions, because there are indeed a number of different ones. Sometimes, it’s both. But there are times when people — whom I’d tend to call “unsophisticated” — really do just toss out books and say read them. While I never approve of such things, I can approve of the underlying sentiment that makes that seem even remotely credible: in order to criticize or reject a position, you really should be well-read in not only what others say about it, but also in what those who support it say about it.

In terms of atheism, I’m doing pretty well. I’ve read Dawkins, Dennett and Harris of the Four Horsemen (I read a debate between Hitchens and someone else once which convinced me that he wasn’t worth my time), I read Kaufmann as suggested by Jerry Coyne (and wasn’t impressed, to say the least; I really should critique the religion part more directly), I’ve read Smith’s initial book, I’ve read Grayling’s take, and some others.

Now, Jerry Coyne is pushing another book:

For a good refutation of the “God off the hook” claim of Ruse, read the philosopher Herman Philipse’s God in the Age of Science? A Critique of Religious Reason. It’s the best attack on theism I know, and though it’s occasionally a hard slog, it’s well worth it. I can’t recommend it highly enough, and if a theist says he/she hasn’t read it, you can rightly say, “Well, then, you can’t bash atheism, because you haven’t dealt with Its Best Arguments.”

Well, if that’s one of the “Best Arguments” … then I shall take up the challenge and deal with it, despite the fact that it really looks like this whole challenge is one that Coyne and other atheists really don’t expect someone to accept. I’ve ordered the book and will read it when it gets in. However, from reading the description on Amazon I can already predict that it will have an uphill climb:

God in the Age of Science? is a critical examination of strategies for the philosophical defence of religious belief. The main options may be presented as the end nodes of a decision tree for religious believers. The faithful can interpret a creedal statement (e.g. “God exists”) either as a truth claim, or otherwise. If it is a truth claim, they can either be warranted to endorse it without evidence, or not. Finally, if evidence is needed, should its evidential support be assessed by the same logical criteria that we use in evaluating evidence in science, or not? Each of these options has been defended by prominent analytic philosophers of religion. In part I Herman Philipse assesses these options and argues that the most promising for believers who want to be justified in accepting their creed in our scientific age is the Bayesian cumulative case strategy developed by Richard Swinburne. Parts II and III are devoted to an in-depth analysis of this case for theism. Using a “strategy of subsidiary arguments”, Philipse concludes (1) that theism cannot be stated meaningfully; (2) that if theism were meaningful, it would have no predictive power concerning existing evidence, so that Bayesian arguments cannot get started; and (3) that if the Bayesian cumulative case strategy did work, one should conclude that atheism is more probable than theism. Philipse provides a careful, rigorous, and original critique of atheism in the world today.

So, what are my issues?

1) It starts from the Bayesian cumulative case strategy of Richard Swinburne, which I’m not familiar with.

2) It relies on Bayesian analysis, which I don’t care for.

3) Even if his third argument concludes that atheism should be considered more probable than theism, I don’t think that what we believe must be whatever we ourselves consider most probable, let alone whatever an abstract Bayesian analysis would deem most probable.

4) Knowledge certainly isn’t settled by probabilities of any kind, so this wouldn’t get to a knowledge claim that atheism is true, and I don’t care much about atheism until atheists can claim to know that God doesn’t exist.

5) His second point, that theism never reaches the stage where a Bayesian analysis can even begin, is underwhelming to me; it only matters if I accept that the Bayesian route is the right one, and since my epistemology is not Bayesian that’s going to be a hard sell.

6) So it will come down to his first point, that the proposition cannot be stated meaningfully. But Christians can point at the Bible, and most religions can point at their own texts, and those seem precisely as meaningful, at least to most people, as “Sherlock Holmes”, a name whose meaning we clearly know. So he must mean something more advanced than that … and I don’t see why that would matter.

I hate starting a book thinking that I’ll hate it, because I find that starting with that attitude almost always ensures that you will, in fact, end up hating it. But I will read it and see if it can convince me.

So … challenge accepted.

God as a Gaseous Vertebrate?

June 21, 2014

A while ago, Jerry Coyne finished reading “The Experience of God” by David Bentley Hart, and made some comments on it that revealed that, yes, he didn’t really understand what a Ground of All Being actually was. I meant to respond to that as a summary, since he didn’t really post a solid review/summary, but anyone who’s been following this blog knows that I get lazy and then don’t reply. Maybe I’ll get back to it one of these days. However, Coyne has now returned to the topic, and after spending a little time listening to Christian radio he compares God — either the folk God or the theological God or, well, it isn’t quite clear what — to what H.L. Mencken called “a vertebrate without substance”, which, when you unpack it and unpack Coyne’s post, seems to mean a God that has human traits but isn’t human, a common criticism that Coyne makes of “sophisticated theology”.

(As an aside, Coyne compliments Mencken as “…a true strident atheist, as good with mockery as was his successor Hitchens”. This leads me to ask “When did mockery become a good argument to convince rational people of your position?”)

Coyne gives this as his main example:

One show, for children, was about a girl who wanted to become a personal trainer, but had shown little talent for the job, and was frustrated because she didn’t know what to do with her life. “I want to be somebody,” she wailed. Her father, who tried to soothe her, had his own problem: he was overweight and was on a diet. Eventually he told her that God would show her the way, but it would take a while, just like the long while he’d have to wait to shed his extra pounds. Then a voice-over came on and gave the lesson: God has plans for all of us, and listens to our needs, but he will effect his plans for us in his own time. We must wait. But we should be reassured that he knows what is good for us, loves us, and will, in time, show us the way.

This God, of course, was humanoid: the emotions he evinced were love, understanding, empathy, and the desire to interfere in our lives so we could be fulfilled. And, of course, he was touted as actually listening to prayer, for the child was told to consider her options “prayerfully”.

He then compares this to Hart’s position:

Those gaseous theologians like David Bentley Hart and Karen Armstrong, of course, decry the concept of such a humanlike God. That’s not the real God, says Hart, and those atheists who argue against it are wasting their time. The real god is ineffable (though somehow Hart knows that He/She/Hir/It loves us); it is a Ground of Being.

Why? Because they think that God can love? Because they think that God can plan, or have emotions, or act in the world? The Ground of Being — as I explained in my review of Hart’s book — is not some completely amorphous blob without properties. For the Thomists, the Ground of All Being is, indeed, the Ground of All Being. It is not only the case that every being exists because it participates in the Ground of All Being, but every positive property only exists because the Ground of All Being has that property. So if we can be said to be capable of love, then the Ground of All Being must be capable of love. If we can plan, so can the Ground of All Being. If we can act in the world, then so can the Ground of All Being.

Now, getting this from Hart’s book would be tough; only by combining it with Feser’s posts and book was I able to get that. But Coyne should have been able to get the answer to this question from it:

What I want to know is this. If Hart and his ilk think that 99% of Christians have the wrong concept of God, why aren’t they trying to correct it? Why are they writing books aimed at fellow scholars instead of, say, the average Christian, or the average Christian child? Why are they wasting time bashing atheists instead of telling their coreligionists—or all religionists—the truth about God?

Now, here’s a quote where Coyne does seem to get the problem that Hart is trying to address:

I listened to two stations, and both of them constantly promoted the idea of God as a gaseous vertebrate—just like us, but more powerful.

Now, Hart was clear in his book that this was indeed the wrong way to look at God, and he in fact called out other theologians, including Plantinga and the modal logic attempts to prove the existence of God, as well as the Ontological Argument. So no one can validly complain that they aren’t trying to correct the misconception. So the only complaints would be that they may write more scholarly works than popular works, and that they take aim at atheists too much. For the former, it’s hardly a valid criticism that they’ve decided to work in intellectual circles instead of aiming at the rank and file, any more than it would be a valid criticism of, say, those studying global warming if they write more academic papers and books aimed at disagreeing scientists and don’t spend a lot of time talking to the mainstream press. For the latter, since Feser and Hart were taking on the New Atheists, who aimed at and still aim at the average person, aiming at them means aiming at a popular or average view as well, and in effect aims at two birds with one stone: taking out the rather weak counter-arguments against God — from their perspective — while clearly pointing out to religious people what the common view of God really is or really implies. Maybe Coyne’s right and they should promote the underlying theory more … but maybe the folk view isn’t as far off of their view as Coyne thinks it is.

At any rate, there is, in general, no gaseous vertebrate here, at least not from the perspective of Thomist theology. There’s nothing really wrong with what those stations said, other than that the analogy risks anthropomorphizing God if taken too far and too literally. Which is a risk of any analogy. The contradiction that Coyne so relies on simply doesn’t exist.

Comment on Ryan Born’s Response to the Moral Landscape Challenge …

June 14, 2014

So, I’ve read Ryan Born’s response to the Moral Landscape Challenge, and want to comment on it a bit here. But first, it must be noted that the biggest problem with attacking Sam Harris’ views is that there really isn’t any kind of central point or analogy in it, no overall moral system that you can attack at one place and bring it down. Instead, Harris has set up multiple fronts, and seems to be willing to stake his entire claim on any one of them at any time, switching between them as necessary to avoid having to address tough arguments. Well, okay, perhaps that isn’t quite fair. Perhaps it is more reasonable to say that instead of having a system that works together to build out a fully-formed moral philosophy, Harris has instead a group of independent statements about morality that he brings together under the umbrella of “morality” but which are all, for the most part, independent. No one could refute all of them in 1000 words, and so one has to pick one to attack. But even if that attack is successful, Harris is open to saying “Well, what about this? You have to refute this to refute my view”, which he does tend to do. In preparing my response, I had at least two other main points that I could have attacked:

1) Argue against his main point by arguing that just because morality may be something that you have to be conscious to have, it doesn’t mean that morality is a property of consciousness.

2) Argue against the health analogy by pointing out that health is a state, not a normative value. You can be healthy without trying to be healthy or valuing it at all, but you can’t be moral without valuing being moral or having your actions motivated by valuing morality.

Ultimately as already seen, I went with moral disagreement. Born took on a fourth principle, that of whether you can subsume morality under science, pointing out that Harris’ basic principle that morality is about the well-being of conscious creatures is a philosophical/conceptual argument, not a scientific/empirical one.

This, to me, is a relatively weak counter. The first reason is that Harris’ main point ends up being essentially this: Given that morality is in at least some way critically defined by or related to the well-being of conscious beings, and given that all properties of conscious beings as conscious beings are explained by their brain, and given that explaining the brain is something that can be done scientifically, then morality is in some way critically defined by or explained by science. Born’s response that the first “given” isn’t scientific doesn’t actually touch this part of the point, and so has to aim at another angle, that the initial given is itself not scientific and needs justification from something other than science.

This leads to the second reason, which is that Harris doesn’t seem to care whether that initial given is scientific or not. First, as we’ve seen when he discusses scientism, Harris will quickly argue that saying that something must be proven philosophically and not scientifically is defining science too narrowly. So if philosophy becomes science, then that given is still justified scientifically. Yes, this isn’t a very good point, because it ends up taking the only really novel thing Harris says — although not new, since naturalization of philosophical claims has been done for at least half a century — and making it meaningless, because it would still allow the normal armchair philosophizing about morality to proceed and might change those discussions not one bit, leaving Harris’ view saying nothing new while attempting to imply that it does. Second, Harris has been consistent in maintaining that he doesn’t really need to actually justify the idea that morality is about maximizing well-being; all of his defenses of that, even in his latest response, end up being that he can’t see any other basis for morality, essentially challenging all comers to prove that something else is right or better or else he must be right. His health and logic examples always boil down to saying “Well, we just take this as a given and we have to take these things as a given, so why not take my initial given as a given?”. So given that he doesn’t seem to care about justifying that initial given, it seems unlikely that he’ll care whether that non-justification is done scientifically or not, even in the narrow or broad sense of the term “science”.

That was why I chose the specific approach I did, aiming it more at Harris himself and what you’d have to do to convince him than in creating a full, formal philosophical argument. The aim was to force Harris to take a question that he’d be sure that there was an actual, objective answer to, but demonstrate that he couldn’t do it without defining and justifying at least some sort of view of well-being, while demonstrating that no physical facts nor facts about the brain would be able to answer that question. Essentially, the only thing critical to all conceptions of his view is that initial given of morality essentially being the well-being of conscious creatures, and destroying that would destroy his view.

What Sam Harris Gets Wrong.

June 11, 2014

Now that the winning essay in Sam Harris’ “Moral Landscape Challenge” has been posted, and wasn’t mine, it’s time for me to post the essay I submitted. I hope to talk about the winning essay and Harris’ response more over the next few days. Anyway, here’s mine:

Sam Harris’ view of the proper morality is essentially that what is moral is what best promotes the well-being of conscious creatures. This is, in and of itself, fairly controversial, but Harris makes a few moves to sidestep some of the more obvious challenges. The first comes from a mostly off-hand comment that most if not all of the rival conceptions of morality also boil down to some form of well-being. The second is that there is room for multiple plateaus of well-being, so that we don’t all have to have the same exact idea of well-being to act morally, which is what allows for those widely differing views of well-being to all grasp at the same idea. However, if Harris is going to have a morality that can properly be called objective, there are going to have to be at least SOME outcomes that are going to be considered moral or immoral regardless of the personal preferences of the individual or culture, or else he’ll have a relativistic moral view. For example, it’s clear that he’ll consider murder and theft as being universally opposed to well-being, even if killing and seizure of property won’t always be. I propose here that we consider this question that we seem to intuitively think has a universal answer and see if a) all ideas of well-being will answer it the same way and b) if they don’t, if we can answer it by appealing to some kind of physical fact. The question is: is it morally permissible for a parent to steal bread to feed their starving children?

Under a loose Utilitarianism, barring a massive cost to the person who owns the bread, the answer is a resounding “Yes”. After all, it seems clear that more suffering will arise from those children starving and possibly starving to death than from the shopkeeper losing one loaf of bread. On the other hand, the Stoics will answer with a resounding “No”. This is because for them Virtue is what provides the most well-being for people, and pleasure, pain and even lives are indifferents, only to be preserved if doing so doesn’t interfere with acting virtuously. Without settling the dispute between ideas of well-being here — which would require far more than appealing to conscious creatures — it looks like we have a nasty clash between two at least potentially proper ideas of well-being. And note that neither of them disputes any of the obvious physical facts; it’s just the idea of what counts as well-being that’s at stake here.

Perhaps we can appeal to psychology, to what most people think is the case. Intuitively, most people side with the Utilitarians, but there is a group of people who don’t: autistics [see http://verbosestoic.wordpress.com/fearlessly-amoral-psychopaths-autistics-and-learning-with-emotion/ for details]. They tend to side with saying that it isn’t right for the parent to steal the bread, at least in part because they think the rules should not be broken. But, we can reply, they clearly have abnormal brain function, and so we can limit ourselves to those who have a properly functioning brain. Well, the problem is that while their brain does not function like everyone else’s, that doesn’t mean that their brain is functioning immorally, or is incapable of morality. Unlike psychopaths, autistics, in general, act properly morally and could certainly be seen as grasping for one of the plateaus that Harris allows for. That they have a different opinion on this topic doesn’t mean that they’re wrong; their “abnormal” brain function might even make them BETTER at answering these sorts of moral decisions than those with more “normal” brains. In order to settle that question, we’d have to know, independently of simply looking at the brain, which view of well-being is in the right or which is in the wrong. Otherwise, we’d have to allow that there is no objective answer to this question, but if we cannot answer this question it seems difficult to see what use such an “objective” morality would be as an objective morality.

The physical facts are not in dispute. The differences in brains are not in dispute. Everyone pretty much agrees with all of the facts in this case, but there is still radical disagreement. What, then, is in dispute? What well-being really means, of course, and what it implies about what you should value. Harris introduces the Moral Landscape to allow for variance in how well-being is defined, so that he doesn’t have to insist on one very specific idea of well-being that almost everyone will disagree with at some point or another. People have to be free to tailor their lives to what they themselves want and value to some extent. However, this notion cannot be stretched to cover the gaps we see here between the Utilitarians, Stoics, autistics, and so on. And yet, surely it should. Harris presents no reason to think that the Stoics or the autistics are simply wrong, but the difference is so vast and the question so important that we cannot simply allow everyone to come to their own conclusions and still have anything that looks like an objective morality.

Once one knows all of the relevant facts about a situation, and the relevant physical facts, and the facts about the brain, it is clear that there can still be major disagreements about moral decisions. These disagreements might also be about questions that cannot be left open to interpretation; they cannot be left as a plateau in the moral landscape. From this, if Harris wishes to have an objective morality, he needs a way to settle the differing ideas of well-being to come up with one overarching one … and that is what philosophy has been trying to do, empirically and conceptually, for thousands of years. Harris’ position, then, ends up not solving the problem, but instead walking itself right back into the problem of value and what it means to be truly moral.

They count only blue cabs …

May 21, 2014

When I was finishing up my Master’s Degree in Philosophy, I sat in on a tutorial with a few Cognitive Science students on Mind. We all had to give individual presentations, and one woman talked about Bayesian reasoning and about the taxicab problem. I found the example massively counter-intuitive, and ended up arguing in E-mail about this with a couple of students over it until everyone got sick of it. This impacted me in two ways:

1) It led to me having a great distrust of Bayesian probability.
2) It confirmed for me something that I had already held to be true about the “Gambler’s Fallacy”, which is that I classify these as “Obi-Wan Fallacies”: what action you should take/what you should believe depends greatly on your point of view.

I was thinking about this again yesterday while hanging around the university waiting for the Alumni office to open, and came to a conclusion about what exactly was wrong with the taxicab problem and why it didn’t work. And then while searching for a good summary of the taxicab problem I found this paper from 1999 that sums that up precisely. Before I summarize it, let me summarize the problem, taken from the appropriate sections here:

In another study done by Tversky and Kahneman, subjects were given the following problem:

“A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. 85% of the cabs in the city are Green and 15% are Blue.

A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time.

What is the probability that the cab involved in the accident was Blue rather than Green knowing that this witness identified it as Blue?”

Most subjects gave probabilities over 50%, and some gave answers over 80%. The correct answer, found using Bayes’ theorem, is lower than these estimates:

* There is a 12% chance (15% times 80%) of the witness correctly identifying a blue cab.
* There is a 17% chance (85% times 20%) of the witness incorrectly identifying a green cab as blue.
* There is therefore a 29% chance (12% plus 17%) the witness will identify the cab as blue.
* This results in a 41% chance (12% divided by 29%) that the cab identified as blue is actually blue.
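For readers who want to check the arithmetic, the four bullets above can be sketched in a few lines of Python (the numbers are simply those from the problem statement; the code is purely illustrative):

```python
# Bayes' theorem applied to the taxicab problem.
p_blue = 0.15      # base rate: 15% of cabs are Blue
p_green = 0.85     # base rate: 85% of cabs are Green
p_correct = 0.80   # witness identifies either colour correctly 80% of the time
p_wrong = 0.20

p_blue_said_blue = p_blue * p_correct     # cab is Blue AND witness says Blue: 0.12
p_green_said_blue = p_green * p_wrong     # cab is Green AND witness says Blue: 0.17
p_said_blue = p_blue_said_blue + p_green_said_blue  # witness says Blue at all: 0.29

# P(cab is Blue | witness says Blue)
posterior = p_blue_said_blue / p_said_blue
print(round(posterior, 2))  # 0.41
```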

Now, to me, the right answer was 80%: the probability that the witness identified it correctly. But, regardless, the answers over 50% seem to indicate this reasoning: it can’t be the case that someone, under the appropriate conditions, can identify the colour of the cab reliably and yet it be somehow more likely that they are identifying the colour of the cab incorrectly in this case. It’s only the Bayesian calculations that say otherwise, but then surely applying Bayes’ theorem here is the wrong way to solve this problem. At the time, I conceded that over time these numbers might work out, because the differing numbers of cabs would result in more mistakes made identifying blue cabs than green ones, but for every individual event it can’t work out that way. So an insurance company might want to use the Bayesian numbers, while a judge looking only at a specific case couldn’t. That, then, made it an Obi-Wan Fallacy. Even trying to run a computer model ran into issues of it depending on how you counted.
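The “depends on how you counted” point can be made concrete with a quick Monte Carlo sketch (hypothetical code, assuming the problem’s 85/15 base rates and 80% witness reliability). Counted over all sightings, the witness is right about 80% of the time; counted only over the sightings where she said “Blue”, she is right only about 41% of the time:

```python
import random

random.seed(1)
trials = 100_000
said_blue = 0
blue_and_said_blue = 0
correct = 0

for _ in range(trials):
    actual = "Blue" if random.random() < 0.15 else "Green"
    # The witness reports the true colour 80% of the time, for either colour.
    if random.random() < 0.80:
        report = actual
        correct += 1
    else:
        report = "Green" if actual == "Blue" else "Blue"
    if report == "Blue":
        said_blue += 1
        if actual == "Blue":
            blue_and_said_blue += 1

print(correct / trials)                # ~0.80: accuracy over all sightings
print(blue_and_said_blue / said_blue)  # ~0.41: accuracy over "Blue" reports only
```

Both numbers come from the same simulated witness; which one is “the” probability depends entirely on which set of events you count over.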

Michael Levin, in his article, sums up how I came to understand the problem yesterday, with some additional nice mathematics for people who like that sort of thing. The key part is here:

“Reliability” should be explicated so as to preserve the apparent truism that someone equally reliable at two tasks — such as shooting for two different regiments, or identifying cabs of different colors — is equally likely to succeed at both. This principle is violated by the “Bayesian” analysis I have criticized. For let us assume, as does the received analysis, that Witness is precisely as reliable about Greens as about Blues, i.e., (5) and (6). To evaluate the probability that the errant cab was Green if Witness says it was, switch h with -h and w with -w in (7); P(-h|-w) is then (.8 x .85) ÷ [(.8 x .85) + (.2 x .15)] = .95. That P(-h|-w) ≫ P(h|w) — the cab is more likely to have been Green if Witness says Green than to have been Blue if Witness says Blue — shows that, whatever we are discussing, it is not the probability that Witness is right.
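Levin’s comparison is easy to verify numerically. The sketch below recomputes both posteriors from the same base rates and the same 80% reliability; note that the quoted passage prints .95 where direct computation gives roughly .96, presumably a rounding difference in the original text:

```python
# Posterior for Green given the witness says "Green".
p_green_said_green = 0.85 * 0.80  # cab Green AND witness says Green: 0.68
p_blue_said_green = 0.15 * 0.20   # cab Blue AND witness says Green: 0.03
p_green_given_green = p_green_said_green / (p_green_said_green + p_blue_said_green)

# Posterior for Blue given the witness says "Blue".
p_blue_said_blue = 0.15 * 0.80    # cab Blue AND witness says Blue: 0.12
p_green_said_blue = 0.85 * 0.20   # cab Green AND witness says Blue: 0.17
p_blue_given_blue = p_blue_said_blue / (p_blue_said_blue + p_green_said_blue)

print(round(p_green_given_green, 2))  # 0.96
print(round(p_blue_given_blue, 2))    # 0.41
```

An equally reliable witness ends up with wildly unequal posteriors for the two colours, which is exactly Levin’s point: whatever these numbers measure, it is not the probability that the witness is right.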

What I had thought of for a long time was the idea that the Bayesian analysis couldn’t be right because the probability of it being a blue cab or a green cab had to, logically, be identical to the probability that the witness had identified the cab properly. That’s what saying that they can identify the colour of a cab reliably 80% of the time means. What I should be able to do, then, is take the final probability of the cab being blue given that the witness identified it as blue and sub it into the probability that the witness identified the colour of the cab correctly (in this case, as blue). But remember that the probability that the witness identified the colour of the cab correctly was our initial probability, which means that to do that properly you’d have to run it through the Bayesian analysis again, which would change the results, which would lead to an infinite progression until you got to 0, which can’t be what you wanted.

To work around this, you have to argue one of two things:

1) That the probability that the witness can identify the colour of the cab correctly isn’t what was measured, but is the result of the Bayesian analysis. This leads to the looping above and makes the measurement pointless and suspect.

2) That the probability that the witness identified the colour of the cab correctly is not the probability that the cab was the colour the witness identified it as. But written out like this, it seems obvious that the probability that the witness identified the colour of the cab correctly is identical to the probability that the cab was the colour they said it was. That seems to be what that means, most of the time.

So, as Levin says:

What we are discussing, when Bayes’s Theorem comes into play, is the cab’s likely color when we do not know the probability that a cab is the color Witness says it is. Background information, including base rates, then becomes pertinent. If most cabs are Green, the cab Witness saw very likely was Green, all else equal. If in addition most of the time Witness will say a cab is Green when it is, and say it is Blue when it is, the cab he saw is almost certain to have been Green if he says Green — but less certain to have been Blue if he says Blue. Many situations, like this one, involve an indicator of unknown trustworthiness. We know the odds that a subject with clogged arteries will feel fatigue, and the odds that a subject with normal arteries will feel fatigue. What we would like to know is the specificity of fatigue, the probability that someone feeling fatigue has clogged arteries. In such cases we should not say we know how well fatigue predicts clogged arteries. Did we know that, further information would be superfluous. Indeed, knowing an indicator’s trustworthiness and what the received analysis calls “trustworthiness” would allow us to solve for the base rate.

You don’t and can’t use Bayesian analysis when one of the probabilities you are using in and of itself determines what the final probability is. That’s precisely the mistake that’s being made here. So if you are going to use Bayesian analysis, you need to be very careful to ensure that you don’t fall into this trap. If you do, you will end up with very counter-intuitive results that look right mathematically but fail logically. Which explains my problem with it, since I’m far stronger logically than mathematically, and so insisted that the logic couldn’t be violated even though the mathematics said it could.

