So, lately, Richard Carrier has been posting very long and very snarky and insulting posts talking about philosophy in general, with a particular focus on ethics and religion. The problem is that the length of those posts, combined with the time he takes out to insult the people he’s criticizing and anyone who might hold a similar position, makes them very hard to read, and would do so even if I agreed with him … which, of course, I generally don’t. In fact, it’s worse when I disagree with him, because the insults don’t add anything to the conversation for me, and yet I feel that I really should read through the entire post to make sure that he doesn’t make a point that, well, makes sense, or address some of my objections.
However, there are, at least on occasion, things in them that I want to address, so I’m going to try. In this post, I’m going to address his comments on Plantinga’s Tiger thought experiment and the Evolutionary Argument Against Naturalism, which he discusses in two posts.
So, what is this thought experiment, and what is this argument? Well, Plantinga wants to argue that if accepting evolution and naturalism would provide a defeater for our belief that evolution and/or naturalism are true, then we would have reason to reject at least one of them and accept some other view, which he suggests would be a theistic view. How he tries to get there is to argue that if our cognitive faculties were selected for by evolution with no supernatural influence, then they were selected for simply on a survival basis; they were selected for because using them allowed the organism to survive longer and/or reproduce more than those who didn’t have them or didn’t use them. In general, we assume that propositions have to be true in order to provide a survival benefit; if you don’t have an accurate view of reality, you probably aren’t going to be able to survive for very long. But this is exactly what Plantinga wants to challenge with his thought experiment.
What he argues is this: our behaviour is determined, in some sense, by our beliefs and desires. We believe some facts about the world, have some desires, decide that, given the facts we have about the world, a certain action will achieve our desires, take that action, and then see what happens. If it works, we assume that our reasoning was correct and that we have the right facts, and if it doesn’t we assume that either our beliefs or our reasoning about what would be a good action is wrong, and so try to correct one of them. But Plantinga argues that we can derive the right behaviour from the wrong beliefs, as long as the wrong beliefs are assembled in the right way. And that’s what his thought experiment is aimed at showing. It supposes that if a human comes across a tiger, the right behaviour in terms of survival is to run away. So if the person believes that a tiger is a threat, and that the right behaviour to ensure their survival is to run away, then that is what they will do, and they will succeed. But if the person believes that the tiger, say, wants to play tag, and that the best way to play is to run away from it, they will also run away, and they will also succeed. Because Plantinga wants to make an argument that the probability of getting true beliefs just from pragmatics is low, he argues that there are many, many ways to assemble appropriate false beliefs, but only one way of assembling true ones, so a cognitive faculty that was selected for on the basis of pragmatics (the beliefs it produces work out) would be unreliable. And if our cognitive faculties are so selected and thus are unreliable, then any belief produced by them is produced by an unreliable mechanism and so cannot be trusted. But our beliefs in naturalism and evolution are produced, he argues, by those mechanisms, and thus would be unreliable. So we’d have a defeater for those beliefs if we accepted them, and so it’s not rational to believe them.
Now let me generalize a bit from this to make it clearer, because I believe that some of the things he assumes aren’t things he has to assume to make his point. First, I think that his view can be summed up as this: selecting for pragmatics isn’t the same as selecting for truth, and so you cannot assume that a mechanism that produces useful beliefs is also producing true ones. Second, I don’t think he needs to argue that the probability of getting true beliefs is low, just that such a system will produce false beliefs a significant amount of the time. If it does, then we’d have to doubt any belief we have, since we wouldn’t know, and couldn’t tell, which of our beliefs were true and which were conveniently false, which would provide our reason to doubt evolution and naturalism.
Now, before I get into Carrier’s counters, let me briefly outline what _I_ think is the best counter to it. While it’s relatively easy to find one case where we can massage beliefs to get the right behaviour, most of our beliefs are used in a number of situations. It’s a lot harder to imagine a set of false beliefs that would work in every case where those beliefs are relevant to our behaviour, and harder still to assemble a consistent set of false beliefs that works with every other belief we have in the situations where those other beliefs are relevant to our decisions. You can do it, but it usually involves a lot of workarounds and patch-ups, and at the end of the day it really seems like just having true beliefs is far less complicated and actually works far more of the time. In isolation, the thought experiment might seem plausible (many people, however, don’t find it plausible even there), but as we build out a full Web of Belief it starts to look quite implausible.
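To make that a bit more concrete, here is a minimal toy sketch in Python; it is my own illustration, not anything Plantinga or Carrier offers. It assumes that a belief system can be collapsed down to the action it recommends in each situation, and that a “conveniently false” belief system’s recommendations have no systematic connection to what actually works. The situations, actions and numbers are made up purely for illustration.

```python
import random

# A toy illustration of the "Web of Belief" counter above.
# Assumptions (mine, purely illustrative): a belief system is reduced to the
# action it recommends in each situation, and a "conveniently false" system
# recommends actions with no systematic link to what actually works.

random.seed(0)

ACTIONS = ["run away", "approach", "hide", "ignore"]

# The survival-correct action in each situation (the standard the world enforces).
SITUATIONS = {
    "hungry tiger nearby": "run away",
    "ripe fruit on a low branch": "approach",
    "storm coming": "hide",
    "harmless beetle on a log": "ignore",
    "rival scouts spotted": "hide",
    "fresh water downhill": "approach",
}

def random_false_belief_system():
    """Assemble beliefs with no regard for truth: map each situation to some action."""
    return {situation: random.choice(ACTIONS) for situation in SITUATIONS}

def works_everywhere(system):
    """Does this belief system produce the survival-correct action in every situation?"""
    return all(system[s] == correct for s, correct in SITUATIONS.items())

TRIALS = 100_000
survivors = sum(works_everywhere(random_false_belief_system()) for _ in range(TRIALS))

# With one situation, roughly 1 in len(ACTIONS) random systems "works"; with k
# independent situations only about (1/len(ACTIONS))**k do, so a system that merely
# happens to work in the tiger case almost never keeps working as the web grows.
expected = TRIALS * (1 / len(ACTIONS)) ** len(SITUATIONS)
print(f"Systems that work in all {len(SITUATIONS)} situations: "
      f"{survivors} of {TRIALS} (expected about {expected:.0f})")
```

The toy obviously understates the real problem, since it ignores the further requirement that the false beliefs be consistent with each other, but it shows how quickly “happens to produce the right behaviour” stops being likely once the same beliefs have to pay off in more than one situation.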
So, then, what are Carrier’s counters? The first one I’d like to look at is the idea that Plantinga’s notion of beliefs and desires producing behaviour isn’t an accurate way to describe our cognitive faculties:
He does so by simply asserting all that happens in cognitive evolution is the natural selection of belief-desire pairs (“see tiger, run”). That’s wildly false. As I already noted, it’s total bullshit, worthy of Ken Ham’s Ark Encounter. You can’t explain the reliability of human vision in accessing reality (which gets us mostly to a correct basic physics of the world, like where objects are and what shapes they have and what the patterns of their geometry and movement tell us about reality, e.g. discriminating a live tiger from a dead one, or a tiger from a wildcat, or an angry tiger from a guy in a tiger-pelt cloak), and the evolution of human hypothesis-testing in accessing reality (which gets us things like “a spear will kill a tiger”), with the same selection model at all, much less Plantinga’s nonsense about “belief-desire pairs.”
The problem is that even if Plantinga insists that our cognitive faculties work entirely on “belief-desire pairs” (and it isn’t clear that he does), this doesn’t matter, because it still works as a relatively accurate description of how we act in the world based on the facts we have, which is the only thing that evolution or even pragmatics can work on. We have facts about the world, things we want to accomplish, and we use those to determine how to act in the world. Whether those facts are formed by reasoning into an explicit belief proposition or by an automatic parsing of sense data, in the end, for everything we know beyond the instinctual (and possibly even in the instinctual), we have a fact and a desire, and combining the two gives us an action that we can take to see whether it works out in the world, which many people then use to conclude that the facts and the reasoning are true. If Carrier is going to deny this, then he’s going to have no way to actually test any cognitive faculties, and will undercut his own pragmatic arguments. So this argument is entirely irrelevant to the debate.
His better counter is to introduce a distinction between our innate cognitive faculties and our developed ones, like logic and science. The former are clearly produced by evolution and are also, according to Carrier, actually pretty bad, which is just what he’d expect given evolution. But they are good enough to allow us to develop those other cognitive faculties, and those ones work really well, and those are the ones that justify evolution and naturalism:
Plantinga is explaining the wrong thing. He thinks innate faculties have to generate scientific knowledge. False. All they have to be able to do is generate the ability to discover a technique (like the scientific method). And then the technique generates scientific knowledge. Using those underlying faculties. But it is not the faculties alone that are doing it. Those faculties have to be manipulated according to a procedure, one not evolved, and not innate in the brain (nor easily learned…remember, no human learned it for hundreds of thousands of years; and no human learns it today, unless they are taught it by someone else).
Note that we don’t even need evolved faculties that generate the techniques that can gain greater access to world knowledge. We only need evolved faculties to have the ability to generate those techniques. And observe history: that’s what happened. Our evolved faculties did not just generate those techniques (in the way they readily generate, for example, knowledge of object permanence). They failed to do so for countless thousands of years of countless millions of humans tinkering around and exploring different techniques. That that process would stumble across the cognitive tools we now use (science, math, logic) was statistically inevitable; it just would require a really long time. And lo and behold, we observe that’s exactly what it did. Evolution by natural selection is confirmed. Intelligent design is refuted.
(Note that Carrier tends to use a lot of emphasis, which doesn’t come across when I copy from his posts. I’m too lazy to put it back in, but it’s in the second link above if you want to see it).
But the problem with this view is that simply generating a technique or cognitive faculty isn’t enough. It has to be a technique that we can verify is reliable, meaning that it produces true beliefs more often than it produces false ones, and preferably that produces true beliefs almost all of the time. If our underlying faculties are in general pretty poor at doing that, we can’t use them to verify these new faculties. And since that’s all we have, there doesn’t seem to be a way that we can justify that these faculties are reliable, and so no way we can know that they are reliable. And if we can’t know that they are reliable, then Plantinga has his defeater.
Now, Carrier would argue that we test everything against survival, or rather that the new faculties work in the way outlined above: they produce beliefs that, when we act on them, actually work out as expected. Our innate faculties work well enough to be useful and to help us produce tools, but when we acted on them we found them to be insufficient, since they are wrong way too often; when we act on science or logic, on the other hand, things work out far more often, and so those faculties are more reliable, and we judge them reliable because, well, they do indeed work. This is probably about the only way we can go … but note that, as I commented above, this just retreats to justifying our cognitive faculties on the basis of pragmatics, and Plantinga’s argument is that trying to use pragmatics to justify our cognitive faculties doesn’t work, because we can have a large number of sets of false beliefs that happen to produce the correct behaviour, meaning that those false beliefs happen to “work”. Thus, Carrier either gives Plantinga his defeater, or else uses the precise justification that Plantinga is attacking. Carrier does take a run at this attack:
It is not true that we can build consistently false beliefs about the world and still successfully navigate it. False beliefs kill you, e.g. a hungry man who runs from a tiger will starve; a hungry man who endeavors to kill the tiger, will eat. More to the point, if you don’t know you can’t hide from a tiger inside of a coconut; that objects don’t cease to exist when they move behind other objects; that you can’t summon water on a journey; that a leaf doesn’t hold enough water to drink in a day; and on and on and on, you will not do very well at survival compared to someone who does know those things.
But while a number of these things seem right, and even his attack on Reppert in the first post cited above is not unreasonable, nobody thinks that the relevant sets of false beliefs are this trivial. Even Plantinga’s example is more complicated than this. Carrier, it seems to me, really needs an argument like the one I outlined above (which may be buried somewhere in his posts): that as we get into more and more complicated actions and interconnected beliefs, creating consistent sets of false beliefs that always work is simply untenable.
So Carrier’s counters aren’t all that strong, and certainly not strong enough to justify the harsh tone he takes. That being said, I don’t think Plantinga’s argument works either.
Thoughts on Beast Wars and Beast Machines
September 25, 2017

So, the last segment of my spin through Transformers was the CGI-based series “Beast Wars” and “Beast Machines”. For the most part, I think both of these series were definitely hampered by the move from the longer first season of “Beast Wars” to short, 13-episode seasons, although “Beast Machines” suffered more than “Beast Wars” did.
While the post-movie Transformers cartoon definitely tried to take on more mature and darker topics than the original cartoon, these series went even further, although oddly, while they were definitely more serious, they weren’t typically darker overall, at least in “Beast Wars”. There was still a huge sense of fun that the post-movie cartoons seemed to lack, although that fun was largely missing from “Beast Machines”. So ultimately these series started down a path of having more detailed and involved plots, characterizations and character arcs, which worked really well. They both tended not only to make these more detailed, but also to have more of them, all going on at the same time, which allowed them to advance multiple arcs in the same episode while the episode as a whole focused on one of them or, at times, none of them.
The thing is that if you’re going to have that many involved and detailed arcs going on at the same time, you really need the time to develop them all. In the first season of “Beast Wars”, there were enough episodes and few enough arcs that this could be done. But when the seasons shortened to 13 episodes, there wasn’t enough time to develop them all and still develop and resolve the main plot for the season. Season 2 of “Beast Wars” didn’t suffer from this as much, because it could utilize what was developed in the first season. But the third season struggled a lot more with this, ending up with a number of arcs that seemed rushed (Tigerhawk, for example, resolves the Tigatron/Airazor kidnapping plotline by showing up to fulfill some kind of prophecy in one episode and then dying the next), which really hurt those arcs. The Dinobot clone is another example. After the wonderfully done death of Dinobot earlier, this whole arc would have to be handled carefully, but it could have been done, especially given its ending. But the clone wasn’t properly developed and there wasn’t room to really go into detail with it, so instead the whole thing seems less than monumental. At least it didn’t feel like it ruined that original wonderful arc, but it certainly was far less than it could have been and seemed almost superfluous.
“Beast Machines”, however, suffers the most from this. For the most part, this series can’t utilize what happened in “Beast Wars” because it’s a new series, back on Cybertron. It also has a mystery to resolve and a clash between the organic and the technological to resolve, as well as a number of character arcs. And it has to do it in … 26 episodes. It fails to do that, and in so doing makes many of the arcs seem rushed and, ultimately, unsatisfying, as well as a bit confusing. For example, the arc of Tankor really being Rhinox, who then sets out to trick Megatron and Optimus Primal into destroying each other along with the organics that Optimus was protecting or trying to revive, is a good one … but it’s hampered by there not being time to show Rhinox developing his hatred of organics or, in fact, to actually explain it, and then Rhinox is defeated after only a few short episodes, which loses the series an interesting antagonist. And then his redemption arc takes place in a short scene in the first episode of the next season. At that point, the arc really seems like a waste.
And this happened to so many arcs, even ones that carried on throughout both seasons. Blackarachnia’s attempts to restore Silverbolt and, once that happened, having to deal with his guilt and cynicism. Cheetor’s development into a more mature leader. Optimus’ growing obsession and mysticism. They even manage a late romance for Rattrap … started and resolved in the last couple of episodes, and tying too conveniently into the plot of those final episodes. These ideas were all good and could have been great … but they simply weren’t developed enough, and so in general they come across a bit flat.
“Beast Wars”, though, is still a pretty good series, especially across the first season and a half. “Beast Machines” is merely okay, and a lot of that comes from it being a continuation with characters that we already know and like. It was still worth watching, though.