In Defense of Thought Experiments …

So, a common refrain I hear when people who are not philosophers talk about philosophy is disdain for thought experiments. “What can we possibly learn from such artificial examples, so disconnected from real life?” This complaint is usually deployed to argue that philosophers are ivory tower intellectuals, more concerned with intellectual wanking than with solving real-world problems, which is then used to justify ignoring philosophy and focusing on more “realistic” approaches, like science. The problem is that this is, in fact, based on a complete misunderstanding of what thought experiments are meant to do and how they work. I’m going to try to correct this misunderstanding by focusing on how important thought experiments are for determining the morality that we ought to use to guide our everyday actions.

The first thing to note is that thought experiments are, in fact, very similar to scientific experiments, in both purpose and in how they work. Both aim to test theories, and both have to do so by focusing on specific elements of those theories while filtering out all of the confounds. As such, scientific lab experiments are themselves incredibly artificial. You don’t test, for example, the ideal gas law by going out and experimenting in the middle of the park. No, you do it in a sealed container or lab where you can control the temperature, pressure and volume of the gas as much as you can. This holds for all scientific experiments, and is particularly important in psychological experiments. And yet people rarely complain — well, except sometimes about psychology — that these experiments are too “artificial” and so don’t really reflect “real life”. In general, the experiments are designed to isolate the specific parts of real life that we want to study without invalidating what really happens in real life.

Thought experiments work the same way. What we want to do with a thought experiment is isolate the particular notions that we want to examine (or argue for) without the confounds that real life might introduce. Thus, in the trolley thought experiment, we want to isolate the “5 will die if you don’t act, and 1 will die if you do” aspect in order to test whether or not our intuitions lean more Utilitarian. The experiment is also designed to avoid the confound of the impact of taking a direct action to kill someone, which might have a moral status of its own; a Stoic or Kantian, for example, wouldn’t be permitted to take a direct action to kill someone, so if a person decides to pull the switch they would definitely be using Utilitarian reasoning. And as it turns out we still missed a confound, since when we change it to the “push a person in front of the train” example many people change their minds on the permissibility of the action. So the examples need to be simplified, and therefore “artificial”, to allow us to test what needs to be tested.

Also, it is important to note that we want to get at what people really, at heart, intuitively — or even through explicit reason — think is the case. What we don’t want is for them to merely regurgitate an answer that their culture has specifically drilled into them for those specific cases since childhood. So, we can expect that everyone will have a ready answer for any normal, everyday situation, but that answer might be one that was generated for them from what they learned about morality in their childhood, or a conclusion that they generated from a moral viewpoint that they no longer hold. Thus, we want to give them a situation that they don’t have a ready answer for, one that they will have to think about and engage their moral reasoning or moral intuitions on. That means, then, making an “artificial” example.

Now, the objection is constantly raised that a moral system can’t be expected to handle situations outside of the experience of the person and/or of human society in general. It is, they assert, an ad-hoc system cobbled together by evolution or something to handle interacting in society, and so isn’t designed to handle things too far out of normal experience. Thus, the answers that are given when we create these artificial experiments just aren’t valid; moral systems can’t and don’t need to handle such outlandish cases.

The problem is that we, in our everyday lives, may well come across cases that our moral system wasn’t originally designed to handle. Taking the “evolution” example, even if we went back only 100 years — let alone the thousands or millions that evolution would cover — people couldn’t have conceived, outside of science fiction works, that we’d have to deal with the morality of stem cell research. They couldn’t have conceived of such a thing being possible, and it certainly wasn’t something their experiences in everyday life could have prepared them for. If we are going to rely on a system that was developed or strongly influenced by factors so far in the past that most of the moral dilemmas we face today couldn’t even have been conceived of, we had better have some confidence that it can actually handle those moral dilemmas. If we abandon it because it “wasn’t designed for those questions”, then what are we going to use to settle them? If we limit our moral systems to handling only those cases that we already know, understand, and have ready answers for, then what happens when we end up in a situation where we don’t? Are we just going to muddle through without using any moral system at all and hope we get the right answer? Better hope we get it right the first time, then.

This approach would make moral systems meaningless. All we would be able to do is regurgitate the answers we picked up from … somewhere, and any time we ended up in a sufficiently new situation we, if we were being honest, would have to declare our moral intuitions and moral reasoning suspect. There’d be no point in talking about even an evolved moral compass or moral system, because we could never trust it to be right except in those cases where we have at least declared that it worked right in the past. But, of course, even then we’d have no idea if it really worked right in the past, because those would have been new situations, too. Ultimately, making this work requires utter confidence in some overarching moral principle — like increasing happiness — that we can use to assess the results of an action to determine whether it was the morally right one or not. Of course, we can then use that same principle to assess future actions as well, and even the artificial “thought experiment” ones, to see if they work out.

So this leads to another common protest, which is essentially that thought experiments are artificially designed in such a way as to invalidly generate an “immoral” result from the basic principles or system that the person is using. Since the deck is stacked against a specific view in the first place, the fact that the view “fails” the experiment doesn’t say anything about the moral view itself. So we can’t use these sorts of experiments for the purpose they are most commonly used for, which is to support one moral view over another.

Here, the issue is that, in general, this objection is raised about cases where the person who holds that moral view concedes that they applied their view to the example and came up with an answer that they themselves consider immoral. The person who holds that moral view is always able to respond by “biting the bullet”: simply stating that, despite the intuitions of the person proposing the experiment or even despite their own intuitions, the answer really is the morally right thing to do. So if the person retreats to this objection instead, we can see that what they have is a contradiction in their moral system: when they apply their moral system, they come up with an answer, but when they assess that answer against their moral intuitions, they nonetheless believe strongly that the answer is immoral. A moral system cannot survive such contradictions, because that would mean that anyone who tried to follow it risks taking what they think are moral actions that, after they act, they consider horrifically immoral. Thus, any such case reveals a contradiction in their view that they need to resolve, and so cannot be dismissed so blithely.

Ultimately, thought experiments are designed for and perform a very important task: testing moral systems. As such, they need to a) engage the moral systems directly and b) challenge them. While some experiments might be too contrived or artificial to work, if you find yourself protesting that a thought experiment that challenges your own view is too artificial, you really should consider whether it is the challenge, not the experiment, that is the problem.


2 Responses to “In Defense of Thought Experiments …”

  1. Andrew Says:

    2 pushbacks

    * There’s a distinction between hypotheticals and counterfactuals. It’s perfectly reasonable to posit realistic situations. It’s unreasonable to expect moral systems to be robust against circumstances that are “impossible” within their framework.

    * There’s a degree of pragmatism in every liveable moral system. Extreme situations are useful to probe the grey areas (“Would you lie to hide an innocent from someone trying to arrest them to kill them?”), but the failure of a moral system to provide a clear answer in such situations doesn’t necessarily invalidate it.

    • verbosestoic Says:

      1) Note that philosophically, counterfactuals tend to refer to things that AREN’T true, not just things that CAN’T be true. But when it comes to evaluating morality specifically, I think it is reasonable to expect moral systems to address situations that are PHYSICALLY impossible, meaning that the conditions could not arise in this world. The “impossible” circumstances that we couldn’t expect them to handle would be those that can’t happen given their moral system. So, for example, going after Stoicism by only positing situations where all possible actions — including inaction — are vices; it might be possible for that to happen, but the moral system can’t be expected to have a really clean way of handling those cases unless those cases are really common. And we’d still expect it to have some kind of way to address those cases, and so the thought experiment would only work against a claim that such circumstances could NEVER arise; if the moral system accepts that these can happen, it would be expected to — and likely would — already have a way to handle them, even if that way is “Pick whichever option you want for non-moral reasons, because morally there is no choice here”.

      2) As I think I said, the main use of them is to trigger a “This is what your moral code says is right, but you have at least a strong intuitive sense that it’s wrong”. In other posts, I’ve pointed out that this sort of argument is vulnerable to “biting the bullet”, where you insist that, nonetheless, it’s still the morally right thing to do. Ultimately, though, any moral system has to have an opinion on how much pragmatism is going to count in determining whether the person acted morally. This is one reason why I distinguish between “understandable” and “morally right”. Under Stoicism, being asked to let your spouse die instead of acting immorally is going to be very, very hard for most people, and since no one is a true Stoic sage many if not most people will fail at that. So it’s understandable if in those specific cases someone fails to live up to their moral obligations. But that view of pragmatism doesn’t extend to whether or not what you did was morally right. The only other kind of pragmatism involves things like having a lack of perfect knowledge, but I believe that any moral system can only go with what the moral agent has reasonable access to, and so yes, that would be taken into account, and a thought experiment that depended on perfect knowledge would be invalid on those grounds alone.

      So, ultimately, when you’re doing moral philosophy most of these factors are or can be taken into account when appropriate. If someone thinks that one of them is indeed a valid reason to claim that a particular thought experiment doesn’t apply, the right course is to argue for that, not to argue merely that the thought experiment is artificial and that thought experiments in general are unfair or not useful.
