So, a common refrain I hear when people who are not philosophers talk about philosophy is disdain for thought experiments. “What can we possibly learn from such artificial examples, so disconnected from real life?” This is often used to argue that philosophers are ivory tower intellectuals, more concerned with intellectual wanking than with solving real-world problems, which is then used to justify ignoring philosophy in favour of more “realistic” approaches, like science. The problem is that this is, in fact, based on a complete misunderstanding of what thought experiments are meant to do and how they work. I’m going to try to correct this misunderstanding by focusing on how important thought experiments are for determining the morality that we ought to use to guide our everyday actions.
The first thing to note is that thought experiments are, in fact, very similar to scientific experiments, in both their purpose and how they work. Both aim to test various theories, and have to do so by focusing on specific elements of those theories and filtering out all of the confounds. As such, laboratory experiments are themselves incredibly artificial. You don’t test the ideal gas law, for example, by going out and experimenting in the middle of the park. No, you do it in a sealed container or a lab where you can control the temperature, pressure and volume of the gas as much as you can. This holds for all scientific experiments, and is particularly important in psychological ones. And yet people rarely complain (well, except sometimes about psychology) that these experiments are too “artificial” and so don’t really reflect “real life”. In general, the experiments are designed to isolate the specific parts of real life that we want to study without invalidating what really happens in real life.
Thought experiments work the same way. What we want to do with a thought experiment is isolate the particular notions that we want to examine (or argue for) without the confounds that real life might introduce. Thus, in the trolley thought experiment, we want to isolate the “5 will die if you don’t act, and 1 will die if you do” aspect in order to test whether or not our intuitions lean more Utilitarian. The experiment is also designed to avoid the confound that taking a direct action to kill someone might itself have a moral status; a Stoic or a Kantian, for example, wouldn’t be allowed to take a direct action to kill someone, so if a person decides to pull the switch they would definitely be using Utilitarian reasoning. And as it turns out we still missed a confound: when we change it to the “push a person in front of the trolley” example, many people change their minds on the permissibility of the action. So the examples need to be simplified and therefore “artificial” to allow us to test what needs to be tested.
It is also important to note that we want to get at what people really, at heart, intuitively (or even through explicit reasoning) think is the case. What we don’t want is for them to merely regurgitate an answer that their culture has specifically drilled into them for those specific cases since childhood. We can expect that everyone will have a ready answer for any normal, everyday situation, but that answer might be one that was generated for them from what they learned about morality in childhood, or a conclusion that they generated from a moral viewpoint that they no longer hold. Thus, we want to give them a situation that they don’t have a ready answer for, one that they will have to think about and engage their moral reasoning or moral intuitions on. That means making an “artificial” example.
Now, the objection is constantly raised that a moral system can’t be expected to handle situations outside the experience of the person or of human society in general. Morality, the objection asserts, is an ad hoc system cobbled together by evolution or something like it to handle interacting in society, and so isn’t designed to handle things too far outside normal experience. Thus, the answers given in these artificial experiments just aren’t valid; moral systems can’t and don’t need to handle such outlandish cases.
The problem is that we, in our everyday lives, may well come across cases that our moral system wasn’t originally designed to handle. Taking the “evolution” example: if we went back even 100 years (let alone the thousands or millions that evolution would cover), people couldn’t have conceived (outside of science fiction works) that we’d have to deal with the morality of stem cell research. They couldn’t have conceived of such a thing being possible, and it certainly wasn’t something their everyday experiences could have prepared them for. If we are going to rely on a system that was developed or strongly influenced by factors so far in the past that most of the moral dilemmas we face today couldn’t have been conceived of then, we had better have some confidence that it can actually handle those dilemmas. If we abandon it because it “wasn’t designed for those questions”, then what are we going to use to settle them? If we limit our moral systems to handling only those cases that we already know, understand, and have ready answers for, then what happens when we end up in a situation where we don’t? Are we just going to muddle through without using any moral system at all and hope we get the right answer? Better hope we get it right the first time, then.
This approach would make moral systems meaningless. All we would be able to do is regurgitate the answers we picked up from … somewhere, and any time we ended up in a sufficiently new situation we would, if we were being honest, have to declare our moral intuitions and moral reasoning suspect. There’d be no point in talking about even an evolved moral compass or moral system, because we could never trust it to be right except in those cases where we have at least declared that it worked in the past. But, of course, even then we’d have no idea whether it really worked in the past, because those would have been new situations, too. Ultimately, making this work requires utter confidence in some overarching moral principle (like increasing happiness) that we can use to assess the results of an action and determine whether it was the morally right one or not. But then, of course, we can use that same principle to assess future actions as well, and from there even the artificial “thought experiment” ones, to see if they work out.
So this leads to another common protest, which is essentially that thought experiments are artificially designed in such a way as to invalidly generate an “immoral” result from the basic principles or system that the person is using. Since the deck is stacked against a specific view from the start, the fact that the view “fails” the experiment doesn’t say anything about the moral view itself. So we can’t use these sorts of experiments for the purpose they are most commonly put to, which is to support one moral view over another.
Here, the issue is that, in general, this objection is raised about cases where the person who holds the moral view concedes that applying their view to the example produces an answer that they themselves consider immoral. The person who holds that view is always able to respond by “biting the bullet”: simply stating that, despite the intuitions of the person proposing the experiment or even despite their own intuitions, the answer really is the morally right thing to do. So if the person retreats to this objection instead, we can see that what they have is a contradiction in their moral system: when they apply their moral system, they come up with an answer, but when they assess that answer against their moral intuitions, they nonetheless believe strongly that the answer is immoral. A moral system cannot survive such contradictions, because they would mean that anyone who tried to follow it risks taking what they think are moral actions that, after they act, they consider horrifically immoral. Thus, any such case reveals a contradiction in their view that they need to resolve, and so cannot be dismissed so blithely.
Ultimately, thought experiments are designed for and perform a very important task: testing moral systems. As such, they need to a) engage the moral systems directly and b) challenge them. While some experiments might indeed be too contrived or artificial to work, if you find yourself protesting that a thought experiment that challenges your own view is too artificial, you really should consider whether it is the challenge, not the experiment, that is the problem.