So, you’ve been given antibiotics. I’m pretty sure everyone knows the old rule: always take all of the antibiotics, because if you don’t, you might promote antibiotic resistance, which would be bad. This was a firm rule for as long as I can remember, and so pretty much for as long as I’ve been alive. If we knew anything, we knew that taking all of the antibiotics in your prescription was the right thing to do.
Or, perhaps not.
I actually came across this earlier this week while waiting for a lunch order and watching the restaurant’s TV. A talk show was on, discussing headaches. The expert mentioned that some headaches might require antibiotics; the hostess repeated the line about always finishing them, and he rather awkwardly replied that, yeah, that might not be true anymore. She was flabbergasted, as was I. This seemed so certain. We were always told that this was the way things were supposed to work. And now it might not be? Really?
What’s next? Smoking doesn’t actually cause cancer? (What a monumentally chaotic situation that would be, eh?)
And medical science tends to be fraught with such examples. Recommended diets, for example, change frequently as new things are discovered. Are chocolate, alcohol, and eggs good, or bad? Is fat good, bad, or indifferent? How much should you exercise? Can you exercise in small amounts, or do you need longer sessions to get any benefit? And so on and so forth.
And don’t even get me started on Psychology.
Now, both of those fields at least have the excuse that they are trying to use the perfect, third-person-oriented scientific method on situations that are far more chaotic and personal than normal. Maybe all they really need to do is stop trying to universalize these principles, turn the more common ones into recommendations, and add more ways to help people determine what works for them.

I even have a couple of examples of this from personal experience. In a Psychology class I was taking, the old “constant review” rule was mentioned. The problem is that constant review bores the heck out of me, and can actually make my retention worse because I stop paying attention. You know what surprisingly does work for me? Writing everything down, even if I have notes or slides to look at. Saying it to myself in my head seems to help me remember things even if I don’t really study or review until the end.

The other example is the common “graze” advice, where you eat small meals when you’re hungry instead of having a couple of big ones. The problem for me, as I constantly tell people, is that if I tried that I’d either eat all the time or not at all, depending on what I’m doing. If I’m mentally engaged in something and not thinking about food, then I won’t even notice I’m hungry (Star Wars: Rebellion, I’m looking at you here). But if I’m sitting around just reading or watching TV, then I get bored and so become more inclined to eat something. So for me the best model is to have scheduled meals and even plans for what I’m going to eat.

Both of these are cases where, for others, and maybe even for most others, the standard advice would work. If I blindly followed it, it wouldn’t work for me, but that doesn’t mean it doesn’t work for other people.
But are these fields exceptions, or is science not really trustworthy?
Before I get into this, I should probably fire off a disclaimer. My first degree is actually a science degree: a Bachelor of Computer Science, granted under the Faculty of Science at my university. I took an Astrophysics course as an elective. The reason I don’t follow a former co-worker’s advice and do a Physics degree is that I can’t handle the math, not that I hate the science. I’m not an expert scientist, but I don’t dismiss any scientific discovery without at least having reasons to do so (like finding potential confounds). I oppose scientism, but not science itself. And my main approach to clashes between science and religion or philosophy is to conform the religion or philosophy to the scientific facts (whatever they are). So I’m not some kind of anti-science crusader trying to weaken science to bolster non-scientific claims.
And so, let me ask: should we trust a field that is most famous for getting things wrong?
There aren’t a lot of theories in the history of science that survived unaltered, and a large number of them were, in fact, overturned. As we have seen and are seeing, a lot of these upheavals have happened to theories that were considered rock solid for ages. Newtonian physics, for example, was found wanting: it predicted the wrong things at certain scales, and so at a minimum had to be supplemented by relativistic physics. (It’s a major bone of contention to say that relativity replaced Newtonian physics, but the more I think about it the more I think it did, because the only things that were really saved were the ones based on precise empirical measurements, and a theory that only explains what you can measure isn’t much of a scientific theory. But I digress.) Depending on what you count as science in history (and even scientists and scientismists are inconsistent about this, claiming ancient philosophers while dismissing some medieval figures who actually claimed to be doing science) you can find radical changes in pretty much every field of science. In fact, radical change is more the norm in scientific history than long-standing theories that never changed.
And in fact even one of the biggest examples of science vs religion was caused by a change in the science. Natural theologians adopted the design theory based on the mechanistic view that science was promoting at the time, only to have that base cut out from under them when science decided that, no, the world wasn’t that way and that evolution was the way to go. Science’s move here also caught Immanuel Kant, as many will criticize him for assuming that Newtonian physics was settled while discussing the phenomenal world and so “getting that wrong”, despite the facts that a) he was just saying what science thought at the time, b) he wasn’t making an argument that his philosophy implied or insisted on it, c) his philosophy didn’t really require that to be the case, and d) his most important point there was that science was the method to figure out the phenomenal world, because that world was empirical.
Science, then, has changed its views pretty frequently throughout its history, and yet rolled along, in general, touting that it finds and corrects its mistakes. However, any other field that relies on the current understanding of science and tries to build on it very much risks science undercutting it later, and then having scientismists chortle about how those fields would be so much more accurate if they just did things the way science does.
One wonders whether anyone should, in fact, rely on science for anything important at all, or instead just rely on what seems best to them given all they know.
The problem is that there are three main aspects to science. The first is strictly empirical: taking measurements of the world and tossing those into equations that capture those measurements. These are, in general, pretty accurate, but are mostly meaningless. Science can pretty much measure, for example, what speed something will fall at if you drop it at various heights, and even write equations to allow it to predict heights that it hasn’t directly measured, but that’s not all that impressive. The second is the explanation for why that happens, which starts to get into various theories. These are more speculative, but can be not too bad when the situations are controlled and the theories add on the caveat that they are true given that the situations are the same and that nothing has changed. The third is the inductive step, where the theories try to generalize to more and more situations that we haven’t and can’t measure. It’s this step that causes the most problems, because the predictions depend on the reasoning being correct and the situations not actually varying in odd ways that they didn’t think of when they came up with the theory.
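To make the first, strictly empirical aspect concrete, here’s a toy sketch of the falling-object example: fit a handful of measured impact speeds to the simple relation v² = k·h, then use the fitted curve to predict the speed at a height that was never measured. (The data and the specific numbers are idealized illustrations I made up for the sketch, not real measurements.)

```python
# Toy version of the "strictly empirical" step: measure impact speeds at a
# few drop heights, fit v^2 = k * h by least squares through the origin,
# then predict the speed at a height we never measured.
# The data below are idealized, not real measurements.
import math

# (drop height in metres, measured impact speed in m/s)
measurements = [(1.0, 4.43), (2.0, 6.26), (5.0, 9.90), (10.0, 14.00)]

# Least-squares fit of v^2 = k * h through the origin:
# k = sum(h * v^2) / sum(h^2)
k = sum(h * v * v for h, v in measurements) / sum(h * h for h, _ in measurements)

def predict_speed(height):
    """Predicted impact speed at a height not in the measured data."""
    return math.sqrt(k * height)

print(round(k, 2))                  # close to 2g, i.e. about 19.6
print(round(predict_speed(7.0), 2)) # interpolated prediction for 7 m
```

The fit captures the measurements and interpolates between them just fine, which is exactly the point: this kind of curve-fitting is accurate but says nothing about *why* objects fall that way, which is where the second and third aspects come in.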
So the first is what science can do really well, but is the least interesting, while the last is the most interesting, but the most risky. Science is going to have to correct the first the least and the last the most. But to base anything interesting on science requires using the more interesting results, which are the ones most likely to be incorrect. This even applies to simple life choices based on medicine or psychology. Sure, you can trust the doctor when he says that if you have this condition, taking this medicine will cure it, in general, because there isn’t that much variance in people or in that condition and they’ve tried it millions of times. You might not be able to trust him when he says that taking cholesterol medication will reduce your chances of a heart attack, because there are all sorts of other factors involved: risk factors, your reaction to the medication, whether you can improve your diet and exercise, and so on and so forth.
Maybe what we need to do, really, is be more careful about examining which of these three cases a purported scientific theory falls into. Scientists (and, more often, popular science media) tend to express every scientific result as if all results were equally “supported” by the weight of science, but that isn’t true. Yes, scientists are generally better at noting how certain they are in their papers, but if they find something really cool they generally emphasize the “coolness” and barely mention the “preliminary” parts, because they want the recognition and want to get money to keep looking at the cool things. Being more careful about this would certainly help.
That wouldn’t do a thing for long-standing theories that suddenly get overturned, however. But perhaps the problem is that scientists don’t do enough philosophy. Philosophers are famous for pointing out “Your theory doesn’t have to be true, because X could be the case”. I don’t think science should go full-on skeptical like philosophy does, but I do note that a lot of the problems science faces tend to be ones that philosophy would, in general, point out: potential confounds, theories strongly overreaching the data, logic that isn’t actually valid, and so on. I myself have read scientific and psychological works and found obvious potential confounds. It might be a good idea for scientists to take more philosophy (it generally isn’t required for them to take any), or to have a field like Philosophy of Science produce more philosophers whose main role is to examine scientific theories, find the places where the logic isn’t working, and advise on new experiments to run or new data to gather.
Perhaps that could be a new career for me! If I could handle the math, that is …