P.Z. Myers, in his own inimitable way, is taking on the idea, which some prominent people have at least expressed sympathy for, that we are more likely to be living in a simulation than not. He is forming his opinion based on a video by Sabine Hossenfelder. As it turns out, I’ve taken on a video of hers before, one on free will that was referenced by Jerry Coyne (so the two of them at least have her in common despite their sniping at each other). And I think the same comments I made there also apply to this video: she feels free to opine on these topics without really understanding them well enough to justify her confidence in her position.
Let me start with what the simulation hypothesis actually is. She references Nick Bostrom’s argument, which as I understand it runs like this: if it is possible to simulate a world containing simulated consciousnesses, then a sufficiently advanced civilization will create such simulations, and each such civilization will create one if not more simulated worlds. This means that if these simulations are perfect, there will be a significant number of perfect simulations whose inhabitants will not be able to tell that they are simulations. Thus, if we take the total number of “worlds” like ours, a significant number of them will be simulated, and so there is a significant likelihood that our world is, indeed, one of those simulated worlds.
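The counting step of the argument can be sketched with a few lines of code. All the numbers here are made up purely for illustration; the argument only needs that, under its assumptions, simulated worlds vastly outnumber real ones:

```python
def fraction_simulated(real_worlds: int, sims_per_world: int) -> float:
    """Fraction of all worlds that are simulations, assuming every real
    world runs sims_per_world simulations whose inhabitants can't tell."""
    simulated = real_worlds * sims_per_world
    return simulated / (real_worlds + simulated)

# Toy numbers: if each of 10 real civilizations runs 1000 perfect
# simulations, a randomly chosen world is almost certainly simulated.
print(fraction_simulated(10, 1000))  # 10000/10010, roughly 0.999
```

The exact figures don’t matter; the point is that as soon as each real world spawns more than a handful of indistinguishable simulations, the odds that any given world, including ours, is one of the simulated ones go to near certainty.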
She starts by dismissing one of the more common objections to the idea:
The point I have seen people criticize most frequently about Bostrom’s argument is that he just assumes it is possible to simulate human-like consciousness. We don’t actually know that this is possible. However, in this case it would require explanation to assume that it is not possible. That’s because, for all we currently know, consciousness is simply a property of certain systems that process large amounts of information. It doesn’t really matter exactly what physical basis this information processing is based on. Could be neurons or could be transistors, or it could be transistors believing they are neurons. So, I don’t think simulating consciousness is the problematic part.
The first problem here is that she assumes that it doesn’t matter what physical basis these information-processing systems have, but then notes that consciousness is a property of “certain systems”. Which systems? Well, for her, that would at least be the ones with brains, and obviously simulated people will not actually have brains, and may not have anything like neurons. In short, we don’t know in any way that computers can be conscious, and so don’t at all know if we can simulate consciousness. And obviously if we can’t simulate consciousnesses then we can’t have simulations of consciousnesses, which defeats the argument. But let’s assume that she’s right and it’s not really that big an issue to assume that we can simulate consciousness. Then, since we don’t really know any way to tell if something is conscious except by how it acts, we run into the ruminations of Bear from .hack, who wondered, when he found that a game he was playing wasn’t going well and would start over or restore a save, what happened to the world he was leaving behind. Was he abandoning that world to evil? Or were those consciousnesses simply snuffed out? Arguing that we could definitely create simulated consciousnesses raises a host of moral and philosophical issues beyond whether we, ourselves, are simulated consciousnesses (an argument could be made that civilizations sufficiently advanced to create simulated consciousnesses are as likely as not to be morally advanced as well, and so would never actually create such a world, or at least not one where the inhabitants couldn’t tell, which would defeat the argument as well).
The second problem here is that we actually don’t know that consciousness is a property of systems that process large amounts of information, and in fact, in line with the comment above, if that really were the case then it seems like we already have many, many systems that would count as conscious based on the amount of data they process. Fortunately, we don’t think that it’s simply a matter of large amounts of data processing; at a minimum it matters what sort of information is being processed, most plausibly self-aware information. And even that is controversial. So her dismissal of the most common objection doesn’t seem well-motivated.
But since she doesn’t think this is a concern, what argument does she think works?
The problematic part of Bostrom’s argument is that he assumes it is possible to reproduce all our observations using not the natural laws that physicists have confirmed to extremely high precision, but using a different, underlying algorithm, which the programmer is running. I don’t think that’s what Bostrom meant to do, but it’s what he did. He implicitly claimed that it’s easy to reproduce the foundations of physics with something else.
Actually, no, he didn’t, because she misses what a simulation would actually do. It would not be taking a set of natural laws and trying to simulate them, but would instead be creating that set of natural laws for the simulation. So the simulation isn’t aping the laws we see, but is instead producing them. In short, the foundations of physics just are what the simulation is producing by its algorithm, and so no reproduction is required or even occurring.
However, she could fairly argue that if we are talking about simulations then we are talking about simulating a “real world”, and so the system would still have to reproduce the foundations of physics, which would then be the physics of the simulating world and not our own. Sure, but the obvious issue with that is that what matters for the original argument is that we, from inside that world, think of it as a natural world with consistent natural laws, not that the laws we experience are consistent across all real and simulated worlds. So the first counter here is that we don’t have any reason to think that these simulated worlds will indeed try to reproduce the laws of the creating world. As we saw with the video game example, a lot of life simulations deliberately do not attempt to simulate the rules of the creating world itself, but instead try to simulate worlds with other rules for various reasons. This is also true of scientific simulations. And the second is that if it’s actually difficult to simulate a world with the laws of physics of an existing world, the simulation may well be simplified to take that into account. So, then, we don’t have any reason to think that simulated worlds will necessarily have the same rules and laws as the simulating world, and so again from our perspective all we have are laws and rules, not reproduced laws and rules.
So she needs to shift, here, to an argument about it being too difficult to make a simulation that, from the inside, would really look like a natural world. She does try to make that argument:
A second issue with Bostrom’s argument is that, for it to work, a civilization needs to be able to simulate a lot of conscious beings, and these conscious beings will themselves try to simulate conscious beings, and so on. This means you have to compress the information that we think the universe contains. Bostrom therefore has to assume that it’s somehow possible to not care much about the details in some parts of the world where no one is currently looking, and just fill them in in case someone looks.
The problem is that this argument is basically an argument from processing power: we cannot build a processor powerful enough to do this all in real time, so we need an algorithm that reduces those processing demands. This, however, comes entirely from her and not from Bostrom at all, as he only needs to assume that such a world can be built; he doesn’t need to assume anything about how it is built. And in Computer Science, arguments from processing power are pretty poor ones, since we have seen time and time again that claims of “You’ll never get enough processing power to do that!” have been overturned, either by us finding ways to get more processing power out of our computers, or by finding algorithms that greatly reduce the processing requirements, or both. Massive programs that used to require massive mainframes now run on small cellphones, and we are building massive computer systems to run even more demanding programs than we’d ever imagined possible. Insisting, then, that a specific method will be required to deal with these sorts of issues is a pretty weak argument from a Computer Science standpoint.
And as it turns out, her skepticism that that method will work is a bit misplaced:
Again though, he doesn’t explain how this is supposed to work. What kind of computer code can actually do that? What algorithm can identify conscious subsystems and their intention and then quickly fill in the required information without ever producing an observable inconsistency?
There are two problems here. The first is that such systems already do exist, in video games and in the graphics processing for them, with things like draw distance. While they aren’t simulating an entire world of people, MMOs even do it for a large number of people with different perspectives. So this sort of thing already exists in our existing simulations. And while she can argue that they don’t do it perfectly, it turns out that it doesn’t have to be perfect, which is the second problem. It’s certainly not true that our world is perfectly ordered and consistent. Her own example of climate change and weather proves that even in our understanding things are far looser than we’d like. We assume that this is because we don’t know enough to make the proper predictions, but what if those really are just inconsistencies in the system? What if the odd acausal nature of quantum mechanics is simply that the system can’t keep up or, in line with climate change, that it’s just not simulating things at that level until someone actually observes it? As long as we don’t constantly see a large number of inconsistencies that we cannot explain, we will be willing to suspend disbelief and treat this like a real world. So, since it doesn’t have to be perfect, and we know that we can already simulate things well enough that beings we know are conscious are able to suspend disbelief, in a simulated world that is far better at it we are much more likely to suspend disbelief as well. This seems like far less of an issue than our being able to create consciousnesses at all: if the simulation can create a consciousness, it can probably simulate natural laws well enough to keep us fooled.
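The “fill it in when someone looks” mechanism she doubts is, in fact, a standard trick in games and procedural generation. Here is my own toy sketch (not anything from the video or from Bostrom): region contents are derived deterministically from a world seed, so nothing is computed until a region is first observed, yet every later observation of that region agrees with the first, producing no observable inconsistency.

```python
import hashlib

class LazyWorld:
    """Toy world that generates detail only on observation."""

    def __init__(self, seed: int):
        self.seed = seed
        self.materialized = {}  # regions computed so far

    def observe(self, region: tuple) -> int:
        # Generate the region's contents on first observation,
        # deterministically from the seed, so repeat looks agree.
        if region not in self.materialized:
            key = f"{self.seed}:{region}".encode()
            digest = hashlib.sha256(key).digest()
            self.materialized[region] = int.from_bytes(digest[:4], "big")
        return self.materialized[region]

world = LazyWorld(seed=42)
first = world.observe((10, 7))    # detail springs into being only now
second = world.observe((10, 7))   # a second look gives the same answer
assert first == second
print(len(world.materialized))    # only the observed region was computed
```

This is exactly the draw-distance idea generalized: the unobserved parts of the world cost nothing, and consistency is guaranteed by construction rather than by storing everything in advance.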
She concludes:
And that’s my issue with the simulation hypothesis. Those who believe it make, maybe unknowingly, really big assumptions about what natural laws can be reproduced with computer simulations, and they don’t explain how this is supposed to work. But finding alternative explanations that match all our observations to high precision is really difficult. The simulation hypothesis, therefore, just isn’t a serious scientific argument. This doesn’t mean it’s wrong, but it means you’d have to believe it because you have faith, not because you have logic on your side.
The problem is that the assumptions are not on the side of those who believe it, but on her side … and the history of Computer Science has pretty much shown her assumptions to be untenable. That doesn’t mean she’s wrong, but it does mean that we shouldn’t take her conclusions as seriously as she’d like us to.