The trumpets have sounded: the simulation hypothesis, the idea that we are all living in a simulation of our universe created by our distant descendants living in the “real” universe, is dead.
In a new paper, Italian physicist Franco Vazza, a researcher in astrophysical simulations, claims that it is impossible to simulate even a sizeable portion of the universe within the universe itself.
This conclusion seems intuitively obvious. Even if the universe is bigger on the inside, it doesn’t seem like you should be able to represent the whole thing inside itself. And if it is our descendants simulating our universe, then they would be simulating a universe like ours from within the very same kind of universe they inhabit.
The conclusion comes down to energy. It turns out that while creating information can be cheap, erasing information carries an unavoidable energy cost, and erasing information is something that has to be done in simulations all the time. Even something as simple as taking two bits and applying an AND operation to them destroys information. For example, 1 AND 0 is 0, but 0 AND 0 is 0 too. How do you know what the original two bits were? You don’t; the information is lost.
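To make that irreversibility concrete, here is a tiny sketch (my own illustration, not anything from the paper) that counts how many two-bit inputs collapse onto each output of AND; any gate whose outputs have several possible inputs is throwing information away.

```python
from itertools import product

# Count how many two-bit inputs collapse onto each output of AND.
preimages = {0: [], 1: []}
for a, b in product((0, 1), repeat=2):
    preimages[a & b].append((a, b))

for output, inputs in preimages.items():
    print(f"output {output} <- possible inputs {inputs}")
# output 0 <- possible inputs [(0, 0), (0, 1), (1, 0)]
# output 1 <- possible inputs [(1, 1)]
# Three different inputs land on 0, so the gate cannot be run backwards:
# when the output is 0, about log2(3) = 1.6 bits about the input are unrecoverable.
```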
We know that entropy and information are two sides of the same coin. Information is not just something made of thoughts; information is physical. And because it is physical, it costs energy to create and, above all, to erase. Since a simulation is, inherently, an information creator and destroyer, transforming the state at time t into the state at time t+1 over and over again, every one of those bit transformations costs energy.
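The standard way to put a number on that cost is Landauer’s principle: erasing one bit at temperature T dissipates at least k_B·T·ln(2) of energy. Here is a back-of-envelope sketch of the bound (the temperatures and bit counts are illustrative choices of mine, not figures from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temperature_kelvin: float, bits: float) -> float:
    """Minimum energy needed to erase `bits` bits at the given temperature."""
    return bits * K_B * temperature_kelvin * math.log(2)

# Erasing a single bit at room temperature (~300 K):
print(landauer_limit_joules(300, 1))    # ~2.9e-21 J per bit
# Erasing a gigabyte (8e9 bits) once:
print(landauer_limit_joules(300, 8e9))  # ~2.3e-11 J, tiny, until you multiply
# by the astronomical number of bits and time steps a whole universe would need.
```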
The second law of thermodynamics tells us that entropy must always stay the same or increase in a closed system. That means that information, in this sense, must also always stay the same or increase. If something causes information to decrease, like the AND operation above, it must be an open system that exports the excess entropy, as heat, away from itself.
The paper puts a price on this, and it is enormous: it costs more energy to simulate our universe than the universe contains. Vazza arrives at this figure in a clever way, using arguments from black holes.
Black holes, you may know, are usually collapsed stars that emit no light, not because they have no light to give, but because light cannot escape their gravitational pull. Black holes are so massive and compact that they warp the space-time around them. Clocks run slower as they approach one and stop at its event horizon, the point of no return from which nothing escapes.
Black holes also have entropy that is proportional to the surface area of their event horizon. This is one of the most bizarre results in all of physics, discovered by Bekenstein in the 1970s.
We are used to the idea that the information in something is proportional to its volume. For example, if I have a sphere full of SD cards, I expect the amount of information to scale with the volume: double the radius of the sphere and the number of SD cards I can fit inside goes up by a factor of eight. Yet for black holes, it is as if you can only spread the SD cards over the surface of the sphere. Double the radius and you only get four times more information.
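That area scaling can be made concrete with the Bekenstein–Hawking formula, which works out to roughly one bit for every 4·ln(2) Planck areas of horizon. A rough sketch using the textbook result (the kilometre-sized horizon is just an arbitrary example of mine):

```python
import math

G, C, HBAR = 6.674e-11, 2.998e8, 1.0546e-34  # SI units
PLANCK_AREA = HBAR * G / C**3                # l_p^2, about 2.6e-70 m^2

def horizon_bits(radius_m: float) -> float:
    """Bekenstein-Hawking information content of a horizon with the given radius."""
    area = 4 * math.pi * radius_m**2
    return area / (4 * PLANCK_AREA * math.log(2))

r = 1000.0  # an arbitrary 1 km horizon
print(horizon_bits(r))                        # ~1.7e76 bits
print(horizon_bits(2 * r) / horizon_bits(r))  # exactly 4.0: double the radius,
# only four times the information, area scaling rather than the 8x volume
# scaling you would get from a sphere full of SD cards.
```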
Many scientists believe that the universe works this way, too. They believe it is holographic, meaning that all the information in it can be represented on its bounding surface, as if it were being projected inward from that surface and only appears three-dimensional, like a hologram.
The author uses this principle to compute how much energy it would cost to encode all the information in a holographic universe into another holographic universe. It turns out that for any holographic system of any appreciable size (anything heavier than a couple of dozen fleas), it requires more energy to encode all that information than that system can contain.
This means there isn’t enough energy in the universe to simulate it down to the Planck scale, the smallest scale thought to be physically meaningful. In fact, the universe could not even store all the information needed to get the simulation started.
He goes on to calculate what it would cost to simulate just the Earth down to the Planck scale and comes up with an amount of energy equivalent to converting the entire mass of a globular cluster (a cluster of about 100,000 stars) into energy. Put another way, it is comparable to the gravitational binding energy of the Milky Way, meaning you would have to tear our galaxy apart, star by star, to gather enough energy to simulate the Earth.
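To get a feel for the scale of those comparisons, here is my own rough arithmetic in joules, using round numbers for the galaxy; these are order-of-magnitude estimates, not figures taken from the paper:

```python
C = 2.998e8       # speed of light, m/s
G = 6.674e-11     # gravitational constant, SI
M_SUN = 1.989e30  # solar mass, kg

# Rest-mass energy of a ~100,000-star globular cluster (E = m c^2).
cluster_energy = 1e5 * M_SUN * C**2
print(f"globular cluster as pure energy: ~{cluster_energy:.1e} J")  # ~1.8e52 J

# Crude gravitational binding energy of the Milky Way, ~G M^2 / R,
# with round numbers: ~1e12 solar masses spread over ~30 kpc.
M_GALAXY = 1e12 * M_SUN
R_GALAXY = 30 * 3.086e19  # 30 kiloparsecs in metres
binding_energy = G * M_GALAXY**2 / R_GALAXY
print(f"Milky Way binding energy (order of magnitude): ~{binding_energy:.1e} J")
# Both land in the 1e52 to 1e53 J range, which is why the two comparisons
# in the text are of the same flavour.
```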
If we don’t simulate the Earth down to the Planck scale but only down to the scales probed by contemporary physics, things might seem a little easier, but he determines that a physically possible computer would still take millions of years to compute even one second of Earth time. This is because the ultimate speed of a computer is set by its working temperature.
For our computer, we will use a black hole, because that is our best chance of packing enough information together to run the simulation. In fact, it is possible that one day the successors of today’s national-lab-sized supercomputers will all be black holes.
As a computer runs faster, it runs into Heisenberg’s uncertainty principle, which limits how quickly a system with a given energy can flip a bit from one state to another.
No problem, you say, we will just do everything in parallel.
This doesn’t help, however, as beautifully explained in Seth Lloyd’s 2000 Nature paper on the ultimate physical limits to computation, under the heading “parallel and serial operation”.
If you take a given amount of energy and spread it among parallel processors, you get no more total speed-up than if you put the same energy into fewer processors. The ultimate limit is the quantum transition time, which depends on the energy available to each processor. Heisenberg will not allow us to compute faster without a higher operating temperature.
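Lloyd’s point can be illustrated with the Margolus–Levitin bound: a system with energy E above its ground state can perform at most about 2E/(πħ) elementary operations per second, no matter how that energy is divided up. A sketch of the bookkeeping (one joule and a million processors are arbitrary numbers of mine):

```python
import math

HBAR = 1.0546e-34  # reduced Planck constant, J*s

def max_ops_per_second(energy_joules: float) -> float:
    """Margolus-Levitin bound on elementary operations per second."""
    return 2 * energy_joules / (math.pi * HBAR)

total_energy = 1.0  # one joule devoted to computation
serial = max_ops_per_second(total_energy)
parallel = 1_000_000 * max_ops_per_second(total_energy / 1_000_000)

print(f"{serial:.3e} ops/s with one processor")        # ~6.0e33 ops/s
print(f"{parallel:.3e} ops/s with a million of them")  # the same ~6.0e33 ops/s
# The total operation rate depends only on the total energy;
# parallelism just spreads the same budget around.
```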
So make this thing run hotter, you say.
The problem we run into is that there are temperature limits. The hottest thing we can use for computing around a black hole is its accretion disk, and that has a physical limit on the order of 10 million kelvin.
The computing speed of a black hole with a 10-million-kelvin accretion disk is still too slow even for the lower-resolution Earth.
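To connect temperature to clock speed: a degree of freedom with thermal energy around k_B·T cannot flip faster than roughly 2·k_B·T/(πħ) times per second. At the ten million kelvin of an accretion disk, that works out as follows (again a back-of-envelope of mine rather than a number from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
HBAR = 1.0546e-34   # reduced Planck constant, J*s

def thermal_flip_rate(temperature_kelvin: float) -> float:
    """Rough maximum bit-flip rate for one degree of freedom at temperature T."""
    return 2 * K_B * temperature_kelvin / (math.pi * HBAR)

print(f"{thermal_flip_rate(1e7):.2e} flips per second per bit")  # ~8e17
# Fast by human standards, but every simulated second requires updating an
# astronomical number of bits through an astronomical number of steps,
# which is how the paper arrives at runtimes of millions of years.
```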
What about using a quantum computer?
Surely our descendants wouldn’t have a classical computer simulate us.
It turns out this doesn’t matter either. All the arguments are about bits, energy, and black holes. They apply equally to quantum and classical computers.
What if the holographic principle is wrong?
After all, it is just a hypothesis.
It turns out that if the holographic principle is wrong, the problem gets worse, not better, because then the universe contains even more information to deal with: information scaling with volume rather than with surface area.
What if only we are being simulated, not the universe or Earth?
Perhaps only our minds and senses are being simulated, and nothing else is. Even so, these beings would have to simulate quite a lot of the world to present us with a physically consistent reality, even if they only simulate our brains. Such a simulation could quickly run into physical barriers as well. If the only one being simulated is you, and the rest of us don’t exist, then the job gets quite a bit easier, but in that direction lies solipsism.
So there’s no way we could be a simulation?
The only plausible way that we could be simulated in another universe, it turns out, is if that universe is quite different from ours, with different laws of physics, or if we have serious misunderstandings about the laws of physics in our own universe.
The alternative, which I find vastly more plausible, is that we live in a real universe and no one is simulating us.
Vazza, Franco. "Astrophysical constraints on the simulation hypothesis for this Universe: why it is (nearly) impossible that we live in a simulation." Frontiers in Physics 13 (2025): 1561873.