Is the multiverse dead?
A recent attack on the Many Worlds Interpretation, or multiverse, puts an old debate front and center.
Ever since probability theory was developed, people have been arguing over what it means. There are two general schools of thought: Bayesian and Frequentist.
Most people learn the frequentist approach to statistics in entry-level stats classes. In a frequentist approach to probability, all probability is grounded in a law of large numbers: the relative frequencies from repeated experiments approach a limit. For example, if I flip a coin enough times, the fraction of heads will approach 50% over time. Flipping a coin once has no real probability associated with it.
Frequentists can also extract probability by preparing many replicas of the same experiment, running each one once, and averaging over the replicas. So if I flip 10,000 coins once each, about 5,000 will be heads. Studies in medicine, such as determining the effectiveness of a vaccine, rely on a frequentist approach.
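To make the frequentist picture concrete, here is a minimal simulation sketch in Python (the random seed and the 10,000-flip sample size are arbitrary choices for illustration):

```python
import random

random.seed(0)  # arbitrary seed so the sketch is reproducible

# One coin flipped many times: the fraction of heads settles toward 50%.
flips = [random.random() < 0.5 for _ in range(10_000)]
print(f"Fraction of heads after 10,000 flips: {sum(flips) / len(flips):.4f}")

# Equivalently, 10,000 separate coins each flipped once: about 5,000 heads.
replicas = [random.random() < 0.5 for _ in range(10_000)]
print(f"Heads among 10,000 single-flip replicas: {sum(replicas)}")
```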
The Bayesian approach holds that probability is a measure of uncertainty. While for the frequentist flipping a coin once has no probability, for a Bayesian it does, because probability is grounded in prior knowledge of the possible outcomes, heads and tails, and their likelihoods. I do not need to perform the experiment to know the probability. When we look at the outcome of a vaccine study that says the vaccine is 95% effective, and apply that to a single person as a measure of uncertainty, we are taking a Bayesian approach.
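For contrast, a Bayesian assigns a probability before any repetition and revises it with new knowledge via Bayes' rule. The sketch below is purely illustrative; the hypothetical 90%-reliable "glimpse" of the coin is my own invention, not something from the article:

```python
from fractions import Fraction

# Prior: with no other knowledge, a single unflipped coin already has
# P(heads) = 1/2 -- probability as a measure of uncertainty.
p_heads = Fraction(1, 2)
p_tails = Fraction(1, 2)

# Hypothetical new evidence: a glimpse that reports "heads" correctly
# 90% of the time, whatever the true face.
p_report_heads_if_heads = Fraction(9, 10)
p_report_heads_if_tails = Fraction(1, 10)

# Bayes' rule: update the uncertainty in light of the report.
p_report_heads = p_heads * p_report_heads_if_heads + p_tails * p_report_heads_if_tails
posterior_heads = p_heads * p_report_heads_if_heads / p_report_heads
print("Prior P(heads)     =", p_heads)          # 1/2
print("Posterior P(heads) =", posterior_heads)  # 9/10
```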
A major open question in quantum physics is whether the universe is a Bayesian or a frequentist one. That is: does the probability of a quantum outcome have any meaning apart from performing the same experiment over and over? If so, what is it?
It is well known that there is a disconnect between quantum prediction and measurement. Predictions of probabilities obey the Born rule, which says that the probability of finding a particle at a point, or in a particular state, is given by the square of the magnitude of its wavefunction. The wavefunction is a complex-valued (involving both real and imaginary numbers) mathematical description of the state of a quantum system, which can be a particle, a field, or even a macroscopic object like a person.
When we perform a quantum experiment, however, we must perform it many, many times, either in a row with a sequence of particles or simultaneously with many different particles. The statistics the experiment generates validate the Born rule prediction.
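As a rough sketch of that workflow, the snippet below applies the Born rule to an arbitrary two-state wavefunction chosen purely for illustration, then simulates repeated measurements to show the frequencies converging on the predicted probabilities:

```python
import numpy as np

# An arbitrary (unnormalized) two-component complex state, for illustration only.
psi = np.array([1 + 1j, 1 + 0j])
psi = psi / np.linalg.norm(psi)          # normalize so probabilities sum to 1

# Born rule: the probability of each outcome is |amplitude|^2.
born_probs = np.abs(psi) ** 2
print("Born-rule probabilities:", born_probs)   # [2/3, 1/3]

# The prediction is only checked statistically: repeat the "measurement"
# many times and compare relative frequencies with the Born-rule values.
rng = np.random.default_rng(0)
outcomes = rng.choice(len(psi), size=100_000, p=born_probs)
print("Measured frequencies:   ", np.bincount(outcomes) / outcomes.size)
```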
This leads to the controversy: are the Born rule and its associated wavefunctions simply mathematical conveniences that allow us to predict the statistics (a frequentist approach), or are they real entities that exist apart from the repetition of experiments (a Bayesian approach)?
That comes down to how you interpret the experiments. In classical statistics, we assume the existence of hidden variables in any experiment. These variables represent the actual state of whatever we are trying to measure. Hidden variables can be quite simple like the state of a coin tossed before you look at it, which only has two values, heads or tails, or they can be enormously complex like the state of the global weather system. The existence of hidden variables means that, if we knew what they were, theoretically, we would have no uncertainty about the outcome of any experiment or measurement. Probability would be almost meaningless because every outcome would be uniquely determined.
Bayesians and frequentists, however, interpret hidden variables differently. For a Bayesian, the space of possible values of hidden variables is a very real thing, something we can quantify and use in our predictions as prior knowledge. By constraining those values, we get different answers for probable outcomes. For example, if my prior admits only two equally likely outcomes of a coin toss, heads or tails, then other outcomes, like the coin landing on its side, have no meaning, even though they are physically possible. A frequentist, on the other hand, would take that possibility into account. They would also take into account coins that are slightly unbalanced and give heads or tails more often, because they do not assume any prior knowledge of the outcomes.
In the context of quantum physics, we need to be a bit more precise than in classical statistics, where there is no question about hidden variables. In quantum physics, we don't know whether hidden variables exist or not. Therefore, it makes sense to define a frequentist approach as one that ascribes no meaning to the uncertainty associated with a single experimental outcome and makes no assumptions about hidden variables beyond allowing that they may exist. Rather, it assumes a probability is given by the measurement of numerous outcomes of an experiment. In other words, probability is a theoretical concept that approximates relative frequencies of outcomes.
A Bayesian approach, meanwhile, assumes that there is a reality associated with the uncertainty of a single experiment. In this case, relative frequencies of outcomes approximate the probability.
It seems that these might be the same, but we cannot actually prove it. We can only prove that they are "probably" the same: the frequencies of outcomes from an ever-longer sequence of experiments approach the probability, probably. This is known as the weak law of large numbers. The reason is that there is always a vanishingly small, but not actually zero, probability that an arbitrarily long run of coin tosses will come up all heads.
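A small numerical illustration of that point: for any finite run of fair-coin tosses, the all-heads outcome has a tiny but strictly positive probability, and the weak law only bounds how unlikely a large deviation from 50% is. The Chebyshev bound below is the textbook estimate, with sample sizes chosen arbitrarily:

```python
# Probability that every toss in a finite run of fair-coin tosses comes up heads:
for n in (10, 100, 1000):
    print(f"P(all heads in {n:4d} tosses) = {0.5 ** n:.3e}")

# Weak law of large numbers via Chebyshev's inequality: the probability that
# the observed fraction of heads deviates from 1/2 by more than eps is at most
# Var/(n * eps^2) = 0.25/(n * eps^2) -- shrinking with n, but never exactly zero.
eps = 0.01
for n in (10**5, 10**7, 10**9):
    bound = 0.25 / (n * eps**2)
    print(f"n = {n:>13,}: P(|fraction - 0.5| > {eps}) <= {bound:.1e}")
```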
A recently uploaded paper on arxiv.org uses this argument to attack the Many Worlds Interpretation (MWI) of quantum physics and declares it effectively dead. The original, Everettian MWI proposes that every possible outcome of an experiment actually happens in infinitely branching universes that represent even the least likely outcomes.
From a frequentist perspective, this is nonsense, because atypical outcomes that have negligible probabilities are never observed. How, then, can we say that they actually happen in some universe? All of probability theory is, instead, based on outcomes that have non-negligible probabilities, because these are the outcomes we actually measure. Indeed, we are perfectly justified mathematically in removing from our probability distributions all experimental outcomes that have what is called zero measure, meaning that they do not contribute to the overall probability. We can go even further and say we are justified in ignoring any atypical outcome, because the number of experiments that can actually be observed within the universe is finite, and so outcomes with vanishingly small probabilities are irrelevant to science. Thus, we can cut off probabilities that lie outside of some sigma number. (This is no different from regularizing any other physical quantity to stop it from ranging over unphysical values.)
A sigma number is, of course, a measure of the likelihood of an outcome being mere chance. The gold standard in particle physics for a discovery is five sigma (a sigma is simply a standard deviation), which means the outcome has roughly a 1 in 3.5 million chance of being a statistical fluke. But what if we say that the universe does not in fact contain outcomes outside of some large sigma, such as 100 sigma? That is, we cannot assume that such events ever happen. The MWI says no, these must also happen. Yet it is based on a theory of probability in which the mathematical definition of probability takes precedence over actual measurement. How is that science?
As an aside, at the start of the economic meltdown of 2007, Goldman Sachs claimed it was seeing financial events that were 25-sigma unlikely according to its models. A paper on the subject, "How unlucky is 25 sigma?", showed that this was about as likely as winning the UK lottery 21 times in a row, or an event that occurs only once in 10^135 years. Clearly, the Goldman Sachs models were simply wrong. Yet MWI would propose that there are universes where a person wins the lottery over and over again by mere chance.
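For a sense of scale, the one-sided Gaussian tail probabilities behind these sigma figures can be computed directly. The sketch below uses the standard-normal tail, which is the convention behind the "1 in 3.5 million" figure; the 100-sigma tail underflows double precision, so it is estimated from the asymptotic expansion:

```python
import math

def tail(k: float) -> float:
    """One-sided standard-normal tail probability P(Z > k)."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def log10_tail(k: float) -> float:
    """log10 of P(Z > k) from the asymptotic expansion, for very large k."""
    return (-k**2 / 2 - math.log(k * math.sqrt(2 * math.pi))) / math.log(10)

print(f"5 sigma  : 1 in {1 / tail(5):,.0f}")          # ~1 in 3.5 million
print(f"25 sigma : {tail(25):.2e}")                   # ~3e-138
print(f"100 sigma: about 10^{log10_tail(100):.0f}")   # ~10^-2174
```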
Thus, MWI appears to be a sort of Platonist idealism in modern dress, in which our theoretical notions of probability as uncertainty (Bayesianism) take on an actuality within countless improbable universes.
That does not discount the possibility of limited multiverses, such as those that appear in both Marvel and DC comics, where branching occurs only when some event has major long-term consequences; some physics models attempt to reconfigure the MWI in this way.
Other Bayesian approaches also exist that do not include unlikely actualities. Quantum Bayesianism (QBism) is an attempt to recast quantum probability as a subjective mental model, but it retains the notion of uncertainty as a feature of a local observer. In this model, all observations and uncertainties are relative to a given observer and their own perception of potential outcomes, and they cannot be objectively shared between observers who have different knowledge. A classical example of this is when I flip a coin and look at it but don't show you. For me, if I see heads, then the probability of tails has dropped to zero. But for you, who hasn't seen it, the probability is still 50%. Thus, two observers calculate different probabilities for the same experiment.
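The coin example can be stated as a one-line calculation for each observer. The sketch below is a purely classical illustration of observer-relative probability, not a model of QBism itself:

```python
import random

random.seed(1)  # arbitrary seed, for reproducibility
coin = "heads" if random.random() < 0.5 else "tails"

# Observer A has looked at the coin: no remaining uncertainty.
p_tails_for_A = 0.0 if coin == "heads" else 1.0

# Observer B has not looked: their uncertainty is unchanged.
p_tails_for_B = 0.5

print(f"The coin actually landed {coin}.")
print(f"Observer A assigns P(tails) = {p_tails_for_A}")
print(f"Observer B assigns P(tails) = {p_tails_for_B}")
```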
Applied to quantum mechanics, this means there is no objective notion of uncertainty and no objective wavefunction at all. Instead, each observer must create their own model of the wavefunction based on their own knowledge. That wavefunction is a mathematical tool into which they insert their own beliefs about probable outcomes. This makes it not Platonic but more in line with the subjectivist philosophy of probability of the early-20th-century mathematician Frank Ramsey. Yet it also rejects the frequentist approach, which defines probability in terms of observed frequencies of multiple outcomes approaching a limit.
Other, more frequentist approaches to quantum mechanics include Superdeterminism and Coherent Histories, both of which treat wavefunctions, and the probabilities derived from them, as useful mathematical abstractions for making predictions about outcomes. Superdeterminism, in particular, is the ultimate hidden variable theory, since it proposes a single universe that is completely predetermined, so the actual probability of anything happening is one. Coherent Histories presents a quantum concept of a sample space from which outcomes are drawn over time, but it relies heavily on the frequentist approach for defining quantum outcomes and reconciling them between observers.
My own five-dimensional deterministic flow approach to quantization might be considered a limited multiverse interpretation because it does not require all possible universes to exist, only the typical ones that result from the world sheets of particles. Probabilities will be the same as in MWI without atypical outcomes because these probabilities are based on the uncertainty of one's position and/or the universe's state in an extra dimension.
This will likely not settle the argument between Bayesians and frequentists any more than the argument over whether mathematics is real or a human invention has been settled. Yet the burden is on those proponents of the MWI who allow for atypical universes to justify the existence of such unobserved phenomena.