This twenty-year-old theory of quantum gravity is gaining popularity
Most people know we don’t have a working theory of quantum gravity, but it is often hard to explain why. The simplest explanation is that we don’t know how gravity behaves at the smallest scales, and, since quantum physics deals with those scales, we can’t construct such a theory. The scales at which gravity would become important in, say, a particle accelerator are so small, and thus require so much energy to probe (because size and energy are inversely proportional in physics), that we have no hope of reaching them with present technology.
Our best descriptions of quantum theory and gravity, the Standard Model of quantum field theory and general relativity, are mathematically incompatible. General relativity in four dimensions blows up as you go to smaller scales. It doesn’t make any sense. At the tiniest scales you get nothing but black holes.
All the other fundamental forces (the strong force that binds atomic nuclei, the weak force responsible for radioactive decay, and the electromagnetic force) have well-defined behavior at both the smallest and largest scales. These behaviors are known as UV and IR completion, by analogy with visible light, which is bounded by the infrared and ultraviolet at the longest and shortest wavelengths respectively. (General relativity is IR complete because it becomes a theory of non-interacting waves at the largest scales, but its UV behavior is just the opposite.)
Physical theories change at different scales, so we can’t assume gravity behaves the same way at the smallest scales as it does at the scale of planets and stars. As we probe higher and higher energies and smaller and smaller scales, the way that particles interact changes in ways that are analogous to how water goes from gas to liquid to solid as temperature decreases.
Coupling constants define the strength with which particles interact with one another, but we know they aren’t actually constant. We call them running couplings. For example, the way photons interact with electrons to create the electromagnetic field changes at higher energies. Eventually, the electromagnetic force merges with the weak force as well, so it is not only constants that change but also fundamental symmetries, through a process called spontaneous symmetry breaking.
All running couplings eventually run out, meaning they stop changing, at a particular scale. We call this scale a fixed point.
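In standard renormalization-group notation (my shorthand, not the article’s), these two ideas can be written in one line each:

```latex
% A running coupling g changes with the probe scale \mu; its rate of
% change is the beta function:
\beta(g) \;=\; \mu \frac{\partial g}{\partial \mu}
% A fixed point g_* is where the running stops:
\beta(g_*) \;=\; 0
% The Gaussian fixed point is the special case g_* = 0,
% i.e. a free, non-interacting theory.
```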
One of the nice things about the three forces other than gravity is that they approach a very simple kind of fixed point called a Gaussian fixed point. These theories can be simplified into a free, non-interacting theory plus a few correction terms that account for the most important interactions, and they still approach their fixed points. As they approach the fixed point, the interaction terms disappear and you are left with a non-interacting theory. Interaction is what drives the coupling constants to change; no interaction, no coupling, and they no longer change with scale.
For example, Quantum Chromodynamics (QCD), the theory of the strong force, approaches such a fixed point at high energies, where quarks no longer interact. This is called asymptotic freedom.
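The textbook one-loop formula makes the freedom explicit (a standard result, not derived in the article; n_f is the number of quark flavors):

```latex
% One-loop running of the strong coupling:
\alpha_s(\mu) \;=\; \frac{\alpha_s(\mu_0)}{1 + \dfrac{\beta_0}{2\pi}\,\alpha_s(\mu_0)\ln(\mu/\mu_0)},
\qquad \beta_0 = 11 - \tfrac{2}{3}\,n_f
% Because \beta_0 > 0 for n_f \le 16, the denominator grows with \mu,
% so \alpha_s \to 0 as \mu \to \infty: asymptotic freedom.
```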
Quantum Electrodynamics (QED), which describes the electromagnetic force, approaches a Gaussian fixed point too, though from the opposite direction: it becomes free at low energies rather than high ones. And although electromagnetism on its own has a finite energy at which the coupling becomes infinite, called a Landau pole, this pole disappears when the force is unified with the weak force, which is one of the most beautiful results in physics.
That is nice.
The trouble with gravity is that, in its current form, it does not approach a Gaussian fixed point. It does not become asymptotically free. If anything, it becomes more strongly interacting at smaller scales, with mini black holes appearing. That means the coupling never stops running and the interaction terms keep growing. This suggests that our current theory of gravity must be an “effective theory”, a low-energy approximation to something deeper. It is like having a theory for liquid water but not understanding that at a certain point it will freeze.
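One quick way to see why gravity’s coupling keeps growing is dimensional analysis (a standard argument, sketched here with my own notation): Newton’s constant carries units, so the dimensionless strength of gravity depends on the energy of the probe.

```latex
% In natural units, Newton's constant has dimensions of inverse energy squared:
G \;\sim\; \frac{1}{M_{\mathrm{Pl}}^{2}}, \qquad M_{\mathrm{Pl}} \approx 10^{19}\,\mathrm{GeV}
% The dimensionless coupling at probe energy E therefore grows without bound:
g(E) \;\sim\; G\,E^{2} \;=\; \left(\frac{E}{M_{\mathrm{Pl}}}\right)^{2}
% Quantum corrections blow up near the Planck scale instead of shutting off.
```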
All theories of quantum gravity have to solve this problem. Without a solution, the theory simply makes no sense physically.
Classical general relativity (as well as non-interacting semi-classical relativity) avoids the problem by ignoring everything happening at energy scales higher than the lowest possible one. Hence, they are “ground state” theories.
All physical theories have an “action”, which is a description of how the fields or parameters making up the theory behave. Classical theories always minimize the action: whatever the smallest possible action is for a given initial condition (or boundary condition), that is what the system will do. Classical mechanical systems, such as springs with weights on them, falling rocks, and planets in orbit, all minimize the action.
Quantum systems do not minimize their actions but rather fluctuate randomly around the minimal action according to a probability distribution. That distribution is related to the Planck constant as well as to the action itself. Essentially, the result you measure for a quantum system occurs with a probability related to the value of the actions of all the paths that could produce that measurement. This is the sum-over-histories formulation of quantum physics given by Richard Feynman, and it is mathematically represented by the path integral, which is a way of summing over continuous functions. It is one of the primary formulations of quantum field theory.
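In symbols, Feynman’s sum over histories looks like this (standard notation; the article itself states it only in words):

```latex
% The amplitude to go from configuration A to configuration B sums a
% phase over every possible path, weighted by that path's action S:
\langle B \,|\, A \rangle \;=\; \int \mathcal{D}\phi \;\, e^{\,i S[\phi]/\hbar}
% Paths near the classical (minimal-action) path add coherently;
% wildly different paths have rapidly varying phases and largely cancel.
```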
In order to calculate that probability from the path integral, you need a way to exclude paths with arbitrarily high energies, otherwise you end up with nonsensical answers. This procedure is called regularization. The most popular regularization introduces a cutoff for the energy, although there are others.
Once you have regularized your path integral, you then have to get rid of the cutoff you introduced, because you don’t know what it is. You do this by bundling it into your coupling constants, which are then no longer constant. This is fine, because you don’t know what the coupling constants in your initial theory (called bare constants) are either. You have to measure your constants, and what you measure are not, it turns out, the bare ones but “renormalized” constants.
This procedure is called renormalization, which is a terrible word for it. Essentially, you are creating a model for how the existence of a cutoff, and the changes your system experiences from one scale to another, would show up in your particle-accelerator measurements. It turns out it looks like the coupling constants are changing.
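A concrete example of the bookkeeping is the one-loop running of the fine-structure constant in QED with a single electron loop (a standard result, not the article’s own calculation):

```latex
% The measured fine-structure constant depends on the measurement scale:
\frac{1}{\alpha(\mu)} \;=\; \frac{1}{\alpha(\mu_0)} \;-\; \frac{2}{3\pi}\,\ln\frac{\mu}{\mu_0}
% The bare coupling and the cutoff never appear: both are bundled into
% the measured value \alpha(\mu_0). The running makes it look as though
% the "constant" changes with the energy of the experiment.
```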
Why is that so? Well, think of it like this: imagine you are looking at two cities exchanging goods and services for money in a trade agreement. You see the sum total of all these goods and services. Now, zoom into one city and you will see several communities, perhaps trading with one another. Zoom in more and you see individual businesses. At another level, you see what goes on in a single business as it trades money for labor and receives external goods. At each level, the behavior turns out to be the average, or net outcome, of all the little trades going on below it. It is the same with particles and fields.
At low energies, particle and field interactions are averages over higher-energy (smaller-scale) interactions. That is why the coupling changes: it is just the interaction strength for those averages. That strength ultimately depends on the cutoff, which in the case of the city is the individual, but you don’t have to understand how individuals interact with one another to understand how cities behave. You simply need to know that there is a cutoff, a point at which the trading stops (assuming people don’t trade with themselves).
The analogy demonstrates that renormalization is not a mathematical trick but a physical model for how a system itself excludes high-energy states from influencing lower-energy ones. We know that all physical systems do this because our measurements always return finite values.
For the three forces we understand, the way that interactions occur is relatively simple. For the most part, particles and fields act like free entities with some corrections to account for their interactions with themselves and other particles and fields. For gravity, we don’t know if that is true. Either gravity is an effective field theory for a theory that does act like that or gravity is an effective field theory for a theory that fundamentally doesn’t act like that.
Particle physicists, for the most part, hoped that gravity was like the other forces, while some other physicists felt that, because of gravity’s unique nature as a theory about the geometry of space and time, it must not be like that at all but rather a theory about geometry all the way down. You can’t take away that geometric structure, they say, and have the theory make sense.
You may be familiar with the two biggest kids on the block when it comes to solving quantum gravity: string theory and loop quantum gravity. These two theories are largely (not completely) opposed to one another in philosophy and approach.
String theory makes heavy use of supersymmetry and holography in its most popular incarnation, but deep down it relies on the Polyakov action for strings, which naturally contains a spin-2 tensor field that we can interpret as the gravitational field. The Polyakov action is renormalizable like the other forces, so the issue is showing that the theory matches what we observe. This has happened with other theories before, so it is a reasonable approach. Unfortunately, we have not yet observed anything that would suggest this is the correct description of reality.
String theory is not one theory but a collection of potential theories all stemming from an attempt to describe all matter, energy, and forces as vibrating strings. That makes it sexy and is the reason why it has had such a powerful influence over the direction of theoretical physics. No other serious contender for a quantum theory of gravity can claim to be a sweeping theory of everything.
Loop Quantum Gravity (LQG), on the other hand, uses Ashtekar variables, which allow gravity to be put into lattice form, another common formulation of the other forces. The variables look a lot like the fields in the electroweak force, obeying SU(2) symmetry, the symmetry of 2×2 unitary complex matrices. It then describes space and time in terms of loops of these variables on a lattice. These are called Wilson loops after the Nobel laureate who invented them, Kenneth Wilson. A Wilson loop is simply a product of matrices taken around a closed loop.
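Written out (in the standard continuum notation, which the article describes in words), a Wilson loop is:

```latex
% The path-ordered product (P) of the connection A around a closed curve C:
W(C) \;=\; \mathrm{Tr}\;\mathcal{P}\exp\!\oint_{C} A_\mu \, dx^\mu
% On a lattice, this becomes a literal product of SU(2) matrices,
% one per link, multiplied around the loop and traced.
```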
This makes gravity look a lot like the other forces, since the strong, electroweak, and electromagnetic forces can all be expressed as Wilson loops on lattices. The difference is that in those theories we can take the limit of the loop size to zero and get a continuum theory. In LQG, the loops have no actual “size” (or position), since size is given by gravity itself. They are simply discrete loops of Ashtekar variables that connect together. Those connections are what give space its structure, according to this theory.
Both string theory and LQG require exotic additions in order to work. String theory is particularly exotic because it attempts to unify everything into strings, but LQG also proposes these loops, which have never been seen or measured. For LQG the loops are a physical prediction, whereas for the other forces Wilson loops are more of a computational tool.
The loops and Ashtekar variables in LQG provide quanta of geometry, basically a smallest unit of geometry, and this provides a natural cutoff in scale via a “smallest area” and “smallest volume”.
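The standard LQG area spectrum (quoted here from the literature, not derived in the article) shows how this quantization works: each loop edge crossing a surface carries a spin label and contributes a discrete chunk of area.

```latex
% Area eigenvalues in LQG: edge i carries spin j_i, and gamma is the
% Barbero-Immirzi parameter (a free constant of the theory):
A \;=\; 8\pi\gamma\,\ell_{\mathrm{Pl}}^{2} \sum_i \sqrt{j_i\,(j_i + 1)},
\qquad j_i = \tfrac{1}{2},\, 1,\, \tfrac{3}{2},\, \dots
% The smallest nonzero eigenvalue sets a minimum area: a natural cutoff.
```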
LQG has unfortunately never been shown to recover general relativity at the macroscale.
LQG is often suggested to be the most parsimonious version of quantum gravity. But the reality is that it is not.