Science may be a victim of its own success
The decadence of contemporary science and how to find a new renaissance
Judging from news headlines, science is proceeding at breakneck speed. From Mars rover landings to RNA vaccines to artificial intelligence to electric cars and more, we are living in a period of ever-increasing scientific knowledge and power. Yet if we rise above the hype to the 30,000-foot level and survey the achievements of the last 200 years, we find that the last 40–50 have actually been lackluster, a mere parade of toys compared to the world-changing convulsions of previous generations.
Culture, art, politics, economics, demographics, standards of living: all have been in a period of gridlock and slowdown for decades when compared to the upheavals, revolutions, and steady post-war climb that took place in the decades leading up to the moon landing. Movies and television constantly recycle intellectual properties developed during the science fiction and fantasy golden years between 1930 and 1970, to the point that those properties are now the dominant source of revenue in American film. Birth rates in the developed world have been falling ever faster, which may seem like a good thing to those concerned about overpopulation, but falling birth rates tend to produce older, more risk-averse societies primarily interested in holding on to what they have, a recipe for stagnation. Real incomes have likewise fallen, while an elite class of “rent collectors” (e.g., shareholders) vacuums up more and more cash from the economy. Meanwhile, baby boomers hold on to an outsized share of global wealth, ensuring that younger generations lack the capital to make their ideas a reality. Political gridlock is now the norm in America, and in Europe post-war cooperation has given way to rising nationalism amid the EU’s bureaucratic stasis.
This is the basis for New York Times columnist Ross Douthat’s new book, The Decadent Society: How We Became the Victims of Our Own Success, a look at what it actually feels like to live in a decadent society and, for the most part, not know it. In defining decadence, Douthat largely avoids excesses of personal pleasure and instead focuses on how the success of the modern liberal democracies has engendered a kind of global stagnation and repetition, resulting in a decline in innovation, creativity, and advancement. This statistics-laden treatise makes a strong case for the overall decadence of society across a variety of metrics: life expectancy, birth rate, the impact of innovation on individual lives, personal income, the increasing stratification of wealth, and so on. Whereas as recently as the 1990s we could expect steady progress in the developed world across most measures of growth, we now see growth declining. Amid this slowdown has arisen an internet- and social-media-driven corporate and government surveillance state, a system that makes opting out so costly that it neuters any attempt to escape or undermine it.
Science is certainly not immune to the impact of stagnation, and I want to focus in particular on some worrying trends in scientific discovery suggesting that scientific and technological progress has, overall, been in decline for about 40–50 years. I’m not the first to notice this trend. A 2018 article in The Atlantic described a survey showing that, particularly in physics, the impact of scientific discoveries, as judged by Nobel Prize-winning work, follows a pattern of peak and decline, with peaks in the 1920s and 1960s. From the 1980s onward, most Nobel Prizes in physics have been given out for discoveries made in the 1970s and 1980s! (There are exceptions, of course, such as the awards for the LIGO gravitational wave detection and for graphene.) Increasing risk aversion on the committee, which wants evidence to mount for decades before awarding the prize, accounts for some of this effect, but the relative paucity of noteworthy achievements in recent decades has contributed as well.
While it is difficult to judge from the Nobel committee’s whims whether science has stagnated over the last 40 years, another metric is how much scientific discoveries now impact everyday life. The vast majority of technology is based on discoveries made in the 18th and 19th centuries in electricity, fluid dynamics, heat transport, the discovery and classification of the elements, and so on. Medicine relies on more recent discoveries in genetics and molecular biology, among others, but partly that is because many of those discoveries required tools built on earlier discoveries in physics and chemistry! A few technologies, such as GPS and lasers, depend on general relativity and quantum mechanics, but both rest on fundamental discoveries made by Albert Einstein in 1915 and 1905, over a century ago, and the technologies themselves are more than 50 years old. LED lights are a similar example. Technologies based on more recent discoveries never seem to make it to market.
Indeed, technologically, our homes and lives are largely the same as they were in the 1950s, except for the internet and the constant connectedness it brings. With internet, phone, and social media addiction now a serious problem, alongside privacy concerns, these technologies seem to bring far more problems with them than washing machines and vacuum cleaners did in the 1930s. Thus, rather than radically changing our lives, modern technology sucks our lives into a simulacrum, convincing us that life isn’t the same old, same old because our virtual lives continue to change.
Meanwhile, artificial intelligence, a technology that has for decades promised to change our lives as completely as the steam engine changed the 19th century, seems perpetually out of reach. Anyone who has used a virtual assistant lately can attest that artificial intelligence is anything but, and there are signs that, rather than approaching the Singularity, AI has reached the end of its current boom.
If science hasn’t had the outsized impact on our lives that we think it should, that isn’t necessarily a sign of stagnation. After all, it takes time for scientific ideas to progress from theory and experiment into useful technology. Newton’s theories of motion were moderately useful when he developed them but only became technologically important much, much later.
After all, in the last decade we measured gravitational waves, found the Higgs boson, catalogued numerous exoplanets, including potentially habitable ones, and developed vaccines so rapidly that they could be deployed only a year after the disease they targeted was discovered! Surely that is a sign of great progress.
While all this is true, it is also true that these were rare events, and that rarity is itself a sign of stagnation. It is not that progress halts altogether; it is that substantial advances become less common. Consider that from 1900, when research into quantum mechanics began, until the measurement of the W and Z bosons in 1983, new particles were discovered about once every four years. In the past 38 years, only three major new particle discoveries have been made (not counting composite particles, composite antiparticles, and quasiparticles): the top quark, the tau neutrino, and the Higgs boson. And indeed, as far as the Standard Model of particle physics goes, we have discovered all the major particles. Anything left is either composite, quasi, or hypothetical.
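As a rough back-of-the-envelope check (the count of roughly twenty discoveries in the earlier era is my own ballpark figure, chosen to be consistent with the four-year rate quoted above, and the later period is dated to 2021, 38 years after 1983):

\[
\frac{1983 - 1900}{\sim 20\ \text{discoveries}} \approx 4\ \text{years per discovery},
\qquad
\frac{2021 - 1983}{3\ \text{discoveries}} \approx 13\ \text{years per discovery}.
\]

On that crude accounting, the pace of fundamental particle discovery has slowed roughly threefold.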
This slowdown isn’t only in experiment, where access to funding is critical. Theoretical physics, which at best requires pencil, paper, and time, has also stagnated. 1984, besides being the ominous setting of George Orwell’s dystopian masterpiece of the same name, was also the year that String Theory began to coalesce into a serious contender for a Theory of Everything. And much like the promise of AI, String Theory has fallen on its face multiple times, only to be resurrected with new promises and disappointing progress. Nothing the theory predicted, in any of its incarnations, has ever been observed, and yet the theory persists. Other candidates for a quantum theory of gravity either do not come close to its sweeping scope or have even more problems.
One of the hallmarks of a society in stagnation (or, as Douthat calls it, a “sclerotic” society) is that it likes to repeat itself. No matter how often each repetition fails, such a society is too risk-averse, too attached to holding on to what it has, to throw everything out and start over. Rather, it prefers to repeat the successes of the past over and over.
String Theory is a perfect example of this phenomenon. It resembles in many ways the successful particle theories of the past and can be seen as their successor. Because it has great mathematical appeal and because so many have staked their careers on it, much like the beloved franchises that dominate the current movie scene, it is too valuable to throw away, and so, no matter how much it fails to change, it continues to attract real dollars and real lives.
Ironically, even complaining about String Theory has become repetitive. Fifteen years after Lee Smolin’s popular critique of the theory, The Trouble with Physics, such complaints cry out for real change and action, not continual recycling of the same criticisms alongside darling alternative theories that are themselves now quite old and no closer to the prize. Meanwhile, entrenched String Theorists such as Michio Kaku are free to ignore the complaints and continue to paint their theory as having “solved it all” without a shred of evidence. When scientists start behaving like leaders of opposing political parties, you know we have a problem.
Even when Smolin’s book came out, many complained that his title was far too expansive. Just because there are serious issues with fundamental high-energy particle theory does not mean there is trouble with all of physics. Condensed matter, solid state, plasmas, fluids, astrophysics, and the rest of the vast array of subdisciplines are humming along just fine. Or are they?
It is clear that there have been incremental improvements in all these areas, and the discovery of graphene and its unique properties was certainly a huge achievement. But consider: plasma and other kinds of nuclear fusion are not yet powering our homes; computer makers, struggling against the limits of solid-state physics to keep shrinking transistors, have resorted to parallel processing to continue increasing processing power; we still don’t understand turbulence in fluids; graphene, despite its uniqueness and a decade of promises, hasn’t much impacted daily life; and there is no clear pathway to solving any of these problems. I could go on.
Again, failing to solve outstanding problems doesn’t by itself indicate stagnation, but the lack of new ideas for solving them, even wrong ones, does. We may laugh at videos of early attempts at flight, but better to have a creative idea that doesn’t work than to keep trying variations on the same failed ideas. Despite the enormous increase in the number of scientific papers published in the last few decades, actual progress has grown little and seems, if anything, to have decreased. People aren’t publishing because they had a new idea. They are publishing because they want or need (for career reasons) to see their name in print.
Even people whom society sees as technology disruptors, like Elon Musk, largely are not. Musk is best known for two technologies: spacecraft and electric cars. Spacecraft, of course, go back to the 1940s, while electric cars are even older. While Musk has certainly popularized these technologies and taken them in new directions, they are not game changers like the steam engine, the airplane, or the computer. What person, company, or institution today is a true disruptor, creating technologies that are genuinely new and not improvements on previous designs? The last major one, Steve Jobs with the introduction of the iPhone, was a blip in a sea of sameness.
Imagine now an era when something like the iPhone was being released every few years, completely changing how society functioned. That was how life was in the 19th and early 20th centuries. Since then that frontier has closed and been fenced off as we try to milk each patch of it for all it is worth.
If you believe, then, that science and technology are in a period of stagnation (or at least a long lull), how did we get here? Answering that question requires looking at society as a whole.
One theory is that all the easy ideas are taken and we are now in a “business as usual” period of refining what we know until someone comes along with an idea so game-changing that it makes a whole new raft of ideas possible. This is Kuhn’s thesis in The Structure of Scientific Revolutions. The introduction of the telescope and microscope in the 17th century, water pumps and steam power in the 18th, electric dynamos in the 19th, and electronics and flight in the 20th made it possible to change the world rapidly, but eventually the easy ideas dried up. The period from about 1800 to 1950 was one of unusually rapid growth and change, and we are simply in a slower phase now. And maybe that is a good thing: society needs to catch its collective breath.
This cheerful assessment may be true, but the problem could also be one of demographics. People have been getting older and having fewer children in the Western world as well as in parts of East Asia. No large liberal democracy is producing children above the replacement rate. Fewer children means fewer young people, and the burden of making discoveries thus falls on older adults, who are naturally more risk-averse. Indeed, the age at which the average Nobel laureate made their prize-winning discovery has risen over the years from about 37 to 47. Partly, scientists may need more time to learn their field because there is so much more of it than before. But partly, scientists are simply older in general, retiring later and starting their own research later.
The average age at which a person earns a doctorate in science, the gateway to studying science in the modern world, is about 31, roughly the same as it was in 1958–59. The difference is that, whereas those Ph.D. earners would often go straight into a tenure-track position at a university, the doctorate is now merely the first step into one or more postdoc jobs before one is judged “qualified” to hold a permanent position. Indeed, while the majority of doctorate earners in 1958 went into university positions, now it is the opposite, with about 75% leaving academia for industry. As universities steadily eliminate permanent positions in favor of temporary ones while the number of people seeking Ph.D.s increases, the result is fewer and worse jobs for more people. The age of faculty has risen, and among those with tenure or on the tenure track it is higher still, with a median age of 49.
While birth rates are one culprit, another is simply shifts in university employment policies that have skewed the demographics of research older. The lifting of mandatory retirement, the elimination of permanent positions, and the growing number of steps to a permanent position (often multiple postdocs) all contribute. While you could argue that younger people still participate as students and postdocs, they are largely at the mercy of their advisors and employers, and thus their ability to innovate goes only as far as their superiors allow.
While age has its benefits, it also has its problems: older people are less likely to innovate, more likely to hold on to or improve upon what they did in their younger years, or to give up research altogether in favor of teaching, writing, and mentorship.
If we look outside academia to corporate research, we find that innovation is no better and perhaps even worse. Bell Labs is long gone, and the other corporate research centers are mostly gone as well. A few tech companies, such as Google and Microsoft, buck the trend by maintaining research labs, but these labs are confined to the information economy, and they typically poach top researchers from academia rather than hiring people out of school and training them. Thus, they inherit academia’s problems.
If we want to find a more Edisonian-style research lab, we can look at federally funded research centers, which, in the United States, largely run on funding from the Department of Defense and the Department of Energy (the latter mainly focused on nuclear stockpile maintenance). Here, immediate federal priorities dominate, with some spending on pie-in-the-sky ideas but little to benefit the average person. Similar institutions are even rarer outside the U.S. and just as constrained to produce something of tangible benefit for the dollars, pounds, euros, or yen injected. In other words, funding sources are too risk-averse to allow much dreaming.
Outside of large institutions, the small business and start-up scene may be where all the real innovation is happening, and that seems to be the case as far as it goes. Indeed, behemoths like Google prefer to buy innovation rather than produce it, swallowing up any would-be competitor. Yet, for all the money invested, start-ups fail more often than in past decades, and, as recent scandals such as WeWork and Theranos show, the temptation to pretend to be a game changer rather than actually be one is ever present. Unfortunately, the start-up scene, like Google and Microsoft, is largely contained within the information economy. We see little disruptive innovation in real nuts and bolts, where tangible effects outside our ever-present screens might be felt. Software is easy and hardware is hard, as the saying goes. Certainly there is little incentive for start-ups to do their own science, and much of what they do is redesign well-worn concepts, that is, take existing ideas and try to make money off them.
So far, we have seen that institutions are failing to innovate because they are risk-averse, with age as one possible reason, but there is more to it than that. In an age of necessity, age might not matter so much, but ours is not an age of necessity; rather the opposite. The reason all our institutions are so risk-averse, Douthat argues, comes down to our decadence.
In science, I argue, that is as much the case as anywhere else. We are so flush from the successes of 20th-century science that all we wanted to do by the end of it was repeat those successes. If we could unify two forces (the electromagnetic and the weak), why not all of them? Hence the monster group and other group-theoretic programs. Why not matter as well? Hence supersymmetry and String Theory. If particle accelerators helped usher in that wave of discovery, build bigger ones. Bigger telescopes helped us study the origins of the universe, so build even larger ones, launch them into space, or even put them on the moon. These are not necessarily bad approaches to expanding knowledge, but they are not new or creative. Yet almost all the resources earmarked for scientific research pour into these supposedly safe bets, even as funding for basic research declines.
There is hope, however, from the few risk takers and visionaries left. Gravitational wave detectors, first funded in the 1980s but only becoming real in the early 2000s, managed to defy the odds and secure substantial funding. (If the NSF had not been willing to take a substantial risk and fund this new kind of idea, it would never have happened, and we would perhaps have ended up with more particle accelerators or telescopes.) New tabletop experiments in quantum information, turbulence, and thermodynamics may be able to push past the endless repetition without needing significant funding. New theories of quantum gravity that are truly different may be able to break through the malaise of the String Theory stalemate.
There are a few developments that could jump-start a new renaissance. The first is a catastrophic event that forces us to change how we do science. The COVID-19 pandemic, for example, forced us to take big chances on vaccines. The development of RNA vaccines and their huge success is a lesson in how necessity can be the mother of invention. A similar push led to the Manhattan Project and the atom bomb, a dubious invention but certainly something brand new. It is, however, hard to imagine the same urgency arising in most basic research.
Another possibility is that competition between nations, such as between the liberal democracies of North America, Europe, and East Asia and totalitarian regimes such as China and Russia, could ignite a Cold War-style arms race that forces both sides to think more creatively. Such competition led to the space race and, ultimately, one giant leap for mankind.
Military research into aerospace and sensors may spill over into basic research, and space exploration (and exploitation) and energy research may all benefit, but historically basic science has progressed best when nations cooperate.
A third possibility is an invention with such a profound effect on all aspects of life that it creates a new renaissance. For example, in the same way that steam power replaced, supplemented, or enhanced human and animal power in the industrial revolution, true artificial intelligence could replace, supplement, or enhance human intelligence. As scary as that may sound in terms of job losses and ethical issues, it would profoundly impact how we human beings organize our society socially, economically, and culturally, and perhaps make many currently out-of-reach innovations a reality. Scientific research and pure mathematics already use AI, but, unfortunately, current AI is little better than a data analysis tool: not so much a steam engine as a winch with a counterweight, requiring huge human investment in data curation before it can do anything useful. There are no thinking machines.
I could be wrong, but none of these prospects seems about to happen. As for what we can do to escape this malaise in the meantime, the key is to stop focusing on past successes. Stop trying to repeat what was done before. Instead, ideas must be bold and different, and failure, even massive failure, must be tolerated. People have to be willing to invest, not in hype, but in truly world-changing ideas. We must seek out new frontiers, not simply find new ways to exploit the fenced-in ranches we have built on the old ones.