The algorithms have taken over science
How the attention economy is stifling scientific innovation
As a student I would often spend hours in the library. With a study carrel to call home, I would stroke the dusty tomes in monastic glee. Many of them were out of print and long forgotten; I would flip through their pages, smelling the must, as if uncovering a Pharaoh's tomb. Those days are long gone now, as I do virtually all my research on Google Scholar, a handy tool but one that invariably favors well-cited and popular books and papers.
In the quest to collect all the myriad forms of data and make them available to scientists today, it seems we may have lost something important: the ability to discover lost knowledge. Nowadays, lost knowledge, despite being literally at our fingertips, often stays lost because of a popularity contest that ranks all ideas against one another by how much attention they have already received.
A recent article in The Atlantic claims that America, and I would add Europe as well, is running out of ideas after a fantastic 300-year run, and it suggests that media is responsible. The innovation-based economic growth that lasted from Isaac Newton's day until about the 1970s has given way to an attention-based economy.
The upheaval of the past 300 years cannot be overstated. Albert Einstein was born in 1879, at a time when there were few cars, no airplanes, the telephone had only just been invented, and electricity was uncommon; Edison invented his electric light bulb that year. There were no radios, no TVs, and certainly no nuclear weapons.
By the time he died, in 1955, all of these things existed, as did the first electronic computers. The years when he was publishing his famous theories on relativity and quantum mechanics were among the most innovative in the history of humanity.
Going back to the start of this fantastic age, Isaac Newton was born in 1642 (on Christmas Day), just as the English Civil War between Puritans and Royalists was beginning. He was born at a time when people were still fighting wars in Europe over religion. There were no pocket watches, no pendulum clocks, no calculating machines, no steam pumps, no barometers, no pianos, no reflecting telescopes, and no mercury thermometers. By the time he died in 1727, all of these things existed, and Europe had moved on from wars over religion. Again, this was one of the most innovative times, although perhaps not as extreme as the late 19th and early 20th centuries.
Now this once-mighty gunpowder keg of ideas and intellect has been wetted by the rising waters of viral trends and popular memes, and what started with good intentions is sinking the modern world into a sea of stagnation.
The arts, music, movies, and business are all running on fumes in a popularity contest that has a roadside breakdown as its ultimate prize. From nostalgic movie remakes to a 50-year decline in new startups to endless back-and-forth copying in the music industry, innovation is dying not because it doesn't exist (we are no less creative than before) but because it is being drowned out in an ocean of well-loved sameness.
Even the recent infrastructure bill that passed the US Congress, while badly needed, mainly repairs crumbling roads and bridges built in a headier time, with little funding for innovative transportation and freight options.
We have gone from being a nation of trailblazers, building the likes of the transcontinental railroad and the Panama Canal, to one with little stomach for new pathways.
In the sciences, my area of expertise, the metrics for success are paper citations and, to a lesser extent, awards. These are, of course, not measures of innovation but of attention.
Indeed, if you have ever been jealous of somebody for getting an award or for how many followers they have, you are jealous not of their success but of how much attention they are getting.
Few can get quite so jealous as academics.
It wasn’t always this way. The change started in the 1970s.
So what changed in the 1970s? Well, in the 1960s, thanks to innovations in mass education, the federal government poured money into higher education. Student enrollments soared, faculty hiring ballooned, and a steady, growing stream of funding expanded research programs.
On the face of it this all seemed good, but the sciences themselves immediately became subject to the attention economy.
With more scientists, it was harder to get attention just by doing your job, so you had to produce more and cut out any work that wouldn’t pay off. The low-hanging fruit was all snapped up quickly, so people had to invent new problems to solve. The best way to do that was, counterintuitively, to work in safer and more crowded areas of research. These crowded areas were like big cities: plenty of opportunities for their citizens, even if each had only a tiny space to live in. Scientists would eke out a small domain within a subtopic, call it a “research program,” and churn out papers, encouraging their students to do the same.
Those who chose to stay “out in the country,” in risky or unpopular fields, had to do everything with little help or interest from others. Innovations are rarely impressive at the outset, so these mavericks couldn’t count on some spectacular Eureka moment to wow the city dwellers. Just look at the Sun-centered model of the solar system: back when Galileo was being put under house arrest for it, it was absolute garbage as a theory, with wrong predictions all over the place. These simple farmers of knowledge would instead see their funding dry up from lack of interest. No funding, no students, and no offspring to grow any innovations into something truly worth paying attention to.
Funding, job security, pay, tenure, and promotion all depended on not innovating too much, just enough to get attention. Any lack of originality could be covered over with sweeping claims about what the research might lead to.
Contrast this with an earlier age. Isaac Newton published virtually nothing until he was in his 40s, and only then because Edmond Halley (of the comet) talked him into it. A negative review of his theory of colors (which was correct and revolutionary, creating the modern understanding of the rainbow) upset him so much that he refused to publish his Opticks until after the reviewer had died, decades later.
With so few peers, Newton could still attract attention, but in today’s crowded field one wonders whether anyone would pay attention to his Principia, which introduced so many new concepts to physics, such as gravity and his three laws of motion, let alone go out of their way to help him publish it, as Halley did.
Likewise, Einstein’s ideas, his truly revolutionary ones on space and time in particular, attracted so much skepticism and negative attention in his time that his 1921 Nobel Prize explicitly stated it was not awarded for relativity. He was lucky his career was safe because of his other discoveries. If he had had only relativity, he might never have gotten a job in academia, let alone a Nobel. Today, those fruitful but more down-to-earth ideas would most likely have been plucked already, and only the maverick ideas would be left to him.
People like paying attention to ideas that already interest them. We want to read about innovations that might be valuable to us or that we have a special understanding of. That naturally excludes anything that is outside our own understanding. Yet anything genuinely groundbreaking is going to be difficult to understand, hard to believe, and in need of a lot of work to make it competitive with the existing marketplace of older ideas.
In a scientific marketplace glutted with information, we have no need to pay attention to anything that doesn’t directly concern us. Each tiny subfield has its own regular workshops, conferences, and journals. We have become so overloaded with cheap knowledge at our fingertips, knowledge that appeals to our own expertise, that we have decided not to bother sifting through the rubbish heaps of bad ideas to find the pearls of great price, let alone spend the time polishing them as they deserve.
This is not to say we need to give every crackpot idea credence. There are still standards separating good science from bad science or non-science. Rather, we need to stop giving attention to the things that already have attention. Stop letting algorithms decide what we read. We need to stop looking to the most-cited papers for inspiration. We need to be willing to take risks and explore ideas for their own sake, not because they incrementally build upon the work of some lauded scholar.
We need to get lost in a library again.
Stagnation arises not from a lack of innovators but from a system that prizes sameness over uniqueness. From Google search results to citations to Rotten Tomatoes scores, the algorithms now decide what we pay attention to. But how do the algorithms decide? They decide based on what we or others have liked before. More of the same.
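To make that feedback loop concrete, here is a minimal sketch in Python. The data, numbers, and the rank_by_attention helper are all invented for illustration; no real search or recommendation engine works exactly this way, but ranking purely by past attention behaves roughly like this:

```python
# Toy model of attention-based ranking: whatever already has attention
# gets shown first, gets read more, and accumulates even more attention.
# All names and numbers here are made up purely for illustration.

papers = {
    "well-cited incremental result": 5000,  # attention already received
    "obscure but original idea": 3,
}

def rank_by_attention(attention):
    """Sort items purely by how much attention they have already received."""
    return sorted(attention, key=attention.get, reverse=True)

for year in range(5):
    top = rank_by_attention(papers)[0]
    papers[top] += 100                        # the top result gets read and cited most
    papers["obscure but original idea"] += 1  # everything else barely grows

print(rank_by_attention(papers))
# The already-popular paper stays on top, and the gap only widens.
```

Run it for as many rounds as you like: the ranking never changes, because the only signal it consults is the attention that has already accumulated.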
To enjoy what is different means veering off the beaten path: watching that low-rated or unrated movie, reading that poorly cited paper, picking up an unreviewed book by an unknown author, and maybe taking a risk yourself.
There is no shame, after all, in not getting attention for your work. Although we know attention pays, eventually it all passes away. Is it better to do something that gets attention now, or to take a risk and do something that may affect the course of human history?