Longtermism is repackaged utilitarianism and just as bad
What Fyodor Dostoyevsky has to teach us about allowing present suffering for future harmony
Longtermism, if you aren’t familiar with the term, is the philosophy, promoted by the Oxford philosopher Nick Bostrom, that our primary ethical obligation as a species is to ensure a post-human future for countless sentient beings. All moral questions are thus reduced to existential risk: what will ensure that this post-human future comes about?
If you aren’t familiar with Bostrom’s work, he is also responsible for the Bayesian (probabilistic) argument that we are all living in a computer simulation. I don’t think much of this argument either, but at least it didn’t have powerful moral implications. Longtermism does.
Longtermism is part of Bostrom’s ethics, which he calls effective altruism. Sadly, effective altruism is a special case of utilitarianism — the idea that right and wrong are determined by whatever does the greatest good for the greatest number of people.
According to Bostrom’s predictions, humanity, if it survives the present epoch, will go on to a post-human future in which conscious beings live lives of plenty and pleasure inside elaborate computer simulations. If we successfully colonize our local region of the universe, this could amount to trillions upon trillions of conscious minds. Compared with those numbers, Earth’s present human population of about 7.7 billion is a rounding error.
This is where the utilitarianism comes in. Our ethical obligation, according to longtermism, must be to those future people. In that case, altruism is “effective” because it relates to the greater good of benefiting them even at the expense of people living now.
This kind of moral theory, like Ayn Rand’s Objectivism in an earlier era, is popular with the billionaire class because it justifies their obscene wealth and hoarding. Rand saw selfishness as a great good and selflessness as an evil, dubiously redefining right and wrong; Bostrom instead redefines the economics of moral action. As long as billionaires are doing something to ensure the technological future and the welfare of those future denizens of the cosmos, they are solidly ethical no matter what their current practices are. Indeed, wealthy entrepreneurs such as Peter Thiel and Elon Musk have given money to Bostrom’s group at Oxford, perhaps in the hope that his philosophy will justify their focus on technology at the expense of ordinary people who can’t afford their products and services, or the products and services they invest in.
Another tenet of effective altruism is that you can offset your moral failings by giving money from your ventures to charity. Thus, rather than finding ethical ways to obtain wealth, it is deemed morally acceptable to buy forgiveness for your sins. This dualistic attitude toward good and evil, in which morality is simply a matter of balancing the scales, reduces atonement to a mathematical equation.
The nineteenth-century Russian author Fyodor Dostoyevsky spoke to this mathematical attitude when he wrote,
men love abstract reasoning and neat systematization so much that they think nothing of distorting the truth, closing their eyes and ears to contrary evidence to preserve their logical constructions.
Moral arguments such as effective altruism and longtermism ignore the slippery slope that leads to the worst kinds of evil. It was precisely arguments like these that produced the atrocities of the twentieth century in Nazi Germany, Leninist-Stalinist Russia, Maoist China, and elsewhere, all in the name of benefiting the most people in an imagined utopian future. And it was precisely to counter such arguments that the United States Constitution got its Bill of Rights. Rights are fundamentally opposed to utilitarian arguments because they guarantee an ethical obligation to a minority, even to a single person, at the expense of the majority.
Dostoyevsky criticized longtermism, long before it was invented, in his masterpiece The Brothers Karamazov. One of the brothers, Ivan, argues that an innocent child should never suffer for the future harmony of the species:
If all must suffer to pay for the eternal harmony, what have children to do with it, tell me, please? It’s beyond all comprehension why they should suffer, and why they should pay for the harmony. Why should they, too, furnish material to enrich the soil for the harmony of the future? … too high a price is asked for harmony; it’s beyond our means to pay so much to enter on it. [emphasis added]
This argument arose because many Christians justified present suffering on Earth on the grounds that God, through Christ, will make all things well in some distant future. That is a caricature of Christian ethics, but it raises an important question about how God can allow children to suffer. Setting the theological question aside, we can ask how we can allow a child to suffer for the sake of a future harmony.
If the sufferings of a single child are too high a price, what of the suffering of a billion children as climate change promises to unleash untold suffering on the world’s current and as yet unborn innocents?
There is no mathematical equation that can justify ignoring or putting off the current crisis, if only for their sake.
If one cannot turn to a logical system such as utilitarianism, in its effective-altruism packaging, to define morality, then what is the alternative?
It all comes down to a misunderstanding of what morality is.
The psychiatrist and brain-lateralization researcher Iain McGilchrist offers the analogy that morals are like colors: an irreducible part of human experience. The eye is, of course, stimulated by light of particular wavelengths, but, as in the old puzzle about the tree falling in the forest, you cannot say that light of a particular wavelength is the full experience of color. Human consciousness must contribute to that experience as well, yet colors are not a thought or something we create. They are a fundamental, pre-representational experience, a meeting between the mind and physical reality. We don’t get to decide what they are.
If this is so, then morals, likewise, are a meeting between the ethical mind and physical reality, not something we invent. Rather, they are “written on our hearts”, driven by the perception of human action, by empathy, and by the awareness of suffering.
Moral theory is not about balancing scales but about acting on our perceptions — obeying the truth and, by analogy, not calling something blue when we can see it is clearly red. It is denial or willful ignorance that causes the most harm, because we not only hurt others, we hurt ourselves. We choose not to see what we already know because we don’t want to give up the comfort of believing that what we are doing, or how we are living, is right. The great failing of utilitarianism, and of its descendant effective altruism, is that they replace intrinsic, irreducible values with mathematical equations.
Moral action, also like color, is immediate to our surroundings. It is not about what will happen thousands of years in the future; it is about what we do today, because we rarely know what the future holds or how our actions will play out for good or ill.
Even if we did know, and everything Bostrom predicts came to pass, I would ask longtermists what Ivan Karamazov asks Alyosha:
Imagine that you are creating a fabric of human destiny with the object of making men happy in the end, giving them peace and rest at last, but that it was essential and inevitable to torture to death only one tiny creature — that baby beating its breast with its fist, for instance — and to found that edifice on its unavenged tears, would you consent to be the architect on those conditions? Tell me, and tell the truth.
If a longtermist answers yes, God help them out of their denial.
As an aside: while post-humans will likely be unable to fix the past, many Christians argue that the end of the world will fix not just the world at that moment but the entire four-dimensional universe, future, past, and all. In that case, arguments against longtermism no longer apply to the Christian notion of rebirth and resurrection, for it is a complete rebirth of the universe as a whole. All suffering at all times is erased. People are somehow “rescued” from that destruction, but their own suffering is also erased. Is suffering on an alternate timeline still suffering? That is a question for a future article.
There are other arguments that do not rely on this assumption and appeal rather to God’s sovereignty as in the Book of Job (that God knows best). I find those hardly comforting even if they may be true.
Dostoyevsky’s answer, through the character of Father Zosima, however, is that God’s redemption is a mystery so powerful that it can overwhelm even the worst human suffering, even the torture of a child. Christ’s suffering, while apparently earthly and human, meets a divine atonement that washes away not only all sin but also melts away all suffering past, present, and future. Rather than the hedonistic freedom, offset by charity, that effective altruism offers, Christ offers a different kind of freedom, one found in prayer, love, and self-discipline, a spiritual freedom beyond any computer simulation.
Whether or not you believe any of this, there is a choice to be made between basing moral decisions on an imaginary future utopia and basing them on what we see in front of our faces.
And I find it bizarre that a little techno-babble and some dubious equations can resurrect a brain-dead moral theory. To see through it, we need only stop denying what is going on around us and recognize our obligation to this world, here and now.