The ethics of developing intelligent weapons
Fear of Artificial Intelligence didn’t begin with Terminator 2 and Skynet. Consider that the word robot was coined in the 1920 Czech science fiction play Rossum’s Universal Robots (R.U.R.). In this play, robot workers revolt, killing their human operators and eventually wiping out the human race.
In fact, the general portrayal of robots as murderous led Isaac Asimov to invent his famous Three Laws of Robotics. He believed that if such ethical laws were integrated into robot brains, robots would be unable to harm human beings (at least intentionally) and could become helpful partners to human civilization.
Asimov made no secret of his anti-war stance. In a 1985 interview published in the Toronto Star, speaking about Reagan’s proposed “Star Wars” Strategic Defense Initiative, he said:
I have as my theme that violence is the last resort of the incompetent. In other words, a good leader gets his way without war.
Asimov frequently explored how his First Law,
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
could be bent in various ways. In one novel, disturbingly, robots had their definition of “human” altered so that only certain types of people counted as human.
The ethics of using AI in warfare have taken on new urgency with the war in Ukraine, where Russia has introduced advanced new weapons. So far, AI has primarily been used for facial recognition, identifying Russian agents, soldiers, and the dead. But behind the scenes, AI is likely also being used for target recognition and data analysis.
The Ukraine war may be a tipping point for AI in war. Drones already operate with near-full autonomy, save for the decision of when to drop their bombs.
One of the bigger products of the war is the enormous quantity of data it generates. Whenever a war involves a major power like Russia, there is a data bonanza. In the old days, that data would have been fed to human analysts; now it is being ingested into Artificial Neural Networks (ANNs), which are notoriously data-hungry. Both NATO and Russia are feeding that data into intelligent systems in real time to generate so-called “actionable” intelligence.
What this means is that AI will increasingly be used in future wars.
Because of the growth of AI, the US Department of Defense (DoD) developed a set of guidelines in 2020 for the ethical use of AI in war. These guidelines were developed in consultation with AI experts in industry and name five things that military AI must be: responsible, equitable, traceable, reliable, and governable.
These ethical guidelines require that human beings maintain a chain of accountability over AI, that AI not exhibit inappropriate biases, that an AI’s development be traceable back to its original intent and requirements, that AIs act as expected at all times, and that AIs not exhibit unintended behavior.
These are not abstract guidelines; they impact development at every level. Nor are they significantly different from the guidelines applied to other kinds of technology. Long before AI existed, poor engineering claimed lives when designs failed at their intended purposes and the engineers behind them failed to take responsibility, not only in warfare but in everyday applications. Disasters from the Titanic to the Tacoma Narrows Bridge to the Hyatt Regency walkway collapse might have been prevented by ethical guidelines such as these, which is why such guidelines usually become law after a disaster. It is likewise only a matter of time before an AI-related disaster forces industry guidelines to become law.
The DoD guidelines appear to lack guidance on how AIs should be used in war. They focus primarily on maintaining human control and accountability at every stage of an AI’s use and development. Nothing in them says that a self-replicating, adaptive kill machine cannot be developed. The primary concern for the US Armed Forces is that, on the battlefield, the commander’s intent and the chain of command are respected. Thus, like any soldier, sailor, marine, or airman, an AI must not go rogue or misinterpret orders.
Much like nuclear weapons safeguards, the intention is not to stop certain types of weapons from being created, but to prevent accidents and avoid unintended consequences. If a commander, even the Commander-in-Chief, chooses to “cry havoc and let slip the [robot] dogs of war”, there is nothing in these guidelines to stop him or her.
It is clear, therefore, that the US military’s approach to controlling AI is to hold it to the same standards as any other weapon or piece of equipment, with the added caveat that, because it can behave in unpredictable ways, AI needs additional scrutiny.
One question is whether there should be limits on what kinds of AIs are developed. Should an AI be allowed to kill without a human making the ultimate decision?
It is important not to anthropomorphize AIs here. They cannot make decisions. They can only be used, like any other weapon. And like any other weapon, they can have unintended consequences when the humans wielding them make mistakes or do not follow their own ethical rules.
Indeed, while the prospect of Artificial Intelligence “killer machines” seems scary, they are precisely the opposite of the scariest weapons of war yet developed. What makes a weapon ethically problematic is precisely its lack of intelligence, which makes it indiscriminate. Biological, chemical, and nuclear weapons come to mind, but also cluster bombs and land mines. These weapons are terrifying, and typically banned, because they have no precise targeting mechanisms and no means of distinguishing combatants from civilians.
Given this, the only way for AI to become indiscriminate is to either escape human control or be deliberately programmed that way.
Let’s look at the first problem: can AI get out of control?
The issue of whether AI can become self-aware and decide to kill human beings is irrelevant here. As far as we know, an AI has no more accountability than a virus, and a virus is deadlier than any self-aware being. The problem is not self-awareness, nor decisions to kill, but the possibility of uncontrolled killing.
With AI, getting out of control covers a wide range, from exhibiting unwanted biases (such as racial biases) to self-replicating nanobots running amok.
Human beings have a poor track record of keeping technology and its byproducts under control. From microplastics to Africanized “killer” bees to industrial pollution, we have shown a tendency to ignore problems until it is far too late and, when problems do become extreme, to apply solutions unevenly.
The US military has been guilty of letting its own arms get out of control. For example, weapons funneled to the mujahideen in 1980s Afghanistan, such as Stinger anti-aircraft missiles, were later used against US forces across the Middle East. There is no guarantee that US and other NATO weapons sent to Ukraine will never be used against us either.
Despite all the guidelines on AI development, AI can also be hacked and tricked, so assuming that we will maintain control forevermore is naive.
Perhaps the worst example of AI getting out of control is the potential for AI-based computer viruses and other malware. With no real barriers to self-replication, the threat to global computer systems is very real.
Therefore, the answer to whether AI can get out of control is yes.
The second question: can AI be deliberately programmed to be indiscriminate? The answer is clearly yes. Not only can AI be made indiscriminate at killing, it can be made disturbingly discriminate, which, in the case of oppression and genocide, is worse.
Thus, despite guidelines on development, there is no guarantee that AI will remain under the control of ethical people. The greatest risk from AI is in fact not that it would wrest itself from human control but that it would fall into the wrong hands.
This leads us to ask whether it is ethical to develop AI for warfare at all. Should it be banned like chemical and biological weapons?
Let us suppose, for example, that the major powers, the NATO nations, China, Russia, and so on, all came to the unlikely conclusion that AI in war should be banned by treaty, along the lines of the existing bans on biological and chemical weapons. Would there be a downside?
I think we can all agree that not developing biological weapons has no downside. There is nothing good about plague.
This is not so clear with AI. Fundamentally, AI is software, and everything needs software. Military weapons systems are filled with software of all kinds. If we do not use AI software, we will use something less intelligent and less adaptable instead. Is that a good thing? I argue no, because the whole point of AI is to make software more flexible and capable, and AI is leading all complex software in that direction right now. In a sense, AI is less something fundamentally new than a stage in the evolution of software. There are certainly good reasons to use it to make sensors smarter, to sift through large quantities of data, and to make smarter targeting decisions that avoid mistakes.
At the same time, there is still the “killer robot” problem. Perhaps this is where we draw the line, since killer robots remove any sense of personal accountability from the battlefield. But the downside to not using such robots is that one must risk humans in their place. Would you want to tell a mother or father that their son or daughter is dead because it would have been unethical to send a robot into battle in their place? I cannot draw the line there either.
The reality is that nations will never agree to stop using AI in warfare so perhaps the point is moot. We cannot unilaterally choose not to develop such weapons or we would risk utter defeat in future wars. It may take AI to defeat AI. This brings us back to the only ethical choice: maintaining tight control and safeguards on these weapons while we continue to develop them, hoping they will not be used but knowing they will.
While every engineer and scientist should be free to choose not to work on such weapons, I think it is hypocritical for experts to demand that no one work on them and then, when the next war comes around, demand that something be done, all the while the adversary has been working away at AI with few ethical constraints. Idealists should at least be consistent, demanding pacifism in peace and war no matter the cost.
Likewise, comparisons between AI and nuclear weapons are ignorant and fail to recognize the fundamental differences between the two. A nuclear weapon is an indiscriminate destructive device. AI in a weapon could do many kinds of things. AI is thus a much more abstract category, even when used in warfare; it is little different from saying “software” or “computers”. How can you control something like that? The answer is that you have to consider what you are using it for and whether that use meets ethical standards. Regardless of whether other nations meet those standards, the United States has an obligation to a higher standard, yet it must be pragmatic in its treatment of AI and not be blind to the dangers of falling behind.
In investigating AI in warfare, I have come to a more nuanced viewpoint: AI is, in itself, not really controllable, because it is too abstract a concept. The problem ultimately comes down to outcomes: does the introduction of AI result in greater loss of life and property than not using it? My hope is that we will use AI to prevent loss of life and even to prevent war itself. Even if that turns out to be naive, we are going to have to get used to the idea of AI in war, because I don’t see either going away any time soon.