Analysis
Killer robots and the Rubicon of human existence
Ukraine is an R&D lab for machines that kill on their own, and the global powers are already racing to control this next phase of warfare. Technologists themselves are calling for caution. It may already be too late.
There has been much talk recently about a story told by Colonel Tucker “Cinco” Hamilton, the Air Force’s chief of AI Test and Operations and director of the Air Force-MIT AI Accelerator, at a conference in the UK: the account of a (simulated) mission by an autonomous drone programmed to destroy enemy anti-aircraft guns.
In the scenario, when the human operator attempted to manually override its orders, the drone classified the intervention as sabotage of its primary mission and “decided” to neutralize the interference by bombing its own controllers. The Air Force’s quick denial, stressing that this was merely a “thought experiment,” did not prevent the story from fueling the rising tide of worries about the dangers of artificial intelligence and, in this case, so-called “lethal autonomous” systems.
“AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military. We must act with urgency to address this future. The threats we face are not just at our door, they are inside our house. AI is a tool we must wield to transform our nations…or, if addressed improperly, it will be our downfall.” (Col. Tucker Hamilton, interviewed by DefenseIQ, 8/8/2022).
These systems are also called “lethal autonomous weapons systems” (LAWS): deadly weapons that use sensors to acquire targets and algorithms to decide which ones to strike and destroy, independent of human input. In a nutshell, killer robots: the application of artificial intelligence to lethal force, a sci-fi dystopia which, as we are finding out about many aspects of AI, is already here, at least to an extent. Furthermore, according to the Pentagon, it is destined to be a hallmark of the future of armed conflict.
In the words of Maj. Scott Parsons, a professor of ethics at the West Point military academy, interviewed by the Center for Public Integrity in 2021 about exercises in which cadets got to program a robotic tank that made autonomous decisions on which targets to kill: “Our job is, we fight wars and kill other people. Are we doing it the right way? Are we discriminating and killing the people we should be and … not killing the people we shouldn’t be? … That’s what we want the cadets to have a long, hard think about.” He had this to say about how the cadets approached the task: “Some of [the cadets] maybe program too much ethics in there, and maybe they’re not killing anyone or maybe they put just enough in.”
As is well known, in the field of AI the unknowns are greater than for any previous technology. In this case, they concern precisely the impossibility of predicting how systems expressly designed for progressive self-learning (machine learning) will correct themselves in the future, and whether it is desirable to give them the power to kill. In the words of Geoffrey Hinton, one of the fathers of artificial intelligence who now regrets his life’s work: “Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.”
While this is true for “harmless” applications such as ChatGPT, it only highlights the fact that warfare applications are a huge gamble, one that is hard to justify in the name of some kind of “efficiency.” In an interview with DefenseIQ, Colonel Hamilton said: “[You] need to understand that lethal autonomy is an aspect of [the] future. How will we manage the moral implications and the technical backdrop powering and coding those moralities?”
The main unknown is how fast this technology will be able to evolve: it is the first predicted to be capable of learning, improving itself and generating modifications to its own code. Here, the scenarios being discussed range from a soft “take-off,” i.e. a gradual progression to a superhuman level of intelligence, to a sudden spurt.
That is, either the “frog in boiling water” scenario or what researchers call “FOOM” (from the comic-book onomatopoeia of a superhero taking flight). In either case, an evolutionary leap is looming, opening the theoretical door to well-known science fiction tropes that even giants in the field, such as Hinton, now regard as possibly the most suitable paradigms for visualizing the risks involved.
In particular, lethal autonomous weapons cannot help but lead us to consider the “Skynet theorem”: the scenario of an anti-human “rebellion” of the machines, à la Terminator or Westworld. And it’s not just Hinton who believes it’s time for some urgent reflection. In March, more than a thousand industry insiders (starting with Elon Musk) signed a petition calling for a voluntary pause in the development of AI technologies, to allow time to adapt our regulations, since the technology presents “profound risks to society and humanity.”
It may already be too late. Even if the “technological singularity,” the theoretical point at which artificial intelligences will be able to operate with “agency” independent of their creators, has not yet been reached, many experts now believe that the threshold of technological inevitability has been crossed. And, petition aside, no one really believes there will be a pause in AI development.
After all, across human history, ever since the invention of the wheel, one can hardly find cases of societies “voluntarily” refraining from using technologies that are available to them. On the contrary: today, even nuclear nonproliferation treaties seem to be in the process of being scrapped. And the potential danger from AI is being compared with no less than the “existential” threat of atomic bombs – not by alarmed Luddites, but by those who are most familiar with this technology. This week, another petition was signed by 350 experts working in the AI field, containing a single terse sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
The increasingly urgent warnings issued by the likes of Sam Altman, CEO of OpenAI (also among the 350 signatories), have an element of ambiguity, to say the least, given that these are the same Silicon Valley tech monopolists who “preemptively” opened applications such as ChatGPT or Midjourney to the public, and thus made the current escalation inevitable. The fact remains that, a few months after the first generative AI systems were opened to the public, the pervasive feeling is that the gears are already in motion, with radical impacts now guaranteed but not yet felt in our lives, and that it’s a good time to get “seriously worried” (in Hinton’s words).
When it comes to applications of this technology that could, literally, kill us, talking about “extinction” may not be hyperbole. To begin with, it is well established that the three superpowers (the U.S., Russia and China), plus Israel, are already engaged in a “smart” arms race, which each of them justifies, according to the well-known logic of escalation, by the progress made by its adversaries. China, in particular, is generally considered the most advanced country in this race, and the one that has openly set itself the goal of becoming the world leader in AI by 2030. In terms of killer robots, we might already be in the midst of the “Dr. Strangelove” phase.
Furthermore, one only has to look at the role technology has been playing in the Ukrainian conflict, already being fought with kamikaze drones and missiles intercepted by other missiles, to get a foretaste of what the generals at the Pentagon see as the future we need to prepare for: the war of algorithm against algorithm. Secrecy reigns supreme when it comes to military systems, but it would appear from a U.N. report that at least one fully “autonomous” operation pitting machines against flesh-and-blood fighters was carried out in 2020 in Libya, when explosive drones supplied by Turkey to the government in Tripoli allegedly attacked Khalifa Haftar’s retreating militias in the east “of their own volition.” The Kargu-2 quadcopter drones produced by the Turkish firm STM are capable of hovering and autonomously selecting the enemy targets on which to detonate.
Since, as we know, conflicts make excellent testing labs for the latest products of arms manufacturers, the Ukrainian front was bound to become a giant R&D laboratory for the military-industrial complex. The Turkish Bayraktar and Iranian Shahed drones have had their moments in the spotlight, as has the use of remote-controlled nautical drones for sabotage missions. The latter are, of course, also in the possession of the Chinese and American militaries. The Office of Naval Research (ONR), the U.S. Navy’s research and technology arm, has been experimenting for at least a decade with “sea swarms,” squadrons of drone boats autonomously governed by a centralized artificial intelligence that assesses potential enemies and how to respond in naval confrontations. Similar “unmanned” systems are able to control helicopters, drones or land vehicles (the mode is called “fire, forget and find”).
The modern model of combat had already been effectively dehumanized by the use of U.S. drones in Iraq, Somalia, Afghanistan and elsewhere, where Hellfire missiles were fired at “suspected combatants,” no longer by pilots on board, but by remote operators in sterile control rooms at airbases in Nevada. The desensitization of the operators contributed to the proliferation of drone raids, which became a favored tool of geopolitical projection under Obama. But even this remote human input promises to become obsolete soon.
For years, the Pentagon’s Defense Advanced Research Projects Agency (DARPA) has been working on AI systems able to “replace human input” in air combat. As early as 2020, it achieved the milestone of an AI victory over a human opponent in a flight simulator. This February, at Edwards Air Force Base in the California desert, an artificial intelligence autonomously flew a real F-16 fighter jet for the first time.
In each of these cases, the extraordinary mathematical speed with which CPUs can make decisions that may involve the death of human beings is already beyond the threshold of human intervention and understanding. This opens up the issue of robot ethics, which in turn leads us back to speculative fiction, such as the “Three Laws of Robotics” postulated by Isaac Asimov, or the “unwinnable” game of tic-tac-toe the computer plays against itself in WarGames, the prescient 1983 film by John Badham.
Thus, we have already entered the era in which we are entrusting a robot with making the distinction between a rifle held by a fighter and a rake held by a farmer. More than that, we are having to let machines deal with “ethical” problems such as the “trolley problem,” the classic philosophical thought experiment that offers the choice to save five lives by sacrificing one.
In artificial intelligence theory, these issues come back to the concept of “alignment,” that is, the alignment of machines with human goals. But this in turn opens up the problem of who can legitimately define these goals, and of the means that inscrutable machines may employ to achieve them.
As the neural networks and large language models powering the chatbots show, generative AI is “trained” on a huge volume of data. The human knowledge they draw on is that accumulated on the Internet, which is known to include all kinds of spurious, false or distorted information. Artificial intelligences can only mirror this ocean of data and carry its imperfections into their output (somewhat like the sentient ocean in Solaris). There are already documented cases of chatbots inventing facts, “hallucinating” and exhibiting unpredictable behavior, which seem to confirm the potential for “non-aligned” scenarios.
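To make the mirroring point concrete, here is a deliberately crude sketch: a toy bigram text generator, nothing like the architecture of a real large language model, with a handful of invented training sentences, one of them false. Having no notion of truth, only of the statistics of its data, it will sooner or later repeat the falsehood verbatim.

```python
# A deliberately crude sketch of the "mirroring" problem: a toy bigram text
# generator (nothing like a production LLM) trained on a tiny invented corpus
# that contains one false statement. The model has no notion of truth; it can
# only reproduce the statistics of whatever it was fed.
import random
from collections import defaultdict

corpus = [
    "the drone returned to base",
    "the drone struck the target",
    "the moon is made of cheese",   # a falsehood present in the training data
]

# Count bigram transitions: word -> list of observed next words.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def generate(start="the", max_words=8, seed=None):
    """Sample a sentence by walking the bigram table from a start word."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(max_words - 1):
        followers = transitions.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    for i in range(3):
        print(generate(seed=i))
    # Sooner or later "the moon is made of cheese" comes back out verbatim:
    # the model mirrors its data, imperfections included.
```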
At a time when even their creators don’t fully understand the mechanisms by which “original” images or texts are generated, perhaps equipping AI systems with missiles and explosives is not the best of ideas?
Stop Killer Robots, a coalition formed in 2013 that is fighting to spread awareness and organize opposition to autonomous armaments, certainly agrees. And in 2018, UN Secretary-General Antonio Guterres called on all countries to renounce the development of autonomous weapons, saying that they are “politically unacceptable, morally repugnant and should be prohibited by international law.” However, in practice, the timing and manner of all such development has remained in the hands of the military-digital complex and the platform oligopolies.
To sum up, we’re seeing warning after warning come in from the developers of AI technology themselves, and synthetic intelligences are already disrupting symbolic and social orders (see, for instance, the screenwriters’ strike, the first dispute in Hollywood over the creativity and copyright issues posed by AI). With the looming specter of “killing machines” entering into operation, we are about to cross yet another Rubicon, without even having had a serious discussion about it.
Guterres’ appeal has been taken up by only around 30 nations, and never seriously considered by the superpowers. Against this backdrop, human attempts to counter the fait accompli might, in retrospect, amount to little more than rearguard skirmishes while artificial intelligence advances inexorably, working an irreversible change in the nature of reality.
In this sense, killer robots are only the most iconic metaphor for a moment that, as 350 of the “experts” have told us once again, may prove to be a stepping stone on the way to extinction. If not of the species, then at least of the human condition, and of cognition, as we have known it.
Originally published at https://ilmanifesto.it/macchine-assassine-oltre-il-rubicone-delle-nostre-esistenze on 2023-06-04