il manifesto global

Interview

If a robot commits a war crime, who is responsible?

We spoke with a leading artificial intelligence expert about his campaign against killer robots. ‘What I would not want is for the threshold for going to war to be drastically lowered.’

Rachele Gonnelli

Marco Dorigo is an expert in robotics and one of the first scholars of “swarm intelligence,” in which individuals behave coherently in a group without centralized instruction. (Dorigo first described ant colony optimization.) This field of study has vast implications for artificial intelligence, which was the subject of our conversation with Dorigo.
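
To give a concrete flavour of what swarm intelligence means in practice, here is a minimal, illustrative sketch of ant colony optimization on a toy travelling-salesman problem. The city coordinates and parameter values are invented for illustration and are not drawn from Dorigo’s own work; the point is only that simple agents depositing and following virtual “pheromone” converge on good routes without any central controller.

```python
# Illustrative sketch (not Dorigo's actual code): ant colony optimization
# on a tiny, made-up travelling-salesman instance. Simple "ants" build tours
# guided by shared pheromone trails; good edges accumulate more pheromone,
# so the colony converges on short routes with no central coordinator.
import math
import random

CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]  # hypothetical coordinates
N = len(CITIES)
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 1.0           # assumed parameter values

def dist(a, b):
    return math.dist(CITIES[a], CITIES[b])

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % N]) for i in range(N))

pheromone = [[1.0] * N for _ in range(N)]           # uniform initial trails

def build_tour():
    # One ant builds a tour, picking the next city with probability
    # proportional to pheromone^ALPHA * (1/distance)^BETA.
    tour = [random.randrange(N)]
    while len(tour) < N:
        current = tour[-1]
        options = [c for c in range(N) if c not in tour]
        weights = [pheromone[current][c] ** ALPHA * (1.0 / dist(current, c)) ** BETA
                   for c in options]
        tour.append(random.choices(options, weights=weights)[0])
    return tour

best = None
for _ in range(50):                                 # 50 iterations of 10 ants
    tours = [build_tour() for _ in range(10)]
    for row in pheromone:                           # evaporation
        for j in range(N):
            row[j] *= 1.0 - RHO
    for tour in tours:                              # shorter tours deposit more
        deposit = Q / tour_length(tour)
        for i in range(N):
            a, b = tour[i], tour[(i + 1) % N]
            pheromone[a][b] += deposit
            pheromone[b][a] += deposit
    shortest = min(tours, key=tour_length)
    if best is None or tour_length(shortest) < tour_length(best):
        best = shortest

print("best tour:", best, "length:", round(tour_length(best), 2))
```

No robot is told the best route; the route emerges from many local interactions, which is the sense in which the group behaves coherently without centralized instruction.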

Dr. Marco Dorigo, as an international robotics expert you have already signed two scientists’ petitions to outlaw “killer robots”: weapons systems with artificial intelligence that are not under human control, technically known as “human-out-of-the-loop” systems. What makes you so concerned about this?

My concern is more about the future than the present: whether we will truly end up with machines that can autonomously make the decision to kill. I see this possibility as a serious problem, because it could lower the threshold at which one decides to enter an armed conflict, since soldiers’ lives would no longer be at risk.

Another worrying aspect is the issue of responsibility for the decision: if a mistake is made, who will answer for it? As we all know, human beings also make mistakes, but responsibility for them can be traced: war crimes can be prosecuted, and, as a necessary precondition, they are attributable to particular people. Even if all the soldier does is press a button at a remote console, as is already happening, there is still a human decision made in the moment. If we delegate this fully to a machine, however, it becomes much harder to attribute responsibility for its actions. If a weapon automatically decides if, when and whom to shoot on the basis of images collected by its sensors, who is responsible? The algorithm guiding its actions could contain an error, or the context might have changed unexpectedly.

Also, when we program a machine, we are never sure it has been programmed correctly: there are bugs, software errors, as we saw with the Boeing planes, such as the one that crashed in Ethiopia recently. There is no way to guarantee that a software-controlled system will function perfectly, and the risks may well become unacceptable when we are talking about a robot equipped with a gun.

You say “if” these intelligent and autonomous weapons are ever built, but doesn’t the technology to create them already exist? Would it be enough to “flip a switch” to put it into action?

I’m not a military technology expert, so I’m not up to date with the latest developments in the industry. But it’s likely that even the experts in the field only know about a small part of these developments, because the military is not willing to share information about their projects.

For the general public, the worst nightmare scenario is probably that of swarms of drones armed with explosive charges and guided by facial recognition. In fact, your field of study is swarm intelligence, isn’t it?

My research is focused on how to control groups of autonomous robots that must cooperate to perform tasks that they would not be able to perform individually. For example, we are studying how a swarm of robots can search for injured victims in a natural disaster situation, and how they could cooperate to retrieve the most seriously injured.

Isn’t that how killer drones would work as well?

I work in the civil sector, and as I said, I’m not an expert in military applications. That said, killer drones could be very small and equipped with a lethal payload, and they could be difficult to stop, as it’s harder to shoot down a large number of small objects than one big robot, although I believe the military, as always, will develop countermeasures, anti-swarm measures, to destroy them. However, the problem does not lie in the development of technologies that allow large numbers of robots to cooperate with each other. Such technologies can have very useful applications; just think of the problem of coordination among self-driving cars, which will gradually replace our current human-driven cars in the near future. The problem lies rather in the use that is made of the technology, and in the fact that the same technology that allows you to create autonomous systems can, when used in the military field, completely alter our frame of reference. For the first time in history, there would be no human being behind the controls: military decisions would be delegated to a machine.

At the conference launching the “Stop Killer Robots” campaign, one of your colleagues, USPID secretary Diego Latella, explained that with noise interfering with an image, a robot could mistake a school bus for an ostrich, or even a terrorist commando for an ostrich. People are talking about “artificial stupidity”: can it be eliminated?

I’m not surprised at all that a robot can mistake a school bus for an ostrich. The fact is that the cognitive processes that artificial intelligence employs are not the same as ours. It is a problem tied to how these systems learn, and to how they internally represent the knowledge they acquire. As I’ve said, machines, like humans, make mistakes. But the mistakes they make can be of very different kinds, and sometimes, for us humans, these errors seem incomprehensible.
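
As a rough illustration of how a small amount of noise can flip a classifier’s decision (the phenomenon behind the school-bus-as-ostrich example), here is a minimal sketch using a toy linear classifier. The weights, the “image” and the labels below are invented stand-ins, not the actual vision systems under discussion: the point is that a change of a few percent per pixel, invisible to a human, can add up across many pixels and push a score past the decision boundary.

```python
# Toy illustration (invented numbers, not a real vision system): a tiny
# linear classifier whose decision flips under a small, targeted perturbation.
import numpy as np

rng = np.random.default_rng(0)

# 100-"pixel" image and a fixed linear classifier:
# score > 0 means "school bus", score <= 0 means "ostrich".
w = rng.normal(size=100)
x = rng.normal(size=100)
x += w * (1.0 - x @ w) / (w @ w)   # shift x so its clean score is exactly +1.0

print("clean score:", round(float(x @ w), 2))        # +1.0 -> "school bus"

# Nudge every pixel by at most epsilon against the classifier's weights
# (the idea behind gradient-sign attacks). Each pixel barely changes, but the
# score shifts by epsilon * sum(|w|), which is large when there are many pixels.
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)

print("largest per-pixel change:", round(float(np.max(np.abs(x_adv - x))), 2))
print("perturbed score:", round(float(x_adv @ w), 2))  # negative -> "ostrich"
```

A human looking at the two inputs would see essentially the same picture, which is why these machine errors feel so incomprehensible to us.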

Are these incomprehensible errors tied to the work of the programmer, or to the amount of stored data?

The margin of error can be reduced by increasing the amount of data used for machine learning. But I don’t think it will be possible to reduce this margin of error to zero. And the types of mistakes will probably remain very different from those committed by humans, as the machines’ “brains” will probably continue to be different from those of humans. In general, I think the heart of the issue is not the fact that machines make mistakes, but rather whether artificial autonomous systems make more or fewer errors than the human-controlled systems they are meant to replace.

In the very near future, we will have self-driving cars, and many are concerned that machine errors could cause accidents. But there are accidents happening every day—the real question is whether there will be more or fewer of them with self-driving cars. Similarly, intelligent systems for medical diagnosis, using machine learning algorithms able to sift through millions of X-rays, will enable early cancer detection and better targeted and personalized cancer care. Again, what is important in my view is whether their success rate in diagnosis is better or not than that of the best human experts.

So, is technology always good, and, where killer robots are concerned, is the only evil war itself?

At a certain level, every technology, like every scientific discovery, can be used for either good or evil. But it is true that in recent years, say since the Internet era, there has been an acceleration in the development of information technology, an acceleration that makes it difficult even for us researchers to keep up with all the innovations in our field, and which makes it hard to predict what the “evil” uses of a technology might be.

Considered in the abstract, almost all algorithms for cooperation are “dual use,” meaning they are applicable in both the civil and the military field. So far, the machines that are able to learn by themselves have focused on games such as Go or chess; they have very restricted domains of action and are not very autonomous. But the technology is developing, and what I would not want is for the threshold for going to war to be drastically lowered in the future. If you can send machines to the front line, you don’t even need a period of war propaganda to get public opinion to approve of the war; you only need to identify an enemy, and then you just send the robots to the front line.


Originally published at https://ilmanifesto.it/lo-scienziato-dorigo-mandare-in-guerra-i-robot-intelligenti-cancella-le-colpe/ on 2019-04-10
Copyright © 2024 il nuovo manifesto società coop. editrice. All rights reserved.