Feature
The rules of the road – for robots
The Moral Machine is trying to discover how humans make ethical choices in the hopes of training artificial intelligence to do the same. If, say, the brakes go out on a self-driving car or a child crosses the road, can we live with the snap decisions of a machine?
In his 1942 short story Runaround, Isaac Asimov proposed the “Three Laws of Robotics” as basic norms that robots should follow in order to be integrated into human society. According to Asimov, robot ethics should follow these principles, in order of priority: do not harm humans, obey human commands, and preserve the robot’s own existence. In Asimov’s version of the future, all robots are to be programmed according to these principles.
Nearly 80 years later, these laws would barely be enough to program a blender to make sure we don’t cut our fingers or get an electric shock. The robots we’re talking about today are much more complex than that, and they are accomplishing much more impressive things than making a smoothie. The robots we are most eagerly looking forward to are self-driving cars, which we’ll use to go buy apples for our smoothie while never tearing our eyes away from the TV series we’re binging, even for a second.
Even such a banal activity as driving presents moral dilemmas that are very difficult to solve: if a child is crossing the street in front of us without looking, should we risk a head-on collision with a car in another lane to avoid hitting her?
If we’re the ones driving, we rely on what we usually call “instinct” (and then, if we survive, we reason about it under the more elevated term of “ethics”). It doesn’t work all that well, but it gets us out of the house and back. A robot, however, has no instincts at all. The engineer ultimately tells it what to do, and this is called “artificial intelligence.” And we should be the ones to tell the engineer what to do—except that we rely on instinct. That doesn’t mean we act randomly, but rather that we don’t even know explicitly what the rules are. Yet the automakers expect us to make it clear for them. Before buying a car that can drive itself, the customer will want to know exactly how it will behave (think of the last time some novice driver gave you a close call, and your thoughts turned involuntarily to prayer).
So far, only Germany has tried to negotiate a solution. The Ethics Commission for Automated and Connected Driving, a panel of 14 people including lawyers, engineers and the Bishop of Augsburg, has drafted a Code of Ethics, published as part of its 2017 Report. When it comes to the most controversial situations, however, the Code limits itself to calling on a public consensus that doesn’t yet exist.
When it comes to intelligent machines, you are guaranteed to come across the work of the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. This issue is no exception: researchers at MIT have created the so-called “Moral Machine”, something like an Internet survey that simulates dangerous situations: for example, if your brakes fail and you have to choose between driving into a wall or hitting pedestrians, what would you do? And what if those you would hit were women, or children, or the elderly, or animals, and so on?
One day, when self-driving cars are a common presence on the streets of our cities, it will be important to know what we can expect from them. No one would buy a car that would make the wrong decisions. If enough people participate in this survey, some commonly accepted rules should become evident, at least according to the researchers at MIT. (They are under the leadership of social scientist Iyad Rahwan, 40, from Syria, who fled to Australia from Aleppo before all hell broke loose.) Half a million people from 233 countries and territories have answered the survey so far, making a total of 40 million decisions: would you save the old person or the baby stroller? The poor person or the dog? Me or him?
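To give a sense of how such a mass of forced choices could be turned into “commonly accepted rules,” here is a minimal, purely illustrative sketch in Python. The field names and the scoring are our own assumptions, not the researchers’ actual method or data model: each answer records which of two characters a respondent chose to spare, and a simple tally over many answers is what lets aggregate preferences emerge.

```python
# Purely illustrative sketch (not the Moral Machine's actual code or data model):
# each answer to a dilemma records which of two characters the respondent spared,
# and a tally over many answers yields a rough preference score per character.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Answer:
    left: str    # hypothetical labels, e.g. "child", "elderly person", "dog"
    right: str
    spared: str  # which of the two the respondent chose to save

def preference_scores(answers):
    """Fraction of head-to-head dilemmas in which each character was spared."""
    spared = Counter()
    appeared = Counter()
    for a in answers:
        appeared[a.left] += 1
        appeared[a.right] += 1
        spared[a.spared] += 1
    return {c: spared[c] / appeared[c] for c in appeared}

answers = [
    Answer("child", "elderly person", spared="child"),
    Answer("child", "dog", spared="child"),
    Answer("elderly person", "dog", spared="elderly person"),
]
print(preference_scores(answers))
# {'child': 1.0, 'elderly person': 0.5, 'dog': 0.0}
```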
The results have been published in the latest issue of the journal Nature. Some of the findings by Dr. Rahwan’s team were predictable: if we are forced to choose, we will hit the smaller group of pedestrians; between the young and the old, we will sacrifice the old. Between a manager and a doctor, we are more likely to hit the manager; in general, though, we will choose to protect the rich over the poor. Nothing unusual so far.
However, if we look closer at the details, we find that the choices can be rather different from place to place. For example, in East Asia the preference for sparing the young is weaker, while in Latin America people show a particularly strong preference for sparing women and people who look healthy.
These decisions, the researchers say, probably reflect the cultural and social matrix of those particular countries. It is hardly a coincidence that in countries with major economic inequalities, people tend to favor the rich even when they are crossing the road. Or that countries with greater gender equality show a stronger preference for not running over a woman. Or, finally, that the countries where, by international standards, lawlessness and corruption are more widespread are also those where drivers care less about whether a pedestrian is on a crossing or not.
Despite the large number of responses, the study doesn’t examine a truly representative sample: the answers were given voluntarily by people who found the survey online and are interested in these issues (in any case, likely the same people who will buy the self-driving cars). It will not be easy to reconcile this data with a particular set of norms. The German code, for example, does not permit distinctions between potential victims on the road based on personal characteristics such as gender, age or social status. But would we be willing to adopt a technology that, by law, will not necessarily follow our own customs and practices?
In any case, we can draw a very useful conclusion from this study, above and beyond the problem of rules for the road: we can’t find a consensus morality even in the cold, dispassionate world of robots. Ethical practices are the result of centuries of social and gender conflicts, and they change according to the current power relations in a given place at a given time. No algorithm can solve this problem: even when the robots are driving, things will look very different, say, at a crossroad in Rome and at one in Zurich.
Even the machines themselves know this. Many intelligent machines now adopt a more flexible strategy: through trial and error, the cars driven by “neural networks” can teach themselves to develop optimal behaviors of great complexity, starting from very little a priori knowledge. This is how machines have learned to beat humans in games that require great strategic abilities, such as Go. But they need to be trained, as they can only learn by making mistakes. That means that this time, the guinea pigs will have to be us. Are we ready for that?
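As a rough illustration of what “learning by trial and error” means in practice, here is a minimal sketch of the general idea: a toy value-update loop of our own devising, not any carmaker’s actual system. The agent tries actions, is rewarded or penalized, and gradually shifts toward what worked, which is exactly why it has to make its mistakes somewhere first.

```python
# Toy sketch of trial-and-error learning (illustrative only, not a real
# driving system): an agent repeatedly picks one of two actions, gets a
# reward, and updates its estimate of each action's value.
import random

values = {"brake": 0.0, "swerve": 0.0}    # learned estimates, start with no knowledge
LEARNING_RATE = 0.1
EXPLORATION = 0.2                         # how often to try something at random

def true_reward(action):
    # Hypothetical environment: braking is usually the safer choice here.
    return 1.0 if action == "brake" else random.choice([1.0, -1.0])

for step in range(1000):
    if random.random() < EXPLORATION:
        action = random.choice(list(values))     # explore: risk making a mistake
    else:
        action = max(values, key=values.get)     # exploit: use what was learned
    reward = true_reward(action)
    values[action] += LEARNING_RATE * (reward - values[action])  # learn from the outcome

print(values)   # after training, "brake" should have the higher estimated value
```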
Originally published at https://ilmanifesto.it/il-codice-della-strada-dei-robot/ on 2018-10-28