Analysis
Migrants will be at the mercy of algorithms under EU proposal
Article Five deals with the uses of AI that should never be allowed. But it has one key oversight: there is no prohibition on the use of artificial intelligence at the borders. Nothing at all.
The problem lies in Article Five, lost in a hundred-page document. It consists of 30 or so lines of bureaucratic jargon, about two screens of text on a computer. An article of law that should – we stress the conditional – protect many. Many, but not all. Not the last in line: not migrants.
We’re talking about the Artificial Intelligence Act, an exhaustive set of rules meant to regulate what technologies that attempt to simulate the abilities of the human brain will be allowed to do in Europe. The Old Continent is thus set to make the first attempt to “govern” this field.
Of course, the bill is still far from final passage into law. Far, but not very far. There is already a text that has been “endorsed” by the Council of the European Union – the body of member-state governments, which has the final say. Negotiations are now underway to find a compromise on the amendments proposed by the European Parliament. The text will then go to the committees, and back to the Parliament floor for final approval. The intention is to get it all done within a year.
But there’s already a text, one that many have hailed enthusiastically – and rather hastily – because it is full of high-sounding phrases, often interspersed with references to “respecting human rights” and to the protection of minorities.
In short, the text asserts that, at least here in Europe, human beings should always come first – with controls, with supervisory authorities (although their powers have been gradually watered down since the discussions began) – so that the algorithms trained to intervene in aspects of public life, and especially the decisions they make, will not lead to discrimination.
These general principles – generic ones, as many have called them – are there. But then there is Article Five. In somewhat byzantine fashion, the AI Act defines four major “areas” for applications of artificial intelligence: fields where its use will be prohibited, others considered “high risk,” others where there is a possibility of “manipulation” that must be guarded against, and, finally, those where the use of artificial intelligence will be allowed.
Article Five deals with the uses of AI that should never be allowed, and it goes into plenty of detail: no use of artificial intelligence that exploits people’s vulnerabilities; no use of such systems to profile and evaluate citizens; a prohibition on biometric identification in public spaces. There are all too many exceptions – especially on facial recognition – but it would seem to be a groundbreaking text nonetheless. With one key oversight, though: there is no prohibition on the use of artificial intelligence at the borders. Nothing at all.
Translated, this means that the police “defending” the borders will have carte blanche to use whatever means they want. They can deploy anything, invent new tools, and keep using the ones they are already experimenting with – because, as is not widely known, “predictive policing” applied to migration is already in the works: artificial intelligence systems designed to warn countries, and thus also European police forces, that someone may be about to be on the move.
They will be able to do all that, since they will be exempt from controls. “And if they are exempt,” explains Caterina Rodelli, EU Policy Analyst at Access Now, who is closely following the process, “it means that there will be no obligation of transparency on their part.”
The result is that those who show up at the borders can be branded as “dangerous” and forced to carry that label with them wherever they go – which means that an artificial intelligence will be able to decide whether to reject their asylum claim. It means that those who knock on the doors of “fortress Europe” will not have the same rights as Europeans.
Some will argue that there is still time to correct this distortion. But as all the digital rights associations – from EDRi to AlgorithmWatch – have denounced, it doesn’t work that way. Negotiations are already at a very advanced stage (it should be mentioned that one of the politicians leading them is the PD’s MEP Brando Benifei, of the Socialist group, who will also be rapporteur on the Parliament floor), and, as history shows, agreements made at this stage are never corrected or improved. And no one among those who hold power seems interested in the issue.
It’s a problem that concerns migrants, but it may not be only about them. Because – again in Caterina Rodelli’s words – “one gets the feeling that on such a delicate, risk-fraught issue, authoritarian solutions are being tested on the least powerful, on those who have no voice, on those who aren’t able to protest.” If such solutions are allowed to proceed unopposed, sooner or later they will be extended to everyone.
Originally published at https://ilmanifesto.it/migranti-alla-merce-degli-algoritmi on 2023-02-10