Commentary. The fact that AI is neither artificial nor intelligent is a starting point from which another conclusion directly follows: AI is an instrument of domination.

The false impartiality of an instrument of domination

“Artificial Intelligence” is neither intelligent nor artificial. This is what AI expert Kate Crawford argues, in a highly compelling manner, in her Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.

This book offers a good starting point for contextualizing the latest hype on the topic, triggered by the perfectly aligned communication machine that allowed the creators of ChatGPT to receive what amounted to free marketing from thousands of media outlets around the world. Pretty much everyone wanted to have a take on OpenAI’s chatbot, even going so far as to ask existential questions: will this chatbot put an end to the professions of writer, author, journalist – in short, is AI already ready to replace us?

There has been much less speculation, on the other hand, from those who have been dealing with AI for some time, or working with it, and who are observing this debate with some detachment. Either they won’t approach such questions at all, or they’re keen to specify that AI is not synonymous with ChatGPT: there is also robotics, and more broadly the possibility of creating systems that can help us (in medicine or elderly care, for example) to a high standard and without calling into question our human centrality – which many worry is very much at risk (because of AI, that is, not because of climate change, wars, inequality, etc.).

The fact that AI is neither artificial nor intelligent is a starting point from which another conclusion directly follows: AI is an instrument of domination: “[D]ue to the capital required to build AI at scale and the ways of seeing that it optimizes[,] AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power,” Crawford writes.

The fact is that it takes money, and lots of it, to feed AI systems. AI follows an extractive model: it consumes resources to capture every moment in which we give away a piece of data, to process that data (with deep learning, for example), and to assemble it into data sets, usually put together by white male Westerners. From this it follows that AI is not “artificial,” because it reproduces human biases and, owing to the way it learns, ends up amplifying their impact on the outputs it provides (e.g. the answer to a question posed to ChatGPT or Siri, Alexa, etc.). And it is not “intelligent,” precisely because it is unable to “create” anything and ends up regurgitating the inaccuracies and falsehoods that were there before it.

So we would do well not to forget the concept of the “informatics of domination” introduced by Donna Haraway: it is necessary in order to refute the theorists of the inevitability of technological advancement, and to remind us, if we still need any reminding, that technology is not neutral. And no, it doesn’t depend on who uses it, as is often assumed in the pursuit of technological do-goodism: it depends on who owns it.

From this argument comes another recommendation: to be wary of “techno-solutionism.” An example: during the pandemic, it was believed that technology could be “the” solution to the spread of the virus – that it would manage to contain it. Accordingly, various tracking apps were developed, such as Italy’s Immuni. They were clearly useful in principle, but they didn’t make a decisive contribution, contrary to what the “techno-optimists” had claimed. David Lyon, who has long worked on the topic of surveillance, deals with this in his latest book, Pandemic Surveillance. Both democratic and authoritarian countries have adopted these tracking systems; wherever they have been used, they have worked only to a limited extent or not at all. In a number of cases, however, they have served as tools of control.

Underlying AI, and the “techno-solutionism” that comes with it, is data collection: companies want more and more data, regardless of whether it solves any problem right now. Let’s collect the data in the meantime, they say; then we’ll see what it’s good for.

Given the way the debate about AI has configured itself in Italy (and internationally as well, with the recurring trope of the programmer who falls in love with his chatbot, so much so that he thinks it has become sentient – a romantic fable that can supply countless movie or TV show plots, but has basically nothing to do with reality), it’s crucial to remember what “this kind” of AI we’re talking about is like: proprietary, backed by the money of big corporations that already own a lot of data (not surprisingly, Microsoft has been willing to invest $10 billion in OpenAI, the maker of ChatGPT), and aimed at commercial use – that is, at leading us to spend more, creating new needs, and so on.

Not only that, but the ChatGPT affair also tells us two other things about our times. First of all, it points to our desire for quick, immediate answers – a paradox considering the increasingly complex world around us. Second, the eagerness for “this kind” of AI points to homogenization, to the rejection of the critical spirit, in the name of an “impartiality” that is as fictitious as the intelligence and artificiality of AI.

Many years ago, Enzo Jannacci, in his song Quelli che, sang with scathing irony about “Those who go, ‘The evening news said so!’” Nowadays, we’re in danger of replacing blind trust in the evening news with blind trust in “AI” – or with whatever they’re trying to sell us under that label.
