
Interview. We spoke with Alessandro Vespignani, whose new book ‘The Algorithm and the Oracle’ examines the predictive abilities of machines. ‘They are learning, but we don’t understand well how they do it.’

Machines meet the physics of the social

Not a case of "brain drain," but of a brain on the move: as Alessandro Vespignani confesses, he had no intention of leaving Italy, but his mentor strongly advised him to accept the scholarship offer he received from across the Atlantic, since there was no possibility at all of getting a scholarship at La Sapienza, even though the Roman university is considered to be at the highest level. In the United States, Alessandro Vespignani began working on the topic of complexity.

As part of Benoît Mandelbrot’s team, he got to know famous physicists, mathematicians, engineers and sociologists (Albert-László Barabási and David Lazer), who were always willing to have a talk over coffee about how to study topics such as the spread of epidemics, artificial intelligence and fake news. Vespignani ended up teaching at Yale, Indiana University and Northeastern University, becoming one of the major scholars working on “complex networks,” whether these are social, neural, stable or transient.

Now, his book L’algoritmo e l’oracolo (“The Algorithm and the Oracle,” published in Italian by il Saggiatore, 197 pages, €20) has just come out, in which he examines the artificial worlds that humans and machines are building. The book highlights the contrast between the actual predictive potential of computer science and its transformation into an article of faith on the part of many opinion makers, according to which the digital realm will exercise an undeniable power in shaping our future.

At the same time, the author does not take the “easy” road of rejection taken by the French philosopher Jacques Ellul, who rails against technology and science, seeing them as the means for a future soft dictatorship over our daily lives. More pragmatically, his reflections are pursuing a different avenue of interest: Vespignani’s research lies at the forefront of what has been called “social physics,” i.e. the attempt to analyze a reality that is more and more permeated by the digital realm, and which shows power equilibria and a transformation of our social mode of being calling for careful reflection.

In your book, you write that “our future is the past.” Do you mean that there is a circular recurrence of the manner of living in society?

It is an expression that refers to many machine learning projects, which are tied to the belief that the data can, by itself, establish the basis for modelling the future, which is represented as a repetition of the past and present, with minor variations. “Consumer analytics” is a set of well-established techniques whose foundations were laid after World War II. A famous example: the statistician Andrew Pole was hired by the retail chain Target to study consumption trends and devise promotional strategies. He certainly didn’t imagine that the correlations he found between the increased sales of certain goods and the marketing strategies aimed at pregnant women would end up provoking the outrage of one parent, who made a scene before the managers of one of the stores because they had sent promotional material for pregnant women to his then-minor daughter. The parent was offended—but he later discovered that his daughter was indeed pregnant after all. This is to say that, if we confine ourselves to consumption, machine learning is a powerful tool, able to offer a model for future behavior. But if we look beyond that, to more complex dynamics, the determinism underlying the attempt to derive the future from the past based on the data collected ultimately fails.

The algorithms are working on prediction, but within the context of artificial worlds dominated by logics, dynamics and rules which are in the process of being defined, and which are often still unintelligible. Accordingly, I am talking about a physics of the social—the most difficult field to work in, but one which is fascinating and exciting nonetheless. It is here that the notion of an oracular ability of machine learning to predict social life ends up being belied.

You use the term “social atom,” but this is an oxymoron. The atom is always an isolated entity which becomes bound with other singular entities. Society works quite differently: an individual who lives together with others becomes something different from what one thought they were before—this is society’s main trait. How do you explain this oxymoron?

Atoms are joined together, giving rise to molecules, to complex compounds—however, they always remain atoms, whether of oxygen or hydrogen. Humans are not the same, of course. By using the term “social atom,” I want to emphasize that the individual is still the element at the basis of the community, of the nation state and of society. We need to understand how this basic element functions, how it reacts in relation to the environment, how it changes and how its mind works.

The swarm is a metaphor used by many scholars to try to explain the complexity of social life. Do you agree with that?

I grew up in Rome, a city where, in certain months of the year, the sky is full of flocks of birds that create wonderful choreographic displays. At first glance, one would be led to say that there must be someone commanding them, establishing hierarchies and rules of conduct. But that is not true. There is, however, a more mundane level, which speaks of the distance between one bird and another, of maintaining the same speed, of changes in direction and speed as a function of the behavior of other birds. Likewise, automatic phenomena also exist in society, just like in swarms of various animals. There is adaptation and behavior reproduction. Swarms help us understand why and how the social atoms move. We are attached to the notion of the individual as an animal driven by rationality and free will. We’d like to think that’s true, but it isn’t.

You quote Michael Polanyi, the philosopher who centered his research on the concept of tacit knowledge and its role in learning and in social relationships. Everyone takes this for granted, but no one has managed so far to explain how it really works in social relations. It is still a mysterious aspect. However, many artificial intelligence projects are focusing on tacit knowledge, even though people are well aware that it is impossible to fully explain it.

Tacit knowledge concerns the things the mind knows how to do but isn’t able to explain. For example, you can explain how to use a mathematical formula, but you can’t explain how to ride a bike. In the latter case, the best way to learn is to get on it and try. After you fall down a few times, as long as you’re stubborn and keep trying, you’ll learn how to ride. Then you can try to communicate your experience, your tacit knowledge, but you are destined to fail. What has been happening for the past few years is that the machines have been learning by themselves, without any supervision or assistance. They are learning, but we don’t understand well how they do it. Understanding how tacit knowledge works means getting a better understanding of how the human mind works, and of how the predictive algorithms work once they become “operational.”

In your book, you generally refrain from using the word “power.” Yet, at one point you speak at length about the Algorithmic Justice League, an association created for the purpose of denouncing the implicit racism of certain algorithms.

The association you mention was founded by Joy Buolamwini, an African American researcher at the Massachusetts Institute of Technology. She is affectionately known as a “poet of code,” as she uses intelligence, humor and poetry to denounce the racism that might emerge when a piece of software is written. Everything started with the fact that certain software algorithms for the identification of a person based on facial analysis were very accurate, well above 90 percent, if the person being identified was white, but became unreliable if the person was African American. Thus, the “poet of code” invites us to reflect on the problem of power and on the dominant logic in the artificial worlds in which we all live. I agree with her.

The whole “predictive” dimension takes on a sinister undertone when we start talking about the use of Big Data to manipulate public opinion. Don’t you agree?

What is still being called the “digital revolution” has brought significant change in terms of power relations and social relations. There are scholars who talk about an era of absolute freedom opened up by digital technologies, about horizontalism, about the possibility of direct democracy in the relations between government and the governed. I am very skeptical about all of that. The problem is that the digital corporations and the nation states, which are using these algorithms, have become aware of their increased power. Take fake news, for instance, which spreads exponentially faster than true and verified information. Why does this happen, why does it have greater power to generate support? This is one of the features of the artificial worlds that humans and machines are building, and it should be investigated and studied. Obviously, one should also bring the topic of the existing power relations into the debate.
