Analysis. Soon, speaking and typing may be old-fashioned forms of communication. The new wave of Silicon Valley products will use your brain as the interface. Will bioethicists be able to control the new technology?

Silicon Valley wants access to your brain

As he climbed into the driver’s seat, Rodrigo Mendes saw neither pedals nor a steering wheel. The car started to move and turned onto the road. Rodrigo had never driven before in his life, but the car was following the commands he was giving with his thoughts. This is not the beginning of a South American science fiction novel, but a true-to-life account of the experience of Rodrigo Mendes, 43, who has been paralyzed since 1992, when he took a bullet in the neck during a carjacking in São Paulo, Brazil.

Last April, Mendes, who now directs an institute dedicated to the rehabilitation and social integration of people with disabilities of various types, drove a race car using a special device capable of “reading” brain waves and translating them into instructions which can be executed by machines, such as “turn right,” “slow down” or “speed up.”
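
To give a rough idea of how such a system works, here is a deliberately simplified sketch of a brain-computer interface loop: acquire a window of brain-signal samples, classify it, and translate the result into a machine command. Everything in it (the simulated signal, the thresholds, the command names) is invented for illustration; real systems rely on trained classifiers operating on frequency-band features of EEG data.

```python
# Hypothetical, highly simplified brain-computer interface loop.
# All signals, thresholds and command names are invented for illustration.

import random
from statistics import mean

COMMANDS = {0: "turn_left", 1: "turn_right", 2: "speed_up", 3: "slow_down"}

def acquire_window(n_samples: int = 256) -> list[float]:
    """Stand-in for reading one window of EEG samples from a headset."""
    return [random.gauss(0.0, 1.0) for _ in range(n_samples)]

def classify(window: list[float]) -> int:
    """Toy 'classifier': maps a crude signal statistic to one of four classes.
    A real system would use a trained model on frequency-band features."""
    power = mean(x * x for x in window)
    if power < 0.8:
        return 0
    if power < 1.0:
        return 1
    if power < 1.2:
        return 2
    return 3

def control_loop(steps: int = 5) -> None:
    """Repeatedly read a signal window and issue the corresponding command."""
    for _ in range(steps):
        command = COMMANDS[classify(acquire_window())]
        print(f"issuing command: {command}")  # a real system would drive actuators

if __name__ == "__main__":
    control_loop()
```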

Technologies of this type are called “brain-computer interfaces,” or “neurotechnology,” and they have just left the experimental stage, turning into commercial products available on the market. This is great news for those suffering from diseases that restrict their motor skills. However, these technologies are also being put to other uses.

For a long time, DARPA, the U.S. agency dedicated to military research, has been discussing the possibility of providing soldiers with an armor of sensors and microchips capable of integrating natural human faculties with additional data streams and artificial intelligence.

In the meantime, neurotechnologies are lending themselves to an endless variety of commercial applications that seem much more mundane than crippling illness or military conflict, but are no less invasive. For €300, you can already buy a headset online that turns brain waves into direct commands for other devices.

The “big data” of brain activity, once the exclusive concern of neurologists and psychiatrists, is now the focus of a wide variety of interests. For instance, knowing a viewer’s level of attention while they enjoy a particular piece of content (a video, a song or a video game) would make any advertiser happy.

In addition, scientists’ ability not only to record and analyze brain activity but also to influence it is growing. Using genetic techniques, neuroscientists have already managed to induce artificial memories in mice, leading the animals to recall experiences they never had. In this way, it becomes possible to influence their behavior by acting on individual neurons.

The application of these technologies to humans is still far off, but the neurotechnology industry is already attracting large economic investments, especially in the U.S. At the federal level, the Brain Initiative, a 10-year program of neuroscience research launched by the Obama administration, has already invested €500 million into the development of neurotechnologies.

Moreover, private investment is also soaring, chasing a market that could reach $12 billion by 2020 in devices alone, according to the website Neurotech Reports. At the last F8 conference, the annual Facebook tech showcase, there was a lot of talk about artificial intelligence, augmented reality, and the intention of the company to enable its users to communicate by transmitting messages directly from brain to brain, without the annoyance of having to use keyboards, smartphones and video cameras (not to mention the obsolete practice of meeting face-to-face).

It would have been impossible for Elon Musk (founder of Tesla, PayPal, SpaceX and other companies that produce more patents than useful goods) to stay away from this field. He added to his collection the startup Neuralink, dedicated to mitigating neurological disease and to “intelligence expansion.” Private investment in research and development in the industry is currently estimated at around €100 million per year, and the trend is one of strong growth.

The researchers are aware of the sensitivity of the subject, even if neurotechnologies are only now taking their first steps. This is demonstrated by an appeal published in mid-November in the journal Nature under the title “Four ethical priorities for neurotechnologies and AI,” signed by a group of 27 experts working in the field in various capacities.

Dubbed “the Morningside Group,” it includes neuroscientists, computer scientists, and medical doctors from four continents, with backgrounds in academia but also in industry, as evidenced by the signature of Blaise Aguera y Arcas, a leading artificial intelligence researcher at Google. Coordinating the collaboration are the biologist Rafael Yuste of Columbia University and the philosopher Sara Goering of the University of Washington in Seattle.

The signatories propose that the scientific community should impose limitations on itself regarding neurotechnologies, agreeing on what directions should be avoided in research and innovation. Similar initiatives have been proposed in recent months in other bioethically sensitive fields, such as artificial intelligence and genetic manipulation.

The first area in need of attention concerns the privacy of the users of neurotechnologies: They must be guaranteed the right to control the use of their data. The signatories suggest relying on distributed verification algorithms that are difficult to manipulate, like the blockchain ledgers behind cryptocurrencies such as Bitcoin, which are now an object of interest for central banks and governments.
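
The kind of tamper-resistant verification the signatories allude to can be illustrated with a minimal, hypothetical sketch: a hash-chained log of who accessed a person’s neural data and why, loosely inspired by the ledgers behind cryptocurrencies. The class and field names below are invented for the example and are not the authors’ proposal.

```python
# Minimal sketch of a tamper-evident, hash-chained access log (illustrative only).

import hashlib
import json
import time

def _hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class AccessLedger:
    """Append-only log; altering any past entry breaks the hash chain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record_access(self, user: str, purpose: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {"user": user, "purpose": purpose, "time": time.time()}
        self.entries.append(
            {"record": record, "prev": prev_hash, "hash": _hash(record, prev_hash)}
        )

    def verify(self) -> bool:
        """Recompute the chain; any modification to a past entry is detected."""
        prev_hash = "genesis"
        for entry in self.entries:
            if entry["prev"] != prev_hash or entry["hash"] != _hash(entry["record"], prev_hash):
                return False
            prev_hash = entry["hash"]
        return True

if __name__ == "__main__":
    ledger = AccessLedger()
    ledger.record_access("advertiser_x", "attention metrics")
    ledger.record_access("clinic_y", "therapy follow-up")
    print(ledger.verify())  # True; editing any past entry makes this False
```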

Another delicate issue is that of individual identity: Neurotechnologies can change both the subjective and the objective perception of a person’s own identity.

For example, patients who undergo deep brain stimulation (a technique used in the treatment of depression, in which weak electrical signals are sent through electrodes placed in the brain) struggle to recognize as their own the behaviors they exhibit after the therapy. If individuals become able to act in far-off locations through machines controlled only by their thoughts, it becomes difficult to clearly define the boundaries of the body and of personal identity.

Another open question is that of potential discrimination between people who are neuro-technologically “enhanced” and those who are not; or between users of the same technologies who belong to different social groups.

For instance, a couple of years ago, a study conducted by researchers at Carnegie Mellon University showed that a woman looking for a job online was more likely than a man to be shown ads for lower-paid jobs. This was evidently the strategy judged optimal by the advertising algorithms that classify users by gender. In the field of neurotechnologies, such approaches should be banned.

Reading the appeal published in Nature is very useful, as it identifies the genuinely sensitive points that technological development will raise in the near future, but one can have doubts about its impact. The group’s hope is that these guidelines will be added to the Universal Declaration of Human Rights, a very ambitious goal.

While such appeals and public statements have increased in recent years, their ability to counterbalance the economic interests in play seems very limited.

The use of our personal data, grist to the mill of artificial intelligence giants like Google and Facebook, is already out of control.

Google’s participation in the Morningside Group is meant to reassure us about the social responsibility of large companies. Instead, it only undermines the group’s credibility, making their appeal look like nothing more than a campaign to protect the brand.
