The study of the brain has never been so popular. Here’s a rough statistic that shows it: In 2016 alone, the database pubmed.gov logged nearly 5,000 scholarly articles containing the word “neuron” in the title or abstract. In neuroscience departments everywhere, physicians, biologists, chemists, computer scientists and psychologists are all at work on new research.
Even as research funding has dried up in other areas, both the European Union and the United States have launched ambitious programs dedicated to gray matter. The boom is mainly due to new research tools with which scientists are collecting data in quantities unimaginable until recently.
The recent explosion of brain research, however, has not been met with unanimous enthusiasm. Some scientists believe that the acquisition of new data and the understanding of how the brain works are not proceeding hand in hand. Despite the large amount of available information, our ability to interpret it lags behind. We know so little about the mechanisms underlying thought that we cannot even determine whether our theoretical models accurately reflect the actual behavior of the brain.
To illustrate this point, a computer scientist at Berkeley and a neuroscientist from Northwestern University in Chicago studied a sort of “toy brain”: an artificial microprocessor whose architecture (i.e. the network of connections among its electronic components) is known exactly. An electronic brain is far simpler than a real one, and because it was designed by humans, its workings are fully documented. With the ground truth in hand, the two could test whether the analytical methods neuroscientists rely on would actually recover it.
The processor chosen by the two scientists, Eric Jonas and Konrad Paul Kording, is the historic MOS 6502. The name may not mean anything to you, but for many people it powered the first computer they ever saw in person, perhaps at the home of a well-off friend. In fact, the MOS 6502 was at the heart of the first Atari computer, the Apple I and II and the beloved Commodore. Jonas and Kording gathered data by observing the processor perform its core activities: running vintage video games like “Donkey Kong” or “Space Invaders.” The brain’s basic units, neurons, number some 100 billion; with only a few thousand transistors, the MOS 6502 should have been a breeze to decipher.
But what does it mean to understand the brain? Indeed, our most complex organ can be examined from various points of view. You can reconstruct the network of connections between neurons, or the organization of specialized “modules,” or even study the impact of local damage on the overall operation of the brain. All these methods, commonly used by neuroscientists, should help identify the role and the “logic” that regulates neural activity.
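One of these methods, the lesion study, is easy to sketch in code. The toy circuit below is purely hypothetical (the gate names and the two “behaviors” are invented for illustration; this is not the real MOS 6502 netlist): knocking out each gate reveals which behaviors it is “necessary” for, while saying almost nothing about what the gate actually computes.

```python
from itertools import product

# Hypothetical toy circuit: two "behaviors" that share internal gates,
# the way two video games share transistors on a real chip.
# gate name -> (boolean operation, input names), in evaluation order
GATES = {
    "g1": ("AND", ["a", "b"]),
    "g2": ("OR",  ["b", "c"]),
    "beh_A": ("AND", ["g1", "g2"]),   # stand-in for "Donkey Kong runs"
    "beh_B": ("OR",  ["g1", "c"]),    # stand-in for "Space Invaders runs"
}

def evaluate(inputs, lesioned=None):
    """Evaluate every gate; a lesioned gate is stuck at 0."""
    values = dict(inputs)
    for name, (op, srcs) in GATES.items():
        if name == lesioned:
            values[name] = 0
            continue
        x, y = values[srcs[0]], values[srcs[1]]
        values[name] = (x & y) if op == "AND" else (x | y)
    return values["beh_A"], values["beh_B"]

def lesion_study():
    """Knock out each internal gate and record which behaviors break."""
    report = {}
    for gate in GATES:
        if gate.startswith("beh"):
            continue
        broken = set()
        for a, b, c in product([0, 1], repeat=3):
            inputs = {"a": a, "b": b, "c": c}
            healthy = evaluate(inputs)
            damaged = evaluate(inputs, lesioned=gate)
            if healthy[0] != damaged[0]:
                broken.add("beh_A")
            if healthy[1] != damaged[1]:
                broken.add("beh_B")
        report[gate] = broken
    return report

print(lesion_study())
# g1 breaks both behaviors; g2 breaks only beh_A. Yet calling g2 "the
# beh_A gate" reveals nothing about what it computes (an OR of b and c).
```

The pitfall mirrors what Jonas and Kording reported for the chip: a transistor whose removal crashes only one game is not therefore a “Donkey Kong transistor.”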
Instead, the analysis applied to the old microprocessor, published in January in the journal PLOS Computational Biology, was discouraging. Despite abundant data, the algorithms were unable to reconstruct the organization of the chip or the role of individual transistors. The current study of the brain, therefore, could suffer from the same limitations. And unlike the chip, the brain offers no known ground truth against which to check the answers. “Ultimately,” conclude Jonas and Kording, “the problem is not that neuroscientists could not understand a microprocessor. The problem is that they would not understand it given the approaches they are currently taking.”
Phantom brain waves
They are not alone in doubting the reliability of neuroscience. Even the field’s essential tool, functional magnetic resonance imaging (fMRI), which detects the activity of brain regions, has been at the center of a case that shocked the scientific community.
We owe most of our recent knowledge of the brain to fMRI. Pubmed lists about 4,000 studies based on magnetic resonance in 2016, a tenfold increase over 15 years. Yet according to Anders Eklund, Thomas Nichols and Hans Knutsson of the universities of Linköping in Sweden and Warwick in the U.K., much of this research could rest on flawed data analysis. Their study, published in the Proceedings of the National Academy of Sciences in August 2016, showed that widely used fMRI analysis software contains serious statistical errors. In other words, about 4,000 scientific studies published in recent years could be flawed.
In 2012, the Ig Nobel Prize, awarded for the most ridiculous research, went to a study that used fMRI to detect activity in the brain of a dead salmon. The research showed that naive use of statistics alone can create false positives. Eklund and colleagues have shown that this is not a purely hypothetical risk.
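The statistical trap behind the dead salmon, the multiple-comparisons problem, is easy to reproduce. The sketch below uses hypothetical voxel and scan counts, not a real fMRI pipeline: it runs an uncorrected t-test on thousands of “voxels” containing pure noise, and at the conventional p < 0.05 threshold, around five percent of them come out “active.”

```python
import math
import random
import statistics

random.seed(0)
N_VOXELS = 10_000   # hypothetical number of voxels tested
N_SCANS = 20        # hypothetical measurements per voxel
T_CRIT = 2.093      # two-sided 5% critical value for a t-test, df = 19

false_positives = 0
for _ in range(N_VOXELS):
    # Pure noise: there is no signal anywhere in this "brain."
    signal = [random.gauss(0.0, 1.0) for _ in range(N_SCANS)]
    mean = statistics.fmean(signal)
    sem = statistics.stdev(signal) / math.sqrt(N_SCANS)
    # Uncorrected one-sample t-test: is the mean "significantly" nonzero?
    if abs(mean / sem) > T_CRIT:
        false_positives += 1

print(false_positives)  # on the order of 500 of 10,000 noise voxels "light up"
```

Hundreds of spurious activations from nothing at all: without a correction for the number of tests, even a dead fish appears to think. The errors Eklund and colleagues found in cluster-based corrections are subtler, but of the same family.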
Yet public investment in neuroscience abounds on both sides of the Atlantic. After Barack Obama launched the BRAIN Initiative, Europe responded in 2013 with the Human Brain Project, a 10-year brain research program with a €1 billion budget. Numbers this high have previously been reserved for space missions and high-energy physics laboratories, such as CERN, or the gravitational wave observatories.
Despite the avalanche of money, things are not going as expected. The Human Brain Project has been torn apart by an internal revolt against its director, Henry Markram of the École Polytechnique Fédérale de Lausanne. According to critics, Markram intended to spend the entire budget on a sophisticated computer simulation of the brain, even though knowledge of how the brain actually works is still superficial.
As science writer Nicola Nosengo recently wrote in the online journal Il Tascabile, “project communications have completely abandoned the image of the full brain simulation and replaced it with something much more pragmatic: to build a CERN of neuroscience.” Markram, meanwhile, “has all but disappeared from the public eye.”
According to Alessandro Treves of the International School for Advanced Studies, there was a basic misunderstanding “that by making a computer reconstruction from the bottom up, without a theory, you could understand something about the brain. Epistemologically, that doesn’t stand,” he told Nosengo.
An abundance of money, data and machines is not enough. What neuroscientists need now are ideas.