The brain learns by subtly rewiring itself: some connections between neurons, known as synapses, are strengthened, while others are weakened. But since the brain contains billions of neurons, millions of which may be involved in a single task, scientists have puzzled over how it knows which synapses to change, and by how much. Dr. Hinton popularized a clever mathematical algorithm called backpropagation to solve this problem in artificial neural networks. But the brain was long thought to be incapable of running anything like it. Now, as AI models increasingly resemble humans in their abilities, scientists are asking whether the brain might be doing something similar after all.
Figuring out what the brain does is no easy task. Most of what neuroscientists understand about human learning comes from experiments on tiny pieces of brain tissue or a handful of neurons in a petri dish. It’s often unclear whether living, learning brains operate according to scaled-up versions of these same rules or whether something more sophisticated is happening. Even with modern experimental techniques, in which neuroscientists track hundreds of neurons at a time in living animals, it’s hard to reverse-engineer what is really going on.
One of the most prominent and longstanding theories about how the brain learns is Hebbian learning. The idea is that neurons which activate at roughly the same time become more strongly connected; it is often summarized as “cells that fire together wire together”. Hebbian learning can explain how the brain learns simple associations – think of Pavlov’s dogs salivating when they heard the sound of a bell. But for more complex tasks, such as learning a language, Hebbian learning appears very inefficient. Even with massive amounts of training, artificial neural networks trained this way fall far short of human levels of performance.
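In code, Hebb’s rule amounts to a single line: nudge each synapse in proportion to the joint activity of the two neurons it connects. Below is a minimal sketch in Python with NumPy; the population sizes, learning rate and random inputs are placeholders, not details from any particular study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two small populations of neurons joined by a matrix of synaptic weights.
n_pre, n_post = 8, 4
W = rng.normal(scale=0.1, size=(n_post, n_pre))

def hebbian_update(W, pre, post, lr=0.01):
    """Strengthen each synapse in proportion to the joint activity of the
    neurons on either side of it: delta_w[i, j] ~ post[i] * pre[j]."""
    return W + lr * np.outer(post, pre)

pre = rng.random(n_pre)            # presynaptic firing rates (illustrative)
post = W @ pre                     # postsynaptic response
W = hebbian_update(W, pre, post)   # co-active pairs are now wired more strongly
```

Notice that the update contains no notion of error: it reinforces whatever correlations occur, right or wrong, which is part of why the rule struggles with complex tasks.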
Today’s top AI models are engineered differently. To understand how they work, imagine an artificial neural network trained to find birds in images. Such a model would be made up of thousands of synthetic neurons arranged in layers. Pictures are fed into the first layer of the network, which passes information about the content of each pixel to the next layer via the AI equivalent of synaptic connections. Here, neurons may use this information to pick out lines or edges before sending signals to the next layer, which might pick out eyes or feet. The process continues until the signals reach the final layer, which is responsible for making the big call: “bird” or “not a bird”.
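As a rough sketch, that chain of layers is just a sequence of matrix multiplications passed through a nonlinearity. The toy forward pass below is schematic: the layer sizes, the sigmoid activation and the random weights are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: pixels in, a single "bird or not" call out.
n_pixels, n_edges, n_parts = 64, 16, 8
W1 = rng.normal(scale=0.5, size=(n_edges, n_pixels))   # pixels -> lines/edges
W2 = rng.normal(scale=0.5, size=(n_parts, n_edges))    # edges -> parts (eyes, feet)
W3 = rng.normal(scale=0.5, size=(1, n_parts))          # parts -> the big call

def forward(image):
    h1 = sigmoid(W1 @ image)   # early layer picks out lines and edges
    h2 = sigmoid(W2 @ h1)      # next layer combines them into larger parts
    return sigmoid(W3 @ h2)    # final layer: probability the image is a bird

image = rng.random(n_pixels)   # stand-in for the pixels of a real photo
print("bird" if forward(image).item() > 0.5 else "not a bird")
```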
Integral to this learning process is the so-called backpropagation-of-error algorithm, often shortened to backprop. If the network is shown an image of a bird but mistakenly concludes that it is not one, then – once it realises its mistake – it generates an error signal. This error signal propagates backwards through the network, layer by layer, strengthening or weakening each connection so as to minimise future errors. If the model is later shown a similar image, the adjusted connections will lead it to correctly declare: “bird”.
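A minimal sketch of one such training step, on a toy two-layer network (the shapes, loss and learning rate are again illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A toy two-layer classifier with random starting weights.
n_in, n_hidden = 16, 8
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
W2 = rng.normal(scale=0.5, size=(1, n_hidden))
lr = 0.5

def train_step(x, label):
    global W1, W2
    # Forward pass: signals flow towards the final call.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)

    # The mistake becomes an error signal at the output (for a sigmoid
    # output with a cross-entropy loss this is simply y - label)...
    delta_out = y - label

    # ...which travels backwards, layer by layer. Note W2.T: the error is
    # carried by a mirror image (the transpose) of the forward connections.
    delta_hidden = (W2.T @ delta_out) * h * (1 - h)

    # Each connection is nudged to shrink the error on future encounters.
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hidden, x)

x = rng.random(n_in)
for _ in range(50):
    train_step(x, label=1.0)   # repeatedly show the same "bird" image
```

The `W2.T` in the backward sweep is worth noting: the error travels back through the transpose of the forward weights, in effect a mirror image of the network. That detail matters for the objections below.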
Neuroscientists have long doubted that backpropagation could work in the brain. In 1989, shortly after Dr. Hinton and his colleagues showed that the algorithm could be used to train layered neural networks, Francis Crick, the Nobel laureate who co-discovered the structure of DNA, published a critique of the idea in the journal Nature. Neural networks using the backpropagation algorithm were, he said, biologically “unrealistic in almost every respect.”
For one thing, neurons send information mostly in one direction. For backpropagation to work in the brain, a perfect mirror image of each network of neurons would have to exist to carry the error signal backwards. In addition, artificial neurons communicate using signals of varying strength, whereas biological neurons send all-or-nothing spikes of fixed strength, which the backprop algorithm is not designed to handle.
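The second objection can be made concrete. A spike is an all-or-nothing step function of a neuron’s input, and a step function has a derivative of zero almost everywhere, leaving the gradient-based error signal nothing to travel through. A toy check (the threshold and voltages are arbitrary):

```python
import numpy as np

def spike(voltage, threshold=1.0):
    """All-or-nothing output, like a biological action potential."""
    return (np.asarray(voltage) > threshold).astype(float)

# Backprop needs each neuron's output to change smoothly with its input.
# A spike is a step function, whose derivative is zero almost everywhere,
# so a naive gradient simply vanishes:
eps = 1e-6
for v in [0.5, 0.99, 1.5]:
    grad = (spike(v + eps) - spike(v - eps)) / (2 * eps)
    print(v, grad)   # 0.0 each time: no error signal can flow through
```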
Nevertheless, the success of neural networks has revived interest in whether some kind of backprop occurs in the brain. There have been promising experimental hints. A preprint study posted in November 2023, for example, found that individual neurons in the brains of mice appear to respond to distinct error signals – one of the crucial ingredients of backprop-like algorithms, long thought to be missing in living brains.
Scientists working at the boundary between neuroscience and AI have also shown that small changes can make backprop more biologically plausible. One influential study showed that the mirror-image network once thought to be essential need not be an exact replica of the original for learning to work (although learning is slower in large networks). That makes the idea less far-fetched. Others have found ways to bypass mirror networks altogether. If artificial neural networks are given biologically realistic features, such as specialized neurons that can integrate activity and error signals in different parts of the cell, then backprop can be carried out by a single set of neurons. Some researchers have even modified the algorithm so that it can process spikes rather than continuous signals.
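The “inexact mirror” finding is commonly associated with a technique known as feedback alignment, in which the backward pathway is a fixed random matrix rather than an exact transpose of the forward weights. Assuming that is the work meant here, a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hidden = 16, 8
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
W2 = rng.normal(scale=0.5, size=(1, n_hidden))

# The backward pathway is a FIXED random matrix B rather than W2.T, so no
# exact mirror copy of the forward synapses is required.
B = rng.normal(scale=0.5, size=(n_hidden, 1))

lr = 0.5
x, label = rng.random(n_in), 1.0
for _ in range(50):
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    delta_out = y - label
    delta_hidden = (B @ delta_out) * h * (1 - h)   # random feedback, not W2.T
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hidden, x)

# Training still works: the forward weights drift into rough alignment
# with B, so the random feedback pushes updates in a useful direction.
print(y.item())
```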
Other researchers are exploring alternative theories. In a paper published earlier this year in Nature Neuroscience, Yuhang Song of Oxford University and his colleagues described a method that turns backprop on its head. In standard backprop, error signals lead to adjustments in the synapses, which in turn change the activity of the neurons. The Oxford researchers proposed that the network could first change the activity of the neurons, and only then adjust the synapses to fit. They called this “prospective configuration”.
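The authors’ actual model is an energy-based network; what follows is a much-simplified, linear, predictive-coding-style sketch of the “activity first, synapses second” idea, with invented sizes and learning rates, not a reconstruction of their method.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_hidden, n_out = 8, 4, 1
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))

def train_step(x, target, lr_act=0.1, lr_w=0.05, relax_steps=20):
    global W1, W2
    # Step 1: settle the ACTIVITY first. With the output held at the
    # target, let the hidden activity relax to reduce prediction errors.
    h = W1 @ x                      # initial guess from a forward sweep
    for _ in range(relax_steps):
        e_out = target - W2 @ h     # mismatch at the output layer
        e_hid = h - W1 @ x          # mismatch at the hidden layer
        h += lr_act * (W2.T @ e_out - e_hid)

    # Step 2: only now adjust the SYNAPSES to fit the settled activity.
    W2 += lr_w * np.outer(target - W2 @ h, h)
    W1 += lr_w * np.outer(h - W1 @ x, x)

x, target = rng.random(n_in), np.array([1.0])
for _ in range(100):
    train_step(x, target)
```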
When the authors tested prospective configuration in artificial neural networks, they found that the networks learned in a much more human-like way than models trained with backprop – more robustly and with less training. They also found that the networks were a much closer match to human behaviour on a range of quite different tasks, such as learning to move a joystick in response to visual cues.
Learning the hard way
For now, however, all of these theories are just that: theories. Designing experiments to prove whether backprop or any other algorithm is at work in the brain is surprisingly difficult. For Aran Nayebi and his colleagues at Stanford University, that was a problem AI could solve.
The scientists used one of four different learning algorithms to train more than a thousand neural networks on a variety of tasks. They monitored each network during training, recording its neuronal activity and the strength of its synaptic connections. Dr. Nayebi and his colleagues then trained a supervisory “meta-model” to identify the learning algorithm from the recordings. They found that the meta-model could tell which of the four algorithms had been used from recordings of just a few hundred virtual neurons, sampled at intervals during learning. The researchers hope that such a meta-model could do the same with equivalent recordings from real brains.
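In miniature, the approach might look like the sketch below: train toy networks under different learning rules, record their activity during training, and fit a classifier to tell the rules apart. This toy uses two rules rather than the study’s four, and made-up recording features; it is meant only to convey the shape of the method, not the study’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def record_training_run(rule, steps=30, n_in=10, n_hid=6):
    """Train one tiny network and record its hidden activity over time,
    standing in for the study's recordings of virtual neurons."""
    W = rng.normal(scale=0.3, size=(n_hid, n_in))
    x, target = rng.random(n_in), 1.0
    trace = []
    for _ in range(steps):
        h = sigmoid(W @ x)
        if rule == "hebbian":
            W += 0.05 * np.outer(h, x)          # correlation-driven update
        else:                                   # a crude error-driven update
            err = target - h.mean()
            W += 0.05 * err * np.outer(h * (1 - h), x)
        trace.append(h.copy())
    return np.concatenate(trace)                # one feature vector per run

# Build a labelled dataset of recordings and train the meta-model on it.
rules = ["hebbian", "error-driven"]
X = np.array([record_training_run(r) for r in rules for _ in range(40)])
y = np.array([r for r in rules for _ in range(40)])

meta_model = LogisticRegression(max_iter=1000).fit(X, y)
print(meta_model.score(X, y))   # how well the recordings betray the rule
```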
Identifying the algorithm, or algorithms, that the brain uses to learn would be a huge step forward for neuroscience. Not only would it shed light on how the body’s most mysterious organ works, it could also help scientists create new AI-powered tools to understand specific neural processes. Whether it would lead to better AI algorithms is less clear. For Dr. Hinton, at least, backprop may well be better than whatever the brain is doing.
© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. Original content can be found at www.economist.com.