AI scientists are creating new theories about how the brain learns


Five decades of research into artificial neural networks have earned Geoffrey Hinton the nickname "the godfather of AI"; his work laid the foundations for headline-grabbing models, including ChatGPT and LaMDA. These systems can write coherent (if uninspired) prose, diagnose illnesses from medical scans and pilot self-driving cars. But for Dr. Hinton, building better models was never the end goal. His hope was that, by developing artificial neural networks that could learn to solve complex problems, he might shed light on how the brain's neural networks do the same.

Brains learn by being subtly rewired: some connections between neurons, known as synapses, are strengthened, while others are weakened. But because the brain has billions of neurons, millions of which may be involved in any single task, scientists have puzzled over how it knows which synapses to tweak, and by how much. Dr. Hinton popularized a clever mathematical algorithm known as backpropagation to solve this problem in artificial neural networks. But it was long thought too cumbersome to have evolved in the human brain. Now, as AI models display increasingly human-like abilities, scientists are asking whether the brain might do something similar.

Figuring out how the brain does what it does is no easy task. Much of what neuroscientists understand about human learning comes from experiments on small slices of brain tissue, or on handfuls of neurons in a petri dish. It is often unclear whether the living, learning brain works by scaled-up versions of these same rules, or whether something more sophisticated is at play. Even with modern experimental techniques, in which neuroscientists track hundreds of neurons at a time in living animals, it is hard to reverse-engineer exactly what is going on.

One of the most prominent and longest-standing theories of how the brain learns is Hebbian learning. The idea is that neurons which fire at roughly the same time become more strongly connected; this is often summarized as "cells that fire together, wire together". Hebbian learning can explain how the brain learns simple associations (think of Pavlov's dogs salivating at the sound of a bell). For more complex tasks, such as learning a language, however, Hebbian learning seems far too inefficient. Even after enormous amounts of training, artificial neural networks trained in this way fall well short of human levels of performance.
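Hebb's rule can be written as a single weight update: a synapse is strengthened in proportion to the product of pre- and postsynaptic activity. The following Python sketch is a deliberately toy version of the Pavlov setup; the neuron roles, trial patterns and learning rate are all invented for illustration:

```python
def hebbian_update(w, pre, post, lr=0.1):
    # Hebb's rule: strengthen the synapse whenever the pre- and
    # postsynaptic neurons fire at the same time
    return w + lr * pre * post

# Toy trials: synapse 0 carries the bell, synapse 1 an unrelated
# stimulus; "saliva" is the postsynaptic response, driven by food.
trials = [
    (1, 1, 1),  # bell + unrelated stimulus, food -> salivation
    (1, 0, 1),  # bell alone, food -> salivation
    (0, 1, 0),  # unrelated stimulus alone, no food
    (0, 0, 0),  # nothing happens
] * 25

w = [0.0, 0.0]
for bell, other, saliva in trials:
    w[0] = hebbian_update(w[0], bell, saliva)
    w[1] = hebbian_update(w[1], other, saliva)

print(w)  # the bell synapse, reliably paired with salivation, grows fastest
```

The synapse that is reliably active alongside salivation ends up roughly twice as strong as the one active only by coincidence, which is exactly the kind of simple association the rule explains. It also hints at the rule's weakness: nothing in it tells a deep, multi-layer network how an early synapse contributed to a late error.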

Today's top AI models are engineered differently. To understand how they work, imagine an artificial neural network trained to recognize birds in images. Such a model would be built from thousands of synthetic neurons arranged in layers. Images are fed into the first layer of the network, which passes information about the content of each pixel to the next layer through the AI equivalent of synaptic connections. There, neurons may use this information to pick out lines or edges before passing signals on to the next layer, which might pick out eyes or legs. The process continues until the signals reach the final layer, which is responsible for making the big call: "bird" or "not bird".
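A forward pass through such a layered network is just repeated weighted sums followed by non-linearities. The sketch below uses a four-pixel "image" and hand-picked weights purely to illustrate the flow of information from pixels to edges to parts to a final call; a real bird classifier would have millions of learned parameters:

```python
def relu(v):
    # a simple non-linearity: negative sums are silenced
    return [max(0.0, z) for z in v]

def dense(x, W, b):
    # one layer: each output neuron sums its weighted inputs
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

# Toy 3-layer "bird detector": 4 pixels -> 3 "edge" neurons
# -> 2 "part" neurons -> 1 output. All numbers are made up.
x = [0.9, 0.1, 0.8, 0.2]  # pixel intensities
W1 = [[1, -1, 0, 0], [0, 0, 1, -1], [1, 0, -1, 0]]; b1 = [0, 0, 0]
W2 = [[1, 1, 0], [0, 1, 1]];                        b2 = [0, 0]
W3 = [[1, -1]];                                     b3 = [-0.5]

h1 = relu(dense(x, W1, b1))    # "edges"
h2 = relu(dense(h1, W2, b2))   # "parts" (eyes, legs)
score = dense(h2, W3, b3)[0]   # the final call
label = "bird" if score > 0 else "not bird"
print(label)
```

Each layer sees only the layer before it, which is what makes the credit-assignment problem in the next paragraph hard: when the final call is wrong, some rule must decide how every weight in every earlier layer should change.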

Integral to this learning process is the so-called backpropagation-of-error algorithm, often shortened to backprop. If the network is shown an image of a bird but mistakenly concludes that it is not one, then, once it realizes the mistake, it generates an error signal. This error signal travels backward through the network, layer by layer, strengthening or weakening each connection so as to minimize future errors. If the model is shown a similar image again, the adjusted connections will lead it to correctly declare: "bird".
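The loop described above can be written out by hand for a toy problem. This sketch trains a tiny two-layer network on an invented "bird" rule (label 1 only when both made-up features, wings and beak, are present): the error is computed at the output and passed backward to apportion blame to the hidden neurons. All sizes, seeds and learning rates are illustrative:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# invented toy data: [has_wings, has_beak] -> bird?
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

random.seed(1)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(sum(W1[i][j] * x[j] for j in range(2)) + b1[i])
         for i in range(2)]
    y = sigmoid(sum(W2[i] * h[i] for i in range(2)) + b2)
    return h, y

for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        # error signal at the output (cross-entropy loss, sigmoid output)...
        d_out = y - t
        # ...propagated backward to assign blame to each hidden neuron
        d_hid = [d_out * W2[i] * h[i] * (1 - h[i]) for i in range(2)]
        for i in range(2):
            W2[i] -= lr * d_out * h[i]
            b1[i] -= lr * d_hid[i]
            for j in range(2):
                W1[i][j] -= lr * d_hid[i] * x[j]
        b2 -= lr * d_out

print(forward([1, 1])[1], forward([0, 1])[1])  # high vs low
```

The crucial line is the one computing `d_hid`: the output error is carried backward through the very same weights `W2` used on the forward pass, which is precisely the requirement that biologists have found implausible.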

Neuroscientists have long doubted that backpropagation could work in the brain. In 1989, shortly after Dr. Hinton and his colleagues showed that the algorithm could be used to train layered neural networks, Francis Crick, the Nobel laureate who co-discovered the structure of DNA, published a critique of the theory in the journal Nature. Neural networks using the backpropagation algorithm, he said, were biologically "unrealistic in almost every respect".

For one thing, biological neurons mostly send information in one direction. For backpropagation to work in the brain, a perfect mirror image of each network of neurons would have to exist to carry the error signal backward. In addition, artificial neurons communicate using signals of varying strength, whereas biological neurons fire all-or-nothing spikes of fixed strength, which backprop algorithms are not designed to handle.

Nevertheless, the success of neural networks has revived interest in whether the brain does something like backprop. There are promising experimental hints that it might. A preprint study released in November 2023, for example, found that individual neurons in the brains of mice respond with distinct error signals, one of the key ingredients of backprop-like algorithms long thought missing from living brains.

Scientists working at the boundary between neuroscience and AI have also shown that small changes can make backprop more biologically plausible. One influential study showed that the mirror-image network once thought necessary does not have to be an exact replica of the original for learning to work (albeit more slowly for larger networks). That makes the idea less far-fetched. Others have found ways to do away with mirror networks altogether: if artificial neural networks are given biologically realistic features, such as specialized neurons that can integrate activity signals and error signals in different parts of the cell, then a single network of neurons can run backprop on its own. Some researchers have also modified the backprop algorithm so that it processes spikes rather than continuous signals.
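The first of those relaxations, usually called "feedback alignment" in the literature, can be sketched in a few lines: the error travels backward through fixed random weights rather than through an exact mirror of the forward connections, and a toy network still learns. The task, sizes and seeds below are all invented for illustration and are not the setup of the study itself:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 3  # hidden neurons
# invented toy task: output 1 only when both inputs are 1
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

random.seed(2)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
# fixed random feedback weights: NOT a mirror image of W2,
# and never updated during learning
B = [random.uniform(-1, 1) for _ in range(H)]
lr = 0.5

def forward(x):
    h = [sigmoid(sum(W1[i][j] * x[j] for j in range(2)) + b1[i])
         for i in range(H)]
    y = sigmoid(sum(W2[i] * h[i] for i in range(H)) + b2)
    return h, y

for _ in range(4000):
    for x, t in data:
        h, y = forward(x)
        d_out = y - t
        # the error is carried backward by B rather than by W2
        d_hid = [d_out * B[i] * h[i] * (1 - h[i]) for i in range(H)]
        for i in range(H):
            W2[i] -= lr * d_out * h[i]
            b1[i] -= lr * d_hid[i]
            for j in range(2):
                W1[i][j] -= lr * d_hid[i] * x[j]
        b2 -= lr * d_out
```

Because the forward weights gradually come to align with the fixed feedback weights, the random backward channel ends up carrying useful blame signals, removing the need for a perfect anatomical mirror.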

Other researchers are exploring more radical departures. In a paper published in Nature Neuroscience earlier this year, Yuhang Song of Oxford University and his colleagues described a method that turns backprop on its head. In conventional backprop, error signals lead to adjustments at the synapses, which in turn change the activity of the neurons. The Oxford researchers proposed that the network could first alter the activity of its neurons, and only then adjust the synapses to fit. They call this "prospective configuration".
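The activity-first idea can be caricatured with a chain of two scalar "neurons" and an energy function of the kind used in predictive coding. This is a sketch of the principle only, not the authors' actual model, and every number in it is invented: the hidden activity is first relaxed toward a value consistent with both the input and the desired output, and only then are the synapses updated to fit that settled activity.

```python
# invented numbers throughout: a chain x --w1--> h --w2--> y
x, t = 1.0, 1.0       # input and desired output
w1, w2 = 0.5, 0.5     # synaptic weights

y_before = w2 * (w1 * x)  # prediction before any learning

# Step 1: with the output clamped to the target, relax the hidden
# activity h by gradient descent on an energy combining both
# prediction errors: E = (h - w1*x)^2 + (t - w2*h)^2
h = w1 * x
for _ in range(100):
    dE_dh = 2 * (h - w1 * x) - 2 * w2 * (t - w2 * h)
    h -= 0.1 * dE_dh

# Step 2: only now adjust each synapse to fit the settled activity
lr = 0.2
w1 += lr * (h - w1 * x) * x
w2 += lr * (t - w2 * h) * h

y_after = w2 * (w1 * x)
print(y_before, y_after)  # the prediction moves toward the target
```

In conventional backprop the weights would move first and the activities would change as a consequence; here the settled activity, a compromise between what the input predicts and what the output requires, is what drives the weight change.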

When the authors tested prospective configuration in artificial neural networks, they found that the networks learned in a much more human-like way than models trained with backprop: more robustly, and with less training. They also found that the networks closely matched human behaviour on quite different tasks, such as one that involved learning to move a joystick in response to visual cues.

Learning the hard way

For now, however, all of these theories are just that: theories. Designing experiments to prove that backprop, or any other algorithm, is at work in the brain is surprisingly difficult. For Aran Nayebi and his colleagues at Stanford University, this was a problem that AI itself could solve.

The scientists trained more than a thousand neural networks to perform a variety of tasks, each network using one of four different learning algorithms. They then monitored each network during training, recording neuronal activity and the strength of synaptic connections. Dr. Nayebi and his colleagues then trained a supervisory meta-model to identify the learning algorithm from those recordings. They found that the meta-model could tell which of the four algorithms had been used from recordings of just a few hundred virtual neurons, sampled at intervals during learning. The researchers hope the meta-model could do the same with equivalent recordings from a real brain.
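The logic of that experiment can be caricatured in a few lines: simulate learning trajectories under two different rules, extract a feature from each "recording", and fit a classifier that identifies the rule from the trajectory alone. Everything here, the two rules, the feature and the threshold classifier, is invented for illustration and is far simpler than the study's actual meta-model:

```python
import random

random.seed(0)

def gradient_traj(w0, steps=20, lr=0.3, target=1.0):
    # error-driven rule: updates shrink as the error shrinks
    w, out = w0, []
    for _ in range(steps):
        w += lr * (target - w)
        out.append(w)
    return out

def hebbian_traj(w0, steps=20, lr=0.05, pre=1.0, post=1.0):
    # correlation-driven rule: constant-size updates
    w, out = w0, []
    for _ in range(steps):
        w += lr * pre * post
        out.append(w)
    return out

def feature(traj):
    # ratio of late to early update size tells the two rules apart
    deltas = [abs(b - a) for a, b in zip(traj, traj[1:])]
    return deltas[-1] / (deltas[0] + 1e-12)

# "meta-model": learn a threshold on the feature from labelled runs
train = ([(feature(gradient_traj(random.uniform(-1, 1))), "gradient")
          for _ in range(20)] +
         [(feature(hebbian_traj(random.uniform(-1, 1))), "hebbian")
          for _ in range(20)])
mean = lambda vs: sum(vs) / len(vs)
thr = (mean([f for f, lab in train if lab == "gradient"]) +
       mean([f for f, lab in train if lab == "hebbian"])) / 2

def classify(traj):
    return "hebbian" if feature(traj) > thr else "gradient"

# held-out trajectories are identified from the recordings alone
tests = ([(gradient_traj(random.uniform(-1, 1)), "gradient")
          for _ in range(10)] +
         [(hebbian_traj(random.uniform(-1, 1)), "hebbian")
          for _ in range(10)])
accuracy = mean([classify(tr) == lab for tr, lab in tests])
print(accuracy)
```

The classifier never sees the update rules themselves, only the recorded trajectories, which is the point of the approach: the same trick might work on recordings from a brain whose learning rule is unknown.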

Identifying the algorithm, or algorithms, that the brain uses to learn would be a major step forward for neuroscience. Not only would it shed light on how the body's most mysterious organ works, it could also help scientists build new AI-powered tools to understand specific neural processes. Whether it would make AI algorithms any better is unclear. For Dr. Hinton, at least, backprop is probably better than whatever the brain does.

© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under license. Original content can be found at www.economist.com
