Brain-inspired AI learns like humans

Summary: Today’s AI can read, talk, and analyze data, but it still has critical limitations. NeuroAI researchers designed a new AI model inspired by the efficiency of the human brain.

This model allows AI neurons to receive feedback and adapt in real time, improving learning and memory processes. The innovation could lead to a new generation of more efficient and accessible AI, bringing AI and neuroscience closer together.

Key Facts:

  1. Inspired by the brain: The new AI model is based on how human brains efficiently process and adapt to data.
  2. Real-time adjustment: AI neurons can receive feedback and adapt on the fly, improving efficiency.
  3. Potential impact: This breakthrough could pioneer a new generation of AI that learns like humans, benefiting both AI and neuroscience.

Source: CSHL

It reads. It talks. It collects mountains of data and recommends business decisions. Today’s artificial intelligence may seem more human than ever. However, AI still has some critical shortcomings.

“As impressive as ChatGPT and all these current AI technologies are, when it comes to interacting with the physical world, they are still very limited. Even for the things they do, like solving math problems and writing essays, they need billions and billions of training examples before they can do them well,” explains NeuroAI scientist Kyle Daruwalla of Cold Spring Harbor Laboratory (CSHL).

Daruwalla is looking for new, unconventional ways to design AI that can overcome such computational obstacles. And maybe he just found one.

The new machine learning model provides evidence for an as-yet unproven theory that correlates working memory with learning and academic performance. Credit: Neuroscience News

The key was data movement. Today, most of the energy consumed by modern computing comes from moving data around. In artificial neural networks, which consist of billions of connections, data can have a long way to travel.

To find a solution, Daruwalla looked for inspiration in one of the most computationally powerful and energy-efficient machines in existence: the human brain.

Daruwalla designed a new way for AI algorithms to move and process data much more efficiently, based on the way our brains absorb new information. The design allows individual AI ‘neurons’ to receive feedback and adapt on the fly rather than waiting for an entire circuit to update at the same time. This way, data doesn’t have to travel as far and is processed in real time.
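As a rough illustration of that contrast, the sketch below is not Daruwalla’s actual rule; the layer sizes, tanh activations, learning rate, and scalar feedback signal are assumptions made for the example. It only shows the structural idea: each layer adapts immediately from a locally available feedback signal as a sample passes through, instead of waiting for an end-to-end backward pass.

```python
# Minimal sketch of local, on-the-fly updates (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 16, 4]  # toy network: input -> hidden -> output
weights = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes, sizes[1:])]
lr = 0.01

def forward_with_local_updates(x, feedback):
    """Propagate one sample forward; every layer adjusts its own weights
    immediately, so no error signal has to travel back through the network."""
    pre = x
    for i, W in enumerate(weights):
        post = np.tanh(W @ pre)
        # Local, immediate change gated by a feedback signal available to this layer.
        weights[i] = W + lr * feedback * np.outer(post, pre)
        pre = post
    return pre

output = forward_with_local_updates(rng.normal(size=sizes[0]), feedback=0.5)
```

In a standard backpropagation setup, by contrast, nothing changes until the full forward and backward passes have finished, which is exactly the waiting and data movement the design aims to avoid.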

“In our brains, our connections are constantly changing and adapting,” says Daruwalla. “It’s not like you put everything on hold, adapt, and then go back to being yourself.”

The new machine learning model provides evidence for an as-yet unproven theory that correlates working memory with learning and academic performance. Working memory is the cognitive system that allows us to stay on task while recalling stored knowledge and experiences.

“There are theories in neuroscience about how working memory circuits can help facilitate learning. But there is nothing as concrete as our rule that actually connects the two.

“And that was one of the nice things we encountered here. The theory led to a rule in which adjusting each synapse individually required this working memory to sit alongside it,” says Daruwalla.

Daruwalla’s design could pioneer a new generation of AI that learns like us. That would not only make AI more efficient and accessible, it would also bring neuroAI full circle. Neuroscience has been feeding AI valuable data since long before ChatGPT uttered its first digital syllable. It looks like AI will soon return the favor.

About this artificial intelligence research news

Author: Sara Giarnieri
Source: CSHL
Contact: Sara Giarnieri – CSHL
Image: The image is credited to Neuroscience News

Original research: Open access.
“Information Bottleneck-Based Hebbian Learning Rule Naturally Links Working Memory and Synaptic Updates” by Kyle Daruwalla et al. Frontiers in Computational Neuroscience


Abstract

Information bottleneck-based Hebbian learning rule naturally links working memory and synaptic updates

Deep neural feedforward networks are effective models for a wide range of problems, but training and deploying such networks comes at a significant energy cost. Spiking neural networks (SNNs), which are modeled after biologically realistic neurons, offer a potential solution if properly deployed on neuromorphic computing hardware.

Yet many applications train SNNs offline, and performing network training directly on neuromorphic hardware is an ongoing research problem. The main obstacle is that back-propagation, which makes it possible to train such deep artificial networks, is biologically implausible.

Neuroscientists aren’t sure how the brain would propagate an accurate error signal backward through a network of neurons. Recent progress addresses part of this issue, such as the weight transport problem, but a complete solution remains elusive.

In contrast, new learning rules based on the information bottleneck (IB) train each layer of a network independently, circumventing the need to propagate errors across layers. Instead, the propagation is implicit due to the feedforward connectivity of the layers.

These rules take the form of a three-factor Hebbian update. A global error signal modulates local synaptic updates within each layer. Unfortunately, the global signal for a given layer requires the simultaneous processing of multiple samples, and the brain sees only one sample at a time.
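Written out, such an update multiplies the locally available presynaptic and postsynaptic activity by the layer-wide modulatory signal. A minimal sketch of this general form follows; the names, learning rate, and scalar modulator are illustrative assumptions, not the paper’s IB-derived signal.

```python
import numpy as np

def three_factor_update(W, pre, post, modulator, lr=0.01):
    """One three-factor Hebbian step: presynaptic activity (factor 1) and
    postsynaptic activity (factor 2), gated by a layer-wide modulatory
    signal (factor 3)."""
    return W + lr * modulator * np.outer(post, pre)
```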

We propose a new three-factor update rule in which the global signal correctly captures information about different samples through an auxiliary memory network. This auxiliary network can be trained a priori, independently of the dataset used with the primary network.
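A minimal sketch of how an auxiliary memory might deliver that modulatory signal one sample at a time is shown below; the fixed random recurrent “memory” and the layer shapes are stand-in assumptions, not the pretrained memory network described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMemory:
    """Toy recurrent memory: a fixed recurrent network summarizes the stream
    of layer activity one sample at a time and reads out a scalar signal."""
    def __init__(self, in_dim, mem_dim=32):
        self.W_in = rng.normal(0, 0.2, (mem_dim, in_dim))
        self.W_rec = rng.normal(0, 0.2, (mem_dim, mem_dim))
        self.readout = rng.normal(0, 0.2, mem_dim)
        self.state = np.zeros(mem_dim)

    def step(self, activity):
        # Fold the current sample's layer activity into the memory state,
        # then read out the modulatory signal for the local update.
        self.state = np.tanh(self.W_in @ activity + self.W_rec @ self.state)
        return float(self.readout @ self.state)

# Hypothetical usage with a single layer and a three-factor update:
layer_W = rng.normal(0, 0.1, (4, 8))
memory = TinyMemory(in_dim=4)
x = rng.normal(size=8)
post = np.tanh(layer_W @ x)
modulator = memory.step(post)                      # one sample, no batch needed
layer_W = layer_W + 0.01 * modulator * np.outer(post, x)
```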

We demonstrate comparable performance to baselines in image classification tasks. Interestingly, unlike back-propagation-like schemes where there is no link between learning and memory, our rule presents a direct connection between working memory and synaptic updates. To our knowledge, this is the first rule that makes this connection explicit.

We explore these implications in initial experiments investigating the effect of memory capacity on learning performance. Going forward, this work suggests an alternative view of learning, where each layer balances memory-informed compression with task performance.

This view naturally incorporates several important aspects of neural computation, including memory, efficiency, and locality.
