Neural Networks Mimic Brain Circuits for AI Advances
06-03-2025 | By Liam Critchley
Key Things to Know:
- AI's Learning Limitations – Traditional AI models struggle with catastrophic forgetting, meaning they can't learn new information without erasing previous knowledge.
- Biological Inspiration – Researchers are designing AI based on corticohippocampal circuits in the brain, which enable lifelong learning without forgetting.
- Hybrid Neural Networks – A new corticohippocampal hybrid neural network (CH-HNN) combines artificial and spiking neural networks to improve continuous learning and energy efficiency.
Artificial intelligence (AI) has been revolutionising society over the last few years, reaching an impressive level of maturity in a short space of time. While not perfect, AI algorithms have become part of our everyday lives for simple research and query tasks, and they have also become an invaluable tool in many hi-tech, healthcare, and manufacturing industries.
Many of these advances have entered our lives so quickly thanks to the development of the generative pre-trained transformer (GPT)―the state-of-the-art large language model (LLM) architecture developed by OpenAI and used in its ChatGPT products.
However, despite many advances, AI systems today still rely on being trained on an entire dataset at once and lack the ability to incrementally add new data without disrupting the model. This leads to catastrophic forgetting in application areas that require the AI model to incrementally learn from new data as it arises.
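To illustrate the problem, the short sketch below (a toy example, not taken from the study) trains a single linear model on one regression task and then on a second, conflicting task. With no mechanism to protect old knowledge, the fit to the first task is overwritten―the essence of catastrophic forgetting.

```python
import numpy as np

# Toy illustration of catastrophic forgetting (not from the paper):
# one linear model trained sequentially on two conflicting regression "tasks"
# loses its fit to the first task after adapting to the second.

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))

task_a_y = 2.0 * x            # Task A: y = 2x
task_b_y = -3.0 * x           # Task B: y = -3x

w = np.zeros((1, 1))          # shared weight, nothing protects old knowledge

def train(w, x, y, lr=0.1, steps=200):
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(x)   # mean-squared-error gradient
        w = w - lr * grad
    return w

def mse(w, x, y):
    return float(np.mean((x @ w - y) ** 2))

w = train(w, x, task_a_y)
print("Task A error after learning A:", mse(w, x, task_a_y))   # near zero

w = train(w, x, task_b_y)     # continue training on Task B only
print("Task A error after learning B:", mse(w, x, task_a_y))   # large: A was overwritten
print("Task B error after learning B:", mse(w, x, task_b_y))   # near zero
```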
On the other hand, biological systems do not suffer from catastrophic forgetting when continuously learning. Biological systems use both specific and generalised memories within corticohippocampal circuits to achieve lifelong learning without catastrophic forgetting. Researchers have recently been inspired by these biological neural circuits and have developed a corticohippocampal circuits-based hybrid neural network that emulates how the brain achieves lifelong learning while mitigating catastrophic forgetting in both task-incremental and class-incremental learning scenarios.
Biological Systems Demonstrate Efficient Incremental Learning
Biological systems are highly efficient at incremental learning with low energy consumption, so there is a lot of potential to learn from them and build similar systems that improve the continuous learning capabilities of AI algorithms. Neuroscientists have found that corticohippocampal circuits underpin the brain's episodic learning and generalisation―both of which are fundamental for lifelong learning.
It has been discovered that the medial prefrontal cortex (mPFC) and the CA1 region of the hippocampus (HPC) extract regularities across learning episodes and can correlate the current learning episode with previous ones to draw comparisons. On the other hand, the dentate gyrus (DG) and CA3 regions within the HPC are thought to encode specific memories relating to specific episodes. Together, these pathways form a recurrent loop between the HPC and mPFC that allows for generalisation across learning episodes and the ability to learn new concepts.
New Hybrid Neural Networks Inspired by Corticohippocampal Circuits
Researchers have developed a corticohippocampal circuits-based hybrid neural network (CH-HNN) that emulates this recurrent loop in the human brain to mitigate catastrophic forgetting in AI systems, so that they can learn incrementally without losing previously learned information. The CH-HNN incorporates both artificial neural networks (ANNs) and spiking neural networks (SNNs), enabling the algorithm to learn new concepts through episode inference.
The CH-HNN operates as a task-agnostic system, which reduces memory overheads and makes continual learning of new information in real-world scenarios more practical. The model also has a low power consumption thanks to the use of SNNs, making it an energy-efficient algorithm that can continually learn in dynamic environments and retain that information for later use.
The combination of ANNs and SNNs mimics the roles of the specific and generalised memory representations within the biological neural circuits. ANNs provide high spatial complexity and are analogous to the mPFC-CA1 circuits responsible for integrating regularities between learning episodes. SNNs, on the other hand, have sparse firing rates and low power consumption, so they play a role analogous to the DG-CA3 circuits, incrementally encoding new concepts and forming new memories.
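As a rough illustration of this division of labour, the hypothetical sketch below pairs a dense, sigmoid-activated branch (standing in for the generalised ANN pathway) with a leaky integrate-and-fire branch that emits sparse spikes (standing in for the specific SNN pathway). The actual CH-HNN architecture and the way its two pathways interact differ from this simplified gating.

```python
import numpy as np

# Hypothetical sketch of the ANN/SNN division of labour described above.
# The real CH-HNN differs; here an ANN branch produces a dense, episode-level
# modulation signal, while a leaky integrate-and-fire SNN branch emits sparse
# spikes that encode the specific input.

rng = np.random.default_rng(1)
n_in, n_hidden, n_steps = 16, 32, 20

w_ann = rng.normal(0, 0.3, (n_in, n_hidden))     # ANN branch (mPFC-CA1 analogue)
w_snn = rng.normal(0, 0.3, (n_in, n_hidden))     # SNN branch (DG-CA3 analogue)

def ann_branch(x):
    """Dense, non-spiking pass: returns a modulation signal in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(x @ w_ann)))    # sigmoid activation

def snn_branch(x, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire neurons driven by a constant input current."""
    v = np.zeros(n_hidden)                        # membrane potentials
    spike_counts = np.zeros(n_hidden)
    for _ in range(n_steps):
        v = decay * v + x @ w_snn                 # leak plus input current
        spikes = (v >= threshold).astype(float)   # fire when threshold is crossed
        v = np.where(spikes > 0, 0.0, v)          # reset fired neurons
        spike_counts += spikes
    return spike_counts / n_steps                 # sparse firing rates

x = rng.normal(0, 1, n_in)
modulation = ann_branch(x)                        # episode-level context
rates = snn_branch(x)                             # specific, sparse code
combined = modulation * rates                     # ANN signal gates the SNN output
print("fraction of active SNN units:", float(np.mean(rates > 0)))
```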
The researchers also incorporated metaplasticity into the hybrid model, allowing it to simulate dynamic changes in synaptic learning capabilities as accumulated knowledge grew over time. This ensured that the synaptic weights associated with similar episodes were preserved as the model grew, so they were not forgotten, mimicking the interactions between DG-CA3 circuits and the lateral parietal cortex (LPC) that help reduce memory errors in the brain.
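The exact plasticity rule used in the CH-HNN is not described here, but the general idea can be sketched as a per-synapse learning rate that shrinks as a synapse accumulates "importance" across episodes, so weights that encode earlier episodes drift less when new, conflicting data arrives. The helper below is a hypothetical illustration of that principle, not the paper's rule.

```python
import numpy as np

# Hedged sketch of a metaplasticity-style update (not the paper's exact rule):
# each synapse tracks how much it has been used across episodes, and its
# effective learning rate shrinks as that accumulated "importance" grows, so
# weights that encode earlier episodes drift less during new learning.

def metaplastic_update(w, grad, importance, base_lr=0.1):
    """Scale the step per synapse: heavily used synapses become less plastic."""
    effective_lr = base_lr / (1.0 + importance)       # plasticity decays with use
    w = w - effective_lr * grad
    importance = importance + np.abs(grad)            # accumulate usage over episodes
    return w, importance

w = np.zeros(4)
importance = np.zeros(4)

# Episode 1 produces large gradients on the first two synapses...
w, importance = metaplastic_update(w, np.array([1.0, 1.0, 0.0, 0.0]), importance)
# ...so a later, conflicting episode moves those synapses less than the fresh ones.
w, importance = metaplastic_update(w, np.array([-1.0, -1.0, -1.0, -1.0]), importance)
print(w)   # the first two weights keep more of what episode 1 wrote into them
```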
Compared to other methods for continual learning, the CH-HNN exhibits better performance and a better balance between learning new information (plasticity) and retaining previous knowledge (stability). Both are critical to ensure that the algorithm can keep learning new information without forgetting the old. The CH-HNN also demonstrated that it could transfer information from related learning episodes to different datasets, showing that the model can perform knowledge transfer in many potential learning scenarios.
As well as developing the fundamentals of the model, the researchers also tested it in real-world scenarios where it could continually learn. While there are many application scenarios where the model could be used, when the research team trialled it on neuromorphic hardware, the SNNs significantly reduced the power consumption of the system, showing that the model could be deployed in dynamic learning environments while improving energy efficiency.
Conclusion
Overall, the approach provides an indirect way of prioritising previously gained knowledge to facilitate the learning of new information, mimicking the lifelong learning mechanisms found in the human brain. It is still not at the level of the human brain, which can learn new concepts from just a few examples, but integrating few-shot learning with continual learning could help the algorithms learn more quickly, as a human brain does. Nevertheless, the researchers have developed a model that artificially simulates and simplifies the brain's corticohippocampal recurrent circuits to improve the performance and adaptability of continual learning algorithms in real-world applications. It is a start towards more advanced AI algorithms that can continuously learn new concepts more effectively.