How the brain develops: a new way to shed light on cognition

Summary: A new study in computational neuroscience sheds light on how the brain’s cognitive abilities develop and could help shape further AI research.

Source: University of Montreal

A new study presents a novel neurocomputational model of the human brain that could shed light on how the brain develops complex cognitive abilities and advance research into neural artificial intelligence.

Published on September 19, the study was carried out by an international group of researchers from the Institut Pasteur and Sorbonne University in Paris, the CHU Sainte-Justine, Mila – Quebec Institute of Artificial Intelligence and the University of Montreal.

The model, which made the cover of the journal Proceedings of the National Academy of Sciences of the United States of America (PNAS), describes neural development at three hierarchical levels of information processing:

  • the first sensory-motor level explores how the brain’s internal activity learns patterns from perception and associates them with action;
  • the cognitive level examines how the brain contextually combines these patterns;
  • finally, the conscious level considers how the brain dissociates itself from the outside world and manipulates learned patterns (via memory) that are no longer accessible to perception.

The team’s research provides clues to the fundamental mechanisms underlying cognition through the model’s focus on the interplay between two fundamental types of learning: Hebbian learning, associated with statistical regularity (i.e., repetition – “neurons that fire together, wire together”), and reinforcement learning, associated with reward and the neurotransmitter dopamine.
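The distinction between the two learning rules can be illustrated with a toy sketch. This is not the study’s actual model; it is a minimal, hypothetical example assuming simple rate-coded neurons, in which the Hebbian rule acts locally on co-activity and the reinforcement rule gates the same correlational term with a scalar reward signal standing in for dopamine:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(4, 4))  # synaptic weights (post x pre)

def hebbian_update(w, pre, post, lr=0.01):
    """Local rule: strengthen synapses whose pre- and post-synaptic
    neurons are co-active ("fire together, wire together")."""
    return w + lr * np.outer(post, pre)

def reward_modulated_update(w, pre, post, reward, lr=0.01):
    """Global rule: the same correlational term, but gated by a
    scalar reward signal (a stand-in for dopamine)."""
    return w + lr * reward * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0, 0.0])   # presynaptic activity
post = np.array([0.0, 1.0, 0.0, 1.0])  # postsynaptic activity
w_hebb = hebbian_update(w, pre, post)
w_rl = reward_modulated_update(w, pre, post, reward=1.0)
```

With a reward of zero the reinforcement rule leaves the weights untouched, whereas the Hebbian rule always strengthens co-active pairs; that gating is the essential difference.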

The model solves three tasks of increasing complexity across these levels, from visual recognition to the cognitive manipulation of conscious percepts. Each time, the team introduced a new core mechanism to allow it to progress.

The results highlight two fundamental mechanisms for the multilevel development of cognitive abilities in biological neural networks:

  • synaptic epigenesis, with Hebbian learning at the local scale and reinforcement learning at the global scale;
  • and a self-organized dynamic, through spontaneous activity and a balanced excitatory/inhibitory ratio of neurons.
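The second mechanism, self-organized dynamics in a network with balanced excitation and inhibition, can be illustrated with a toy rate model. This is a hypothetical sketch, not the paper’s network: a few inhibitory neurons carry stronger synapses so that, in expectation, inhibition cancels excitation and spontaneous noise keeps the network active without runaway firing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# Random recurrent weights: 80% excitatory (positive) columns,
# 20% inhibitory columns scaled by -4 so E and I roughly cancel.
w = np.abs(rng.normal(0.0, 1.0, (n, n)))
inhibitory = rng.random(n) < 0.2
w[:, inhibitory] *= -4.0

rates = rng.random(n)  # initial firing rates
for _ in range(200):
    noise = rng.normal(0.0, 0.1, n)           # spontaneous background input
    drive = w @ rates / np.sqrt(n) + noise    # balanced recurrent drive
    rates = np.tanh(np.clip(drive, 0, None))  # non-negative, bounded rates
```

Because excitatory and inhibitory inputs approximately cancel, the net drive fluctuates around zero and activity stays in an intermediate regime rather than saturating or dying out.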
[Image caption: The model solves three tasks of increasing complexity at all these levels, from visual recognition to the cognitive manipulation of conscious percepts. Image is in the public domain.]

“Our model demonstrates how neuro-AI convergence highlights the biological mechanisms and cognitive architectures that can fuel the development of the next generation of artificial intelligence and even ultimately lead to artificial consciousness,” said Guillaume Dumas, member of the team, assistant professor of computational psychiatry at UdeM and principal investigator at the CHU Sainte-Justine Research Center.

Reaching this stage may require integrating the social dimension of cognition, he added. Researchers are now seeking to integrate the biological and social dimensions at play in human cognition. The team has already launched the first simulation of two interacting whole brains.

According to the team, grounding future computational models in biological and social realities will not only continue to shed light on the fundamental mechanisms underlying cognition, but will also help give artificial intelligence a unique bridge toward the only known system endowed with advanced social consciousness: the human brain.

About this Computational Neuroscience Research News

Author: Julie Gazaille
Source: University of Montreal
Contact: Julie Gazaille – University of Montreal
Image: Image is in public domain

Original research: Free access.
“Multilevel development of cognitive abilities in an artificial neural network” by Guillaume Dumas et al. PNAS


Multilevel development of cognitive abilities in an artificial neural network

Several neural mechanisms have been proposed to account for the formation of cognitive abilities through postnatal interactions with the physical and sociocultural environment.

Here, we introduce a three-level computational model of information processing and acquisition of cognitive abilities. We propose minimal architectural requirements for building these levels and examine how parameters affect their performance and relationships.

The first, sensory-motor level manages local non-conscious processing, here during a visual classification task. The second, cognitive level globally integrates information from multiple local processors through long-range connections and synthesizes it in a global, but still non-conscious, way. The third and cognitively highest level processes information holistically and consciously. It is based on Global Neuronal Workspace (GNW) theory and is called the conscious level.

We use the tracking and delay conditioning tasks to challenge the second and third levels, respectively. The results first highlight the need for epigenesis through the selection and stabilization of synapses at local and global scales to allow the network to solve the first two tasks.

Globally, dopamine appears to be required for proper credit assignment despite the temporal lag between perception and reward. At the third level, the presence of interneurons becomes necessary to maintain a self-sustained representation within the GNW in the absence of sensory input.
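One standard way to bridge such a perception–reward delay is an eligibility trace: co-activity leaves a decaying memory at each synapse, and a later dopamine-like reward signal converts whatever trace remains into a weight change. The sketch below is illustrative, not the paper’s implementation:

```python
import numpy as np

n = 5
w = np.zeros(n)      # synaptic weights
trace = np.zeros(n)  # eligibility trace per synapse
decay = 0.9          # how long past co-activity stays "eligible"
lr = 0.1

# Co-activity occurs at step 0; the reward (dopamine burst) arrives at step 3.
for t in range(5):
    coactivity = np.ones(n) if t == 0 else np.zeros(n)
    trace = decay * trace + coactivity  # remember recent activity
    reward = 1.0 if t == 3 else 0.0     # delayed dopamine signal
    w += lr * reward * trace            # credit discounted by the delay
```

The earlier the co-activity relative to the reward, the more the trace has decayed, so credit is assigned in proportion to temporal proximity.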

Finally, while balanced spontaneous intrinsic activity facilitates epigenesis at local and global scales, a balanced excitatory/inhibitory ratio increases performance. We discuss the plausibility of the model in terms of neurodevelopment and artificial intelligence.
