In the brain, neural networks are optimized to allow efficient control of behavior and transmission of information, while maintaining the ability to adapt and reconfigure to changing environments.
Just as a simple cost/benefit calculation can predict the speed at which a cat will break into a gallop, RIKEN CBS researchers are trying to uncover the basic mathematical principles underlying the self-optimization of neural networks.
Free energy principle
The free energy principle rests on a concept called Bayesian inference. Under this scheme, an agent continually updates its beliefs using new incoming sensory data as well as its own previous decisions and their outcomes. The researchers compared the free energy principle with well-established rules that govern how the strength of neural connections within a network is altered by changes in sensory input.
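As a toy illustration of the Bayesian updating this scheme relies on (a hypothetical example, not the researchers' actual free-energy model), consider an agent refining its belief about a binary hidden cause as sensory observations arrive:

```python
# Toy Bayesian update: an agent revises its belief about a hypothesis
# each time a new sensory observation arrives. Illustrative sketch only;
# the likelihood values here are arbitrary assumptions.

def bayes_update(prior, p_obs_if_true, p_obs_if_false, observed):
    """Return posterior P(hypothesis | observation) via Bayes' rule."""
    if observed:
        num = p_obs_if_true * prior
        denom = num + p_obs_if_false * (1 - prior)
    else:
        num = (1 - p_obs_if_true) * prior
        denom = num + (1 - p_obs_if_false) * (1 - prior)
    return num / denom

belief = 0.5  # start with no preference either way
for obs in [True, True, False, True]:  # stream of incoming sensory data
    belief = bayes_update(belief, 0.8, 0.3, obs)
print(round(belief, 3))
```

Each observation nudges the belief toward whichever hypothesis better explains the data, which is the statistical sense in which the agent is "continually updated."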
Once they had established that neural networks theoretically follow the free energy principle, the researchers tested the theory through simulations. Neural networks self-organize by changing the strength of their connections and associating past decisions with future outcomes. In the simulations, networks governed by the free energy principle learned the correct path through a maze by trial and error in a statistically optimal way.
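A minimal sketch of this kind of trial-and-error learning, using a simple reward-weighted update rule rather than the full free-energy formulation (the maze, reward probabilities, and learning rate below are all assumptions for illustration):

```python
import random

# Minimal trial-and-error sketch: an agent learns which of two maze arms
# is more often rewarded by strengthening its expectation for whichever
# choice paid off. Illustrative only; the actual model minimizes
# variational free energy rather than applying a delta rule.

random.seed(0)
values = {"left": 0.5, "right": 0.5}        # learned expectations
reward_prob = {"left": 0.2, "right": 0.8}   # hidden structure of the maze
alpha = 0.1                                 # learning rate

for trial in range(500):
    # epsilon-greedy choice: mostly exploit, occasionally explore
    if random.random() < 0.1:
        choice = random.choice(["left", "right"])
    else:
        choice = max(values, key=values.get)
    reward = 1.0 if random.random() < reward_prob[choice] else 0.0
    # associate the past decision with its outcome
    values[choice] += alpha * (reward - values[choice])

print(max(values, key=values.get))  # the agent settles on the better arm
```

Over repeated trials the agent's expectations converge toward the true reward rates, so its choices come to reflect the maze's actual structure.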
These findings point towards a set of universal mathematical rules describing how neural networks self-optimize. These rules, combined with the researchers' new reverse-engineering technique, could be used to study the decision-making circuitry of people with thought disorders such as schizophrenia and to predict which aspects of their neural networks have been altered.
Another practical application of these universal mathematical rules could be in artificial intelligence, particularly in systems that designers expect to learn, predict, plan, and make decisions efficiently.