Workshop: Theory towards Brains, Machines and Minds

Title: Optimization principles for biological neural networks

Takuya Isomura, RIKEN Center for Brain Science


Knowing the normative purpose of adaptation or optimization in a neural network is essential for understanding the intelligence of biological organisms. When the environment is characterized as a discrete state space, a class of biologically plausible cost functions for neural networks, in which the same cost function is minimized by both neural activity and synaptic plasticity, can be cast as a variational bound on model evidence (or variational free energy) under an implicit generative model [1]. This equivalence implies that any neural network minimizing such a cost function implicitly performs variational Bayesian inference, and that variational free energy minimization is an apt principle for a canonical neural network. As the theory predicts, we showed that in vitro neural networks receiving input stimuli generated from hidden sources perform causal inference, or blind source separation, through activity-dependent synaptic plasticity, that is, by minimizing variational free energy [2,3].
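To make the discrete-state case concrete, the following is a minimal sketch in NumPy, assuming a hypothetical two-state generative model with likelihood A and prior D; these names, the toy dimensions, and the mirror-descent belief update are illustrative choices, not the networks of [1-3]. Minimizing variational free energy over the belief q drives it to the exact Bayesian posterior, and F converges to the negative log model evidence.

```python
# Toy variational free energy minimization for a discrete-state generative
# model (illustrative sketch; not the in vitro networks of [2,3]).
import numpy as np

rng = np.random.default_rng(0)

K, M = 2, 4                                   # hidden states, observation levels
A = rng.dirichlet(np.ones(M), size=K).T       # likelihood p(o|s), shape (M, K)
D = np.array([0.5, 0.5])                      # prior p(s)

def free_energy(q, o):
    """F(q) = E_q[ln q(s) - ln p(o, s)] = KL(q || p(s|o)) - ln p(o)."""
    log_joint = np.log(A[o]) + np.log(D)      # ln p(o, s) as a function of s
    return np.sum(q * (np.log(q) - log_joint))

o = 1                                         # an observed outcome
q = np.full(K, 1.0 / K)                       # initial belief over hidden states

for _ in range(50):
    log_joint = np.log(A[o]) + np.log(D)
    grad = np.log(q) + 1.0 - log_joint        # dF/dq (up to the simplex constraint)
    q *= np.exp(-0.1 * grad)                  # multiplicative (mirror-descent) step
    q /= q.sum()                              # stay on the probability simplex

exact = A[o] * D / (A[o] * D).sum()           # exact posterior p(s|o)
print("q =", q, "exact posterior =", exact)
print("F(q) =", free_energy(q, o), "-ln p(o) =", -np.log((A[o] * D).sum()))
```

At the fixed point the gradient is constant across states, so q is proportional to exp(ln p(o, s)), i.e., the Bayesian posterior; this is the sense in which a network minimizing such a cost function implicitly performs inference.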

Beyond the discrete state space, causal inference in a continuous state space is more difficult for biological neural networks to achieve, because conventional algorithms require biologically implausible nonlocal information to update synapses. Inspired by our in vitro experiments, we developed a biologically plausible local learning algorithm, the error-gated Hebbian rule (EGHR) [4]. The EGHR is derived by gradient descent on a cost function different from variational free energy, and it performs principal component analysis and independent component analysis (ICA) simultaneously within a single-layer neural network. Unlike conventional algorithms, the EGHR can perform ICA in an environment with context switching by extracting features shared across contexts. It can thereby generalize past learning and perform causal inference, or source separation, even in previously unexperienced contexts [5], suggesting that the EGHR may capture an organism's generalization capability. Possible neurophysiological implementations of this algorithm will be discussed. Our approach, based on finding biologically plausible cost functions, is potentially useful both for understanding the intelligence of biological organisms and for creating biologically inspired artificial general intelligence.
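As a concrete illustration of locality, below is a hedged sketch of an error-gated Hebbian ICA update in the spirit of the EGHR [4]. The smooth prior G(u) = log cosh(u) (so g = tanh), the gating constant E0, and the learning rate are illustrative assumptions, not necessarily the settings of the original paper.

```python
# Hedged sketch of an error-gated Hebbian rule for ICA (in the spirit of [4];
# the prior, gating constant, and learning rate here are illustrative).
import numpy as np

rng = np.random.default_rng(1)

n_src, n_obs, T = 2, 2, 100_000
S = rng.laplace(size=(T, n_src))                # independent non-Gaussian sources
A_mix = rng.normal(size=(n_obs, n_src))         # unknown mixing matrix
X = S @ A_mix.T                                 # observed mixtures x = A s

W = rng.normal(scale=0.1, size=(n_src, n_obs))  # synaptic weights to learn
eta, E0 = 1e-3, 2.0                             # learning rate; gating constant (sets output scale)

def g(u):                                       # g = G' for G(u) = log cosh(u)
    return np.tanh(u)

def E(u):                                       # E(u) = sum_i G(u_i), via stable log cosh
    return np.sum(np.logaddexp(u, -u) - np.log(2.0))

for t in range(T):
    x = X[t]
    u = W @ x                                   # neural outputs u = W x
    # Local three-factor update: a single global scalar error (E0 - E(u))
    # gates the Hebbian product g(u_i) * x_j at every synapse.
    W += eta * (E0 - E(u)) * np.outer(g(u), x)

# After learning, W @ A_mix should approximate a scaled permutation matrix,
# i.e., each output recovers one source up to sign and scale.
print(np.round(W @ A_mix, 2))
```

The point of the sketch is locality: each synapse W_ij updates using only its presynaptic input x_j, its postsynaptic output u_i, and one global scalar error, with no access to other synapses' weights.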


References:
[1] Isomura, T. & Friston, K. Reverse engineering neural networks to characterise their cost functions. bioRxiv:654467 (2019).
[2] Isomura, T., Kotani, K. & Jimbo, Y. Cultured cortical neurons can perform blind source separation according to the free-energy principle. PLoS Computational Biology 11(12):e1004643 (2015).
[3] Isomura, T. & Friston, K. In vitro neural networks minimise variational free energy. Scientific Reports 8:16926 (2018).
[4] Isomura, T. & Toyoizumi, T. A local learning rule for independent component analysis. Scientific Reports 6:28073 (2016).
[5] Isomura, T. & Toyoizumi, T. Multi-context blind source separation by error-gated Hebbian rule. Scientific Reports 9:7127 (2019).