Dendritic computation and deep learning in the brain

Modern artificial intelligence (AI) is inspired by its biological example, and the unprecedented success of AI, in turn, inspires the modeling of cognitive processes. Yet, when looking into the brain, additional biological structures become apparent, such as dendritic morphologies, interneuron circuits, error representations, top-down signaling and various gating structures. I will review these biological elements and show how they integrate into an 'energy-based' theory of cortical computation. The theory is inspired by the least-action principle in physics, from which all dynamical equations of motion are derived. From our Neuronal Least-Action (NLA) principle we derive the neuronal dynamics, including the synaptic plasticity that yields gradient-descent learning on behavioural costs. Dendrites and cortical microcircuits, according to this principle, implement a real-time version of error backpropagation based on prospective errors. The NLA principle states that cortical activities follow a path that minimizes prospective errors across all neurons in the network. Prospective errors in output neurons relate to behavioural errors, while prospective errors in deep network neurons relate to errors in the neuron-specific dendritic prediction of somatic firing. I will explain how these ideas relate to cortical attention mechanisms and context-dependent cortical gating.
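To make the central claim concrete, the sketch below illustrates, in a highly simplified form, how plasticity can be read as gradient descent on a mismatch energy: output neurons carry a behavioural error, and hidden-layer errors quantify the mismatch between somatic activity and its dendritic prediction. This is an illustrative toy (a static two-layer network, with the hidden error computed by standard backpropagation standing in for the prospective, microcircuit-based error computation of the NLA theory); all names and parameters are assumptions, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: input -> hidden -> output, with rates r = tanh(u).
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # input-to-hidden weights (assumed init)
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))  # hidden-to-output weights (assumed init)

def phi(u):
    """Somatic activation function (illustrative choice)."""
    return np.tanh(u)

def step(x, target, W1, W2, eta=0.1, beta=0.2):
    # Forward pass: dendritic predictions u of somatic potentials.
    u1 = W1 @ x
    r1 = phi(u1)
    u2 = W2 @ r1
    r2 = phi(u2)

    # Output error: behavioural cost nudges somatic activity toward the target.
    e2 = beta * (target - r2)

    # Hidden error: mismatch between somatic activity and its dendritic
    # prediction; here computed by backprop as a stand-in for the
    # prospective-error mechanism of the theory.
    e1 = (W2.T @ e2) * (1.0 - r1**2)

    # Plasticity: gradient descent on the mismatch energy,
    # driven by the local error and the presynaptic rate.
    W2 += eta * np.outer(e2, r1)
    W1 += eta * np.outer(e1, x)
    return W1, W2, float(np.sum((target - r2) ** 2))

x = rng.normal(size=n_in)
target = np.array([0.5, -0.5])
losses = []
for _ in range(200):
    W1, W2, loss = step(x, target, W1, W2)
    losses.append(loss)
print(f"behavioural cost: before {losses[0]:.4f}, after {losses[-1]:.4f}")
```

The point of the sketch is only the structure of the learning rule: each weight update is the product of a neuron-local error and a presynaptic rate, so the same rule descends the behavioural cost at the output and the dendritic prediction error in the deep layer.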