Efficient codes and balanced networks
Abstract:
We have come to think of neural networks from a bottom-up perspective. Each neuron is characterized by an input/output function, and a network's computational abilities emerge as a property of the collective. While immensely successful (see the recent deep-learning craze), this view has also created several persistent puzzles in theoretical neuroscience. The first puzzle is spikes, which have largely remained a nuisance rather than a feature of neural systems. The second puzzle is learning, which has proven hard or impossible without violating the constraints of local information flow. The third puzzle is robustness to perturbations, which is a ubiquitous feature of real neural systems but is often ignored in neural network models. I am going to argue that a resolution to these puzzles comes from a top-down perspective. We make two key assumptions. First, we assume that the effective output of a neural network can be extracted via linear readouts from the population. Second, we assume that a network seeks to bound the error on a given computation, and that each neuron's voltage represents part of this global error. Spikes are fired to keep this error in check.
These assumptions yield efficient networks that exhibit irregular and asynchronous spike trains, balance of excitatory and inhibitory currents, and robustness to perturbations. I will discuss the implications of the theory, prospects for experimental tests, and future challenges.
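To make the two assumptions concrete, the sketch below simulates a toy spike-coding network along these lines: the population output is a linear readout of filtered spike trains, each neuron's "voltage" is the projection of the global readout error onto that neuron's decoding weights, and a spike is fired whenever that error grows too large. This is a minimal illustration, not the speaker's implementation; the decoding weights `D`, decay rate `lam`, threshold choice (half the squared norm of each decoding weight, a common convention in the spike-coding literature), and the sinusoidal target signal are all illustrative assumptions.

```python
import numpy as np

# --- illustrative parameters (not from the talk) ---
rng = np.random.default_rng(0)
N, K = 50, 2            # number of neurons, signal dimensions
dt, T = 1e-3, 2.0       # time step (s), simulation duration (s)
lam = 10.0              # decay rate of the filtered spike trains (1/s)
steps = int(T / dt)

# Decoding weights: column D[:, i] is neuron i's contribution to the readout
D = 0.1 * rng.standard_normal((K, N))
thresh = 0.5 * np.sum(D**2, axis=0)   # assumed threshold: ||D_i||^2 / 2

# Target signal to be represented (slow sinusoids, purely for illustration)
t = np.arange(steps) * dt
x = np.stack([np.sin(2 * np.pi * 1.0 * t), np.cos(2 * np.pi * 0.5 * t)])

r = np.zeros(N)               # filtered spike trains
x_hat = np.zeros((K, steps))  # linear readout of the population
spikes = np.zeros((N, steps), dtype=bool)

for k in range(steps):
    # Assumption 1: the network output is a linear readout, x_hat = D r
    x_hat[:, k] = D @ r
    # Assumption 2: each neuron's voltage is the projection of the global
    # readout error onto its own decoding weights
    err = x[:, k] - x_hat[:, k]
    V = D.T @ err
    # Greedy spike rule: the neuron whose voltage most exceeds its threshold
    # fires, which keeps the readout error bounded
    i = np.argmax(V - thresh)
    if V[i] > thresh[i]:
        spikes[i, k] = True
        r[i] += 1.0
    # Leak on the filtered spike trains
    r *= np.exp(-lam * dt)

print("mean firing rate per neuron (Hz):", spikes.sum() / (N * T))
print("final readout error:", np.linalg.norm(x[:, -1] - D @ r))
```

Run as written, the readout error stays bounded by a quantity on the order of the thresholds, and which particular neuron fires at any moment is largely arbitrary, which is one way to see how irregular, asynchronous spike trains and robustness to perturbations can emerge from the error-bounding rule.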
Date: 16 May 2017, 17:00
Venue: Le Gros Clark Building, off South Parks Road OX1 3QX
Venue Details: Lecture Theatre
Speaker: Prof Christian Machens (Champalimaud Centre for the Unknown)
Organising department: Department of Physiology, Anatomy and Genetics (DPAG)
Organiser: Ines Barreiros (University of Oxford)
Organiser contact email address: ines.barreiros@chch.ox.ac.uk
Host: Ines Barreiros (University of Oxford)
Part of: Cortex Club
Booking required?: Not required
Audience: Members of the University only
Editor: Ines Barreiros