Abstract:
We have come to think of neural networks from a bottom-up perspective. Each neuron is characterized by an input/output function, and a network's computational abilities emerge as a property of the collective. While immensely successful (witness the recent deep-learning boom), this view has also created several persistent puzzles in theoretical neuroscience. The first puzzle is spikes, which have largely remained a nuisance rather than a feature of neural systems. The second is learning, which has been hard or impossible without violating the constraints of local information flow. The third is robustness to perturbations, a ubiquitous feature of real neural systems that is often ignored in neural network models. I am going to argue that a resolution to these puzzles comes from a top-down perspective. We make two key assumptions. First, we assume that the effective output of a neural network can be extracted via linear readouts from the population. Second, we assume that a network seeks to bound the error on a given computation, and that each neuron's voltage represents part of this global error. Spikes are fired to keep this error in check.
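To give a flavour of the error-bounding idea, here is a minimal toy sketch (not the speaker's actual model; the weights, thresholds, leak rate, and target signal below are all illustrative assumptions). A population of neurons with fixed decoding weights tracks a one-dimensional signal; each neuron's "voltage" is its projection of the global coding error, and a spike fires whenever that error exceeds the neuron's threshold:

```python
import numpy as np

# Toy spike-coding sketch (illustrative parameters, not the speaker's model):
# N neurons with fixed decoding weights gamma track a 1-D signal x(t) via a
# leaky readout x_hat. Each neuron's voltage is its share of the global
# coding error; a spike is fired whenever that error exceeds a threshold.
N, dt, T = 20, 1e-3, 1.0
steps = int(T / dt)
gamma = np.tile([0.1, -0.1], N // 2)      # decoding weights (assumption)
thresh = gamma ** 2 / 2                   # per-neuron spiking threshold
lam = 10.0                                # leak rate of the readout

t = np.arange(steps) * dt
x = np.sin(2 * np.pi * 2 * t)             # target signal to be tracked
x_hat = 0.0
readout = np.zeros(steps)
spikes = np.zeros((steps, N), dtype=bool)

for k in range(steps):
    # voltage = this neuron's projection of the current global error
    V = gamma * (x[k] - x_hat)
    i = int(np.argmax(V - thresh))        # at most one spike per time step
    if V[i] > thresh[i]:
        x_hat += gamma[i]                 # the spike corrects the readout
        spikes[k, i] = True
    x_hat -= dt * lam * x_hat             # leaky decay of the readout
    readout[k] = x_hat

mse = float(np.mean((x - readout) ** 2))  # tracking error stays small
```

Because a spike fires only when the error outgrows a threshold, the readout error stays bounded (here within roughly half a decoding weight), and which neuron fires at any moment depends on the shared error signal rather than on a fixed schedule, giving the irregular, asynchronous firing discussed below.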
These assumptions yield efficient networks that exhibit irregular and asynchronous spike trains, balance of excitatory and inhibitory currents, and robustness to perturbations. I will discuss the implications of the theory, prospects for experimental tests, and future challenges.