Deep convolutional neural networks (CNNs) provide a potentially rich source of insight for understanding mid-level visual processing in the primate cerebral cortex. Taking the approach of an electrophysiologist to characterizing single CNN units, we found that many units exhibit translation-invariant boundary curvature selectivity approaching that of the best neurons in the mid-level visual area V4. For some of these V4-like units, particularly in the middle layers, the natural images that drove them best were also qualitatively consistent with selectivity for object boundaries. Our results identify a novel image-computable model for V4 boundary curvature selectivity and suggest that such a representation may begin to emerge within the middle layers of an artificial network trained for image categorization, even though boundary information was not provided during training. This raises the general possibility that single-unit feature selectivity learned in CNNs may become a valuable guide for understanding sensory cortex.
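The sketch below illustrates the general flavour of this "electrophysiologist's" approach, assuming nothing about the authors' actual stimuli or analysis: a pretrained torchvision AlexNet, a toy curved-boundary stimulus (a bright disc), and an arbitrarily chosen middle-layer unit stand in for the specific network, shape stimuli, and V4-like units studied in the work.

```python
# Minimal sketch (not the authors' pipeline): record one CNN unit's response
# to a curved-boundary stimulus presented at several spatial offsets, the way
# an electrophysiologist might probe a single neuron's position tolerance.
# Layer choice, stimulus, and unit index are illustrative assumptions.
import torch
import torchvision.models as models

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

activations = {}

def hook(module, inputs, output):
    # Store the feature map of the hooked layer for the current forward pass.
    activations["feat"] = output.detach()

# Hook a middle convolutional layer (features[6] is the third conv layer
# in torchvision's AlexNet).
model.features[6].register_forward_hook(hook)

def curved_boundary_stimulus(offset_x, offset_y, size=224):
    # Toy stimulus: a bright disc whose edge supplies a curved boundary,
    # drawn at a position shifted by (offset_x, offset_y) from image centre.
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    cx, cy = size // 2 + offset_x, size // 2 + offset_y
    disc = ((xs - cx) ** 2 + (ys - cy) ** 2 < 40 ** 2).float()
    return disc.expand(3, size, size).unsqueeze(0)  # 1 x 3 x H x W

# Response of one (channel, y, x) unit to the stimulus at shifted positions.
channel, uy, ux = 17, 6, 6  # arbitrary example unit near the map centre
for dx in (-32, 0, 32):
    with torch.no_grad():
        model(curved_boundary_stimulus(dx, 0))
    resp = activations["feat"][0, channel, uy, ux].item()
    print(f"offset {dx:+d}px -> unit response {resp:.3f}")
```

In this toy setting, a unit whose responses stay similar across the offsets would be showing the kind of translation tolerance the abstract describes; the study itself quantifies this with curvature-selective shape stimuli rather than a simple disc.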