Sensory systems have evolved impressive abilities to process complex natural scenes in a myriad of environments. In audition, the brain’s ability to seamlessly solve the cocktail party problem remains unmatched by machines, despite a long history of intensive research in diverse fields, ranging from neuroscience to machine learning. At a cocktail party, and in other noisy scenes, we can broadly monitor the entire acoustic scene to detect important cues (e.g., our name being called, or a fire alarm going off), or selectively listen to a target sound source (e.g., a conversation partner). This flexible dual-mode processing ability of normal-hearing listeners stands in sharp contrast to the extreme difficulty faced by hearing-impaired listeners, hearing-assistive devices, and state-of-the-art speech recognition algorithms in noisy scenes. In this talk, I will first describe neurons at the cortical level in songbirds that display dual-mode responses to spatially distributed natural sounds. I will then present a computational model that replicates key features of the experimental data and predicts a critical role for inhibitory neurons in dual-mode responses. Finally, I will present recent data revealing similar phenomena in mouse auditory cortex and discuss our efforts to understand the role of cortical inhibitory neurons using a combination of electrophysiology, optogenetics, and computational modelling.