There is a tension in experimental approaches to understanding hearing: controlled experiments demand simple stimuli, yet evidence suggests that hearing in realistic acoustic environments works very differently. We have made little progress in understanding how we hear in these complex environments, both in terms of fundamental science and in terms of reproducing this ability in machine hearing systems. In this talk, I’ll outline an approach to reconciling these seemingly contradictory requirements for simplicity and complexity. The aim is to find a new view of which concepts, stimuli or features we should consider simple or basic when studying complex auditory environments. We use techniques from machine learning and information theory to carry out this search ‘objectively’, but the ultimate aim is to feed the results of this process back into understanding the functioning of the auditory system itself. Specifically, I will present results from a study of models of sound localisation, as well as work in progress on a project investigating which low-level features are most informative for extracting information from speech against a noisy background.
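To give a flavour of the kind of information-theoretic feature search described above, here is a minimal sketch (not the speaker’s actual method): candidate low-level features are scored by their estimated mutual information with a target label, here a synthetic speech-versus-noise task with made-up feature names.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 2000
labels = rng.integers(0, 2, size=n)  # 1 = speech present, 0 = noise only

# Hypothetical candidate features: one informative (its distribution
# shifts with the label), one weakly informative, one pure noise.
envelope_mod = labels * 1.0 + rng.normal(0.0, 1.0, n)   # informative
spectral_tilt = labels * 0.3 + rng.normal(0.0, 1.0, n)  # weakly informative
broadband_rms = rng.normal(0.0, 1.0, n)                 # uninformative

X = np.column_stack([envelope_mod, spectral_tilt, broadband_rms])

# k-nearest-neighbour mutual information estimate (in nats) between
# each candidate feature and the label; higher = more informative.
mi = mutual_info_classif(X, labels, random_state=0)
for name, score in zip(["envelope_mod", "spectral_tilt", "broadband_rms"], mi):
    print(f"{name}: {score:.3f}")

Ranking features this way is ‘objective’ in the sense that it makes no assumptions about which features matter; the estimator itself (here a k-NN mutual information estimate) is of course still a modelling choice.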