There is a tension in experimental approaches to understanding hearing: controlled experiments demand simple stimuli, yet evidence suggests that hearing in realistic acoustic environments works very differently. We have made little progress in understanding how we hear in these difficult environments, either as a matter of fundamental science or in reproducing the ability in machine hearing systems. In this talk, I’ll outline an approach to reconciling these seemingly contradictory requirements for simplicity and complexity. The aim is to find a new view of which concepts, stimuli or features we should consider simple or basic when studying complex auditory environments. We use techniques from machine learning and information theory to carry out this search ‘objectively’, but the ultimate aim is to feed the results back into understanding the functioning of the auditory system itself. Specifically, I will present results from a study of models of sound localisation, as well as work in progress on a project investigating which low-level features are most informative for extracting information from speech in a noisy background.
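As a rough illustration of the kind of information-theoretic feature ranking mentioned above, the sketch below estimates the mutual information between candidate per-frame acoustic features and a speech-versus-noise label, then ranks the features by informativeness. The feature names, synthetic data and scores here are illustrative placeholders, not results from the speaker’s work.

```python
# Sketch: rank candidate low-level acoustic features by how informative they
# are about a speech-vs-noise label, using a mutual information estimate.
# All features and data below are synthetic stand-ins for illustration.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n_frames = 2000

labels = rng.integers(0, 2, n_frames)  # 1 = speech present, 0 = noise only

# Stand-in per-frame features with varying dependence on the label:
energy = 1.5 * labels + rng.normal(0, 1, n_frames)    # strongly informative
centroid = 0.3 * labels + rng.normal(0, 1, n_frames)  # weakly informative
zcr = rng.normal(0, 1, n_frames)                      # uninformative

X = np.column_stack([energy, centroid, zcr])
mi = mutual_info_classif(X, labels, random_state=0)  # MI estimate per feature

for name, score in sorted(zip(["energy", "centroid", "zcr"], mi),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

In this toy setup the energy feature scores highest because it depends most strongly on the label; the same ranking idea, applied to real low-level auditory features, is one ‘objective’ way to decide which features matter for speech in noise.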