Sound detection amidst noise presents an important challenge in audition. Many naturally occurring sounds (rain, wind) can be described and predicted statistically; such sounds are known as sound textures. Previous research has demonstrated humans' ability to leverage this statistical predictability for sound recognition, but the underlying neural mechanisms remain elusive. We trained mice to detect vocalizations embedded in sound textures of differing statistical predictability, while recording and optogenetically modulating neural activity in the auditory cortex. Mice showed improved performance and neural representations when they sampled the texture statistics for longer within a trial. Textures with more exploitable structure, specifically higher cross-frequency correlations (CFCs), improved performance, background representation, and vocalization decoding. Activating parvalbumin-positive (PV) interneurons had an asymmetric effect, improving the detection and neural representation of vocalizations for textures with low CFCs and impairing them for textures with high CFCs. Thus, mice can exploit stimulus statistics to improve sound detection in noise, an ability reflected in both behavioural performance and neural activity and dependent on PV interneurons.
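For readers unfamiliar with the term, cross-frequency correlations in sound-texture models are typically the correlations between amplitude envelopes in different frequency bands. The sketch below is illustrative only and is not the speakers' analysis code; the band centres, bandwidth, and filter choices are arbitrary assumptions, shown simply to make the CFC statistic concrete.

```python
# Illustrative sketch (assumed parameters, not the authors' method):
# estimate cross-frequency correlations (CFCs) of a sound texture as
# the pairwise correlations between band amplitude envelopes.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelopes(x, fs, centers_hz, rel_bandwidth=0.3):
    """Band-pass the signal around each centre frequency and return
    the amplitude envelope of each band (rows = bands)."""
    envs = []
    for fc in centers_hz:
        lo, hi = fc * (1 - rel_bandwidth / 2), fc * (1 + rel_bandwidth / 2)
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        envs.append(np.abs(hilbert(band)))  # Hilbert envelope per band
    return np.array(envs)

def cross_frequency_correlations(x, fs, centers_hz):
    """Pearson correlations between the envelopes of all band pairs;
    larger off-diagonal values indicate more co-modulated (exploitable) structure."""
    envs = band_envelopes(x, fs, centers_hz)
    return np.corrcoef(envs)

# Example: a 2-second noise 'texture' analysed in four bands (hypothetical values).
fs = 32000
texture = np.random.randn(2 * fs)
cfc = cross_frequency_correlations(texture, fs, centers_hz=[1000, 2000, 4000, 8000])
print(np.round(cfc, 2))
```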