How the Brain Discovers Structure in Rapidly Unfolding Sound Sequences

Sensitivity to patterns is fundamental to sensory processing and lies at the heart of predictive coding and Bayesian theories of brain function, which conceptualise perception as inference based on internal models of the environment. The brain is hypothesised to maintain a hierarchy of predictive models that track the statistical structure of ongoing sensory input. A central challenge is to understand how such models are formed, stored in memory, and dynamically engaged, or interrupted, by changing sensory contexts.
Owing to its inherently dynamic nature, audition provides a tractable and powerful test bed for addressing these questions. Over the past decade, my laboratory has investigated the neural mechanisms underlying auditory sequence processing. We use carefully designed rapid tone-pip sequences to model different types of environmental regularities, combining behavioural experiments, computational modelling, M/EEG, fMRI, and pupillometry to characterise how the brain automatically extracts, represents, and exploits regularities in sound. This work elucidates both the underlying neural mechanisms and the statistical heuristics the brain employs: for example, how it arbitrates between maintaining and interrupting existing models in the face of new evidence.
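The core idea of regularity extraction can be illustrated with a toy ideal observer. The sketch below is purely illustrative and is not the laboratory's actual model: it assumes a sequence of tone pips drawn from a small frequency alphabet, tracks first-order transition counts with Laplace smoothing, and reports the surprisal of each incoming tone. When a random sequence transitions into a deterministic repeating cycle, surprisal drops as the regularity is learned.

```python
import random
from collections import defaultdict
from math import log2

def surprisal_trace(seq, alphabet_size):
    """Toy ideal observer: track first-order transition counts and
    return the surprisal (-log2 p) of each successive tone."""
    counts = defaultdict(lambda: defaultdict(int))
    trace = []
    for prev, cur in zip(seq, seq[1:]):
        total = sum(counts[prev].values())
        # Laplace-smoothed predictive probability of the observed tone
        p = (counts[prev][cur] + 1) / (total + alphabet_size)
        trace.append(-log2(p))
        counts[prev][cur] += 1
    return trace

random.seed(0)
tones = list(range(8))                                  # 8 tone-pip frequencies
rand_part = [random.choice(tones) for _ in range(200)]  # unstructured segment
reg_part = tones * 50                                   # repeating deterministic cycle
trace = surprisal_trace(rand_part + reg_part, len(tones))

# Mean surprisal is high in the random segment and falls once
# the regular pattern has been learned
print(sum(trace[:199]) / 199, sum(trace[-100:]) / 100)
```

In this hypothetical observer, the drop in surprisal is the signature of a discovered regularity; deciding whether a single surprising tone should interrupt the learned model, or be absorbed as noise, is the kind of arbitration problem the passage describes.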