Acoustic speech signals are notoriously variable within and between talkers. It is well known that listeners employ a number of perceptual mechanisms to reduce the impact of linguistically irrelevant acoustic variation and so aid the linguistic decoding of these noisy signals. Rapid perceptual accommodation to differences in age and gender is achieved, in part, through vowel-extrinsic normalization, whereby the immediately preceding speech signal provides a frame of reference within which talker-specific vowel category boundaries are determined (Ladefoged & Broadbent, 1957). Listeners also draw on higher-order linguistic information to facilitate phonetic processing of noisy or ambiguous acoustic speech signals, as illustrated by the well-known lexical effect on perceptual category boundaries (Ganong, 1980).
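To make the idea of vowel-extrinsic normalization concrete, the following minimal Python sketch (not taken from the talk; the formant values and the boundary offset are hypothetical) categorises the same ambiguous F1 value differently depending on the F1 range of the immediately preceding carrier phrase, in the spirit of Ladefoged & Broadbent (1957).

# Toy illustration: vowel-extrinsic normalization.
# All values are made-up placeholders for illustration only.

def classify_vowel(target_f1_hz, context_f1_values_hz):
    """Classify a target vowel as /I/ or /E/ relative to the preceding context.

    The category boundary is placed a fixed (hypothetical) offset above the
    mean F1 of the carrier phrase, so a high-F1 talker shifts the boundary up.
    """
    context_mean = sum(context_f1_values_hz) / len(context_f1_values_hz)
    boundary_hz = context_mean + 100.0  # hypothetical talker-relative boundary
    return "/I/" if target_f1_hz < boundary_hz else "/E/"

# The same acoustic token heard after two different talker contexts:
ambiguous_f1 = 480.0
print(classify_vowel(ambiguous_f1, [320.0, 350.0, 340.0]))  # low-F1 context  -> "/E/"
print(classify_vowel(ambiguous_f1, [450.0, 470.0, 460.0]))  # high-F1 context -> "/I/"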
Since their discovery many decades ago, these adaptive perceptual mechanisms have been considered primarily as processes supporting the decoding of ambiguous speech signals originating from other talkers. Here, I will describe two recent studies demonstrating that such adaptive processes can also alter the processing of self-generated acoustic speech signals (i.e., auditory feedback) and, by extension, the sensorimotor control of speech production. The results provide strong support for the idea that short-term auditory-perceptual plasticity rapidly transfers to the sensory processes guiding speech motor function. The findings will be discussed within the context of current models of speech production, in particular those that highlight a role for auditory feedback in the fine-tuning of predictive, feed-forward control processes.
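As a rough illustration of the class of models referred to above (not the studies described in the talk; all parameters are hypothetical), the sketch below shows a feed-forward command for a vowel's F1 being re-tuned trial by trial by the error carried in perturbed auditory feedback, producing gradual compensatory adaptation.

# Toy sketch: auditory feedback fine-tuning a predictive, feed-forward command.
# A constant +100 Hz feedback perturbation drives adaptation in the opposite
# direction; parameters are illustrative only.

def simulate_adaptation(target_f1=500.0, perturbation=100.0,
                        learning_rate=0.2, n_trials=20):
    command = target_f1  # feed-forward command, initially matching the target
    history = []
    for _ in range(n_trials):
        produced = command                   # produced F1 (noise omitted)
        heard = produced + perturbation      # auditory feedback is shifted upward
        error = heard - target_f1            # mismatch with the intended/predicted F1
        command -= learning_rate * error     # update the feed-forward command
        history.append(command)
    return history

trajectory = simulate_adaptation()
print(f"final command: {trajectory[-1]:.1f} Hz")  # approaches ~400 Hz, offsetting the shift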