Emotions provide critical cues about our health and wellbeing. This is of particular importance in the context of mental health, where changes in emotion may signify changes in symptom severity. However, information about emotion and how it varies over time is often accessible only through survey methodology (e.g., ecological momentary assessment, EMA), which can become burdensome to participants over time. Automated speech emotion recognition systems could provide an alternative, offering quantitative measures of emotion derived from acoustic data captured passively in a consented individual’s environment. However, speech emotion recognition systems often falter when presented with data collected in unconstrained natural environments, due to issues with robustness, generalizability, and invalid assumptions. In this talk, I will discuss our journey in speech-centric mental health modeling, explaining whether, how, and when emotion recognition can be applied to natural, unconstrained speech data to measure changes in mental health symptom severity.