A Data-Centric Approach to Detecting and Mitigating Demographic Bias in Pediatric Mental Health Text: A Case Study in Anxiety Detection
This is a virtual seminar. For the Zoom link, please see "Venue". Please consider subscribing to the mailing list: web.maillist.ox.ac.uk/ox/subscribe/ai4mch
Healthcare analytics and Artificial Intelligence (AI) hold transformative potential, yet AI models often inherit
biases from their training data, which can exacerbate healthcare disparities, particularly among minority groups. While efforts
have primarily targeted bias in structured data, mental health care depends heavily on unstructured data such as clinical notes, where
bias and data sparsity introduce unique challenges. This study aims to detect and mitigate linguistic differences arising from
non-biological factors in the training data of AI models designed to assist in pediatric mental health screening.
Our objectives are: (1) to assess the presence of bias by evaluating outcome parity across sex subgroups, (2) to identify bias
sources through textual distribution analysis, and (3) to develop and evaluate a de-biasing method for mental health text data.
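
For illustration only, a minimal Python sketch of what such an outcome-parity check could look like, assuming predictions and labels sit in a pandas DataFrame; the column names and toy data are hypothetical, not from the study:

    import pandas as pd
    from sklearn.metrics import confusion_matrix

    def group_rates(y_true, y_pred):
        # True-positive and false-positive rates for one subgroup.
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
        tpr = tp / (tp + fn) if (tp + fn) else float("nan")
        fpr = fp / (fp + tn) if (fp + tn) else float("nan")
        return tpr, fpr

    def parity_gaps(df, group_col="sex", label_col="anxiety", pred_col="pred"):
        # Per-group rates plus the largest pairwise gap (0 = perfect parity).
        rates = {g: group_rates(s[label_col], s[pred_col])
                 for g, s in df.groupby(group_col)}
        tpr_gap = max(r[0] for r in rates.values()) - min(r[0] for r in rates.values())
        fpr_gap = max(r[1] for r in rates.values()) - min(r[1] for r in rates.values())
        return rates, tpr_gap, fpr_gap

    # Synthetic example: the male TPR of 0.0 vs. the female TPR of 1.0 flags a gap.
    df = pd.DataFrame({"sex": ["F", "F", "M", "M"],
                       "anxiety": [1, 0, 1, 0],
                       "pred": [1, 0, 0, 0]})
    print(parity_gaps(df))
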
We examined classification parity across demographic groups, identifying biases through analysis of linguistic
patterns in clinical notes. Using interpretability techniques, we assessed how gendered language influences model predictions.
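
The interpretability techniques themselves are not named in this abstract; as one hedged stand-in, a linear bag-of-words probe can expose how much weight a classifier places on gendered tokens (the term list and function below are illustrative):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    GENDERED_TERMS = ["she", "he", "her", "his", "girl", "boy",
                      "mother", "father", "daughter", "son"]

    def gendered_term_weights(notes, labels):
        # Fit a linear probe, then read off the learned coefficients of
        # gendered tokens; large magnitudes suggest the model leans on
        # gendered language rather than clinical content.
        vec = TfidfVectorizer(lowercase=True)
        X = vec.fit_transform(notes)
        clf = LogisticRegression(max_iter=1000).fit(X, labels)
        return {t: float(clf.coef_[0, vec.vocabulary_[t]])
                for t in GENDERED_TERMS if t in vec.vocabulary_}
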
We then applied a data-centric de-biasing method focused on neutralizing biased terms and retaining only the salient clinical
information. This methodology was tested on a model for automatic anxiety detection in pediatric patients—a crucial application
given the rise in youth anxiety post-COVID-19.
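
To make the term-neutralization step concrete, here is a minimal sketch assuming a simple surface-form substitution; the mapping is illustrative, and the study's actual rules (and how salient clinical information is retained) are not described in this abstract:

    import re

    # Illustrative lexicon only; a real pipeline would need broader coverage
    # and would have to repair grammatical agreement after substitution.
    NEUTRAL_MAP = {
        r"\b(she|he)\b": "they",
        r"\b(her|his|him)\b": "their",
        r"\b(girl|boy|daughter|son)\b": "child",
        r"\b(mother|father|mom|dad)\b": "parent",
    }

    def neutralize(note: str) -> str:
        # Replace gendered surface forms with neutral ones, case-insensitively.
        for pattern, repl in NEUTRAL_MAP.items():
            note = re.sub(pattern, repl, note, flags=re.IGNORECASE)
        return note

    # neutralize("His mother reports the boy worries daily.")
    # -> "their parent reports the child worries daily."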

We developed and evaluated a data-centric de-biasing framework to address gender-based content disparities
within clinical text, specifically in pediatric anxiety detection. By neutralizing biased language and enhancing focus on clinically
essential information, our approach demonstrates an effective strategy for mitigating bias in AI healthcare models trained on
unstructured data. This work emphasizes the importance of developing bias mitigation techniques tailored for healthcare text,
advancing equitable AI-driven solutions in mental health.
Date: 11 March 2025, 15:00
Venue: https://zoom.us/j/92860307789?pwd=iAdkC3QG1wQ8yvbuOBFTibGofmszPY.1
Speaker: Professor Julia Ive (University College London)
Organising department: Department of Psychiatry
Organiser: Dr Andrey Kormilitzin (University of Oxford)
Organiser contact email address: andrey.kormilitzin@psych.ox.ac.uk
Host: Dr Andrey Kormilitzin (University of Oxford)
Part of: Artificial Intelligence for Mental Health Seminar Series
Booking required?: Not required
Booking url: https://web.maillist.ox.ac.uk/ox/subscribe/ai4mch
Booking email: andrey.kormilitzin@psych.ox.ac.uk
Audience: Public
Editor: Andrey Kormilitzin