Detection and Mitigation of Textual Bias in Mental Health Notes: A Case Study in Paediatric Anxiety
This is a virtual seminar. For the Zoom link, please see "Venue". Please consider subscribing to the mailing list: web.maillist.ox.ac.uk/ox/subscribe/ai4mch
In healthcare, differences observed between demographic groups can generally be categorised as biological or non-biological. Non-biological differences, such as visit frequency and reporting style, are harder to track and can unexpectedly bias the predictions of machine learning algorithms. This is particularly true for complex free-text data in the mental health domain. In this talk, we will present our framework for analysing text-related bias in Natural Language Processing (NLP) models, developed for a paediatric anxiety use case with a focus on sex demographic subgroups. The framework first measures model bias and then traces its origins to statistical word distributions and the generalisation capacity of NLP algorithms. Motivated by these findings, we propose a data-centric bias mitigation strategy based on sentence informativeness filtering and masking of gender-related words; a rough sketch of these two steps is given below. Our approach demonstrated a bias reduction of up to 27%, improving classification parity between sex subgroups while maintaining overall performance.
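The abstract does not give implementation details, so the following is only a minimal illustration of the two data-centric steps it names, not the speakers' actual method. The gender lexicon, the keyword-overlap proxy for sentence informativeness, and the threshold value below are all hypothetical stand-ins:

```python
import re

# Hypothetical lexicon of gender-related surface forms; the lexicon actually
# used in the study is not specified in the abstract.
GENDER_TERMS = {
    "he", "she", "him", "her", "his", "hers", "himself", "herself",
    "boy", "girl", "male", "female", "man", "woman", "son", "daughter",
    "mother", "father", "mum", "dad", "brother", "sister",
}

MASK_TOKEN = "[MASK]"  # neutral placeholder in the style of BERT-like models


def mask_gender_terms(text: str) -> str:
    """Replace gender-related words with a neutral mask token."""
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(t) for t in GENDER_TERMS) + r")\b",
        flags=re.IGNORECASE,
    )
    return pattern.sub(MASK_TOKEN, text)


def informativeness(sentence: str, clinical_terms: set[str]) -> float:
    """Crude proxy: fraction of tokens that appear in a clinical term list."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    if not tokens:
        return 0.0
    return sum(tok in clinical_terms for tok in tokens) / len(tokens)


def filter_and_mask(note: str, clinical_terms: set[str],
                    threshold: float = 0.05) -> str:
    """Drop low-informativeness sentences, then mask gender-related words."""
    sentences = re.split(r"(?<=[.!?])\s+", note)
    kept = [s for s in sentences
            if informativeness(s, clinical_terms) >= threshold]
    return mask_gender_terms(" ".join(kept))


if __name__ == "__main__":
    terms = {"anxious", "worry", "panic", "avoidance", "school"}
    note = ("She reports feeling anxious before school and describes panic "
            "symptoms. Her mother accompanied her to the appointment.")
    # The second sentence carries no clinical terms and is filtered out;
    # "She" in the first sentence is masked before classification.
    print(filter_and_mask(note, terms))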
Date:
11 March 2025, 15:00
Venue:
https://zoom.us/j/92860307789?pwd=iAdkC3QG1wQ8yvbuOBFTibGofmszPY.1
Speaker:
Professor Julia Ive (University College London)
Organising department:
Department of Psychiatry
Organiser:
Dr Andrey Kormilitzin (University of Oxford)
Organiser contact email address:
andrey.kormilitzin@psych.ox.ac.uk
Host:
Dr Andrey Kormilitzin (University of Oxford)
Part of:
Artificial Intelligence for Mental Health Seminar Series
Booking required?:
Not required
Booking url:
https://web.maillist.ox.ac.uk/ox/subscribe/ai4mch
Booking email:
andrey.kormilitzin@psych.ox.ac.uk
Audience:
Public
Editor:
Andrey Kormilitzin