Importance
Attempts to apply Artificial Intelligence (AI) to psychiatric disorders have shown moderate success, highlighting the potential of incorporating information from clinical assessments to improve the models. The study focuses on using Large Language Models (LLMs) to process unstructured medical text, particularly for suicide risk detection in psychiatric care.

Objective
The study aims to extract information about suicidality status from the admission notes of electronic health records (EHR) using privacy-preserving, locally hosted LLMs, specifically evaluating the efficacy of Llama-2 models.

Main Outcomes and Measures
The study compares the performance of several variants of the open-source LLM Llama-2 in extracting suicidality status from psychiatric reports against a ground truth defined by human experts, assessing accuracy, sensitivity, specificity, and F1 score across different prompting strategies.

Results
A German fine-tuned Llama-2 model showed the highest accuracy (87.5%), sensitivity (83%), and specificity (91.8%) in identifying suicidality, with significant improvements in sensitivity and specificity across various prompt designs.

Conclusions and Relevance
The study demonstrates the capability of LLMs, particularly Llama-2, to accurately extract information on suicidality from psychiatric records while preserving data privacy. This suggests their application in surveillance systems for psychiatric emergencies and in improving the clinical management of suicidality through systematic quality control and research.
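For readers who want a concrete picture of the workflow summarised above, here is a minimal Python sketch of how one might prompt a locally hosted Llama-2 model for a binary suicidality label and score the predictions against expert ground truth. It is an illustration only, not the authors' published pipeline: the Ollama-style local endpoint, model name, prompt wording, and toy labels are all assumptions.

```python
"""Rough sketch of the extraction-and-evaluation loop described above.

Assumptions (not from the paper): the model is served by an Ollama-style
local HTTP endpoint, the prompt wording is invented for illustration, and
the toy labels at the bottom stand in for expert annotations.
"""
import requests
from sklearn.metrics import accuracy_score, f1_score, recall_score

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local server

def classify_note(note: str, model: str = "llama2") -> int:
    """Ask a locally hosted Llama-2 model whether suicidality is documented.

    Returns 1 if the model answers 'yes', otherwise 0.
    """
    prompt = (
        "Based on the psychiatric admission note below, answer only 'yes' "
        "or 'no': is suicidality documented as present?\n\n" + note
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    answer = resp.json()["response"].strip().lower()
    return 1 if answer.startswith("yes") else 0

def evaluate(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    """Compute the four metrics the abstract reports."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        # Sensitivity = recall on the positive (suicidality present) class.
        "sensitivity": recall_score(y_true, y_pred, pos_label=1),
        # Specificity = recall on the negative class.
        "specificity": recall_score(y_true, y_pred, pos_label=0),
        "f1": f1_score(y_true, y_pred, pos_label=1),
    }

if __name__ == "__main__":
    # Toy example; a real evaluation would run classify_note over
    # expert-annotated admission notes.
    y_true = [1, 0, 1, 1, 0]
    y_pred = [1, 0, 0, 1, 0]
    print(evaluate(y_true, y_pred))
```

Treating specificity as recall on the negative class keeps all four reported metrics consistent with a single confusion matrix over the same set of notes.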