Scientific publications in mental health are rapidly growing in number and complexity, making the curation and synthesis of abstracts for systematic reviews in the field increasingly time-consuming and challenging. As the scientific corpus expands, systematic reviews on broad topics have also become harder to conduct, in part because of variation and lack of objectivity among multiple human reviewers when interpreting abstracts and judging their suitability for inclusion or exclusion. Resolving these discrepancies can itself be time-consuming and can affect topic selection, as well as the accuracy and breadth of systematic reviews. To address these challenges in efficiency and accuracy, we propose and evaluate multiple machine learning-based approaches to capture inclusion and exclusion criteria and automate the abstract selection process. We fine-tuned or trained models for four systematic review topic areas (i.e., Resilience, Biomarkers & Disease, Stressors, and Conditions) from psychiatry abstracts selected by trained human reviewers. The trained or fine-tuned machine learning models were then applied to abstracts derived from an independently curated oncology literature database. Transformer-based machine learning models outperformed trained human reviewers in abstract screening for three of the four topic areas. Such artificial intelligence-augmented approaches may facilitate the sharing and synthesis of research expertise across disciplines.
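At its core, the screening task described above is binary text classification: a model is trained on abstracts labelled include/exclude by human reviewers, then applied to unseen abstracts. The following is a minimal from-scratch sketch of that framing, using a multinomial Naive Bayes classifier on invented toy data rather than the fine-tuned transformer models the authors actually evaluated; all abstracts, labels, and function names here are illustrative assumptions, not the authors' pipeline.

```python
import math
from collections import Counter

# Toy labelled training set (1 = include, 0 = exclude) for a single
# hypothetical review topic. A real pipeline would use thousands of
# reviewer-labelled abstracts and a transformer-based classifier.
TRAIN = [
    ("resilience coping stress recovery outcomes in patients", 1),
    ("resilience and psychological adaptation after trauma", 1),
    ("protein folding simulation on gpu clusters", 0),
    ("market volatility and pricing models", 0),
]

def tokenize(text):
    return text.lower().split()

def train_nb(examples, alpha=1.0):
    """Fit multinomial Naive Bayes: log priors and Laplace-smoothed
    per-class word log-likelihoods."""
    class_counts = Counter(label for _, label in examples)
    word_counts = {c: Counter() for c in class_counts}
    for text, label in examples:
        word_counts[label].update(tokenize(text))
    vocab = {w for c in word_counts for w in word_counts[c]}
    total = sum(class_counts.values())
    model = {"priors": {}, "loglik": {}, "vocab": vocab}
    for c in class_counts:
        model["priors"][c] = math.log(class_counts[c] / total)
        denom = sum(word_counts[c].values()) + alpha * len(vocab)
        model["loglik"][c] = {
            w: math.log((word_counts[c][w] + alpha) / denom) for w in vocab
        }
    return model

def screen(model, abstract):
    """Score an unseen abstract under each class; words outside the
    training vocabulary are ignored. Returns 1 (include) or 0 (exclude)."""
    scores = {}
    for c, prior in model["priors"].items():
        score = prior
        for w in tokenize(abstract):
            if w in model["vocab"]:
                score += model["loglik"][c][w]
        scores[c] = score
    return max(scores, key=scores.get)

model = train_nb(TRAIN)
decision = screen(model, "resilience and stress outcomes")
```

In practice the transformer-based approach replaces the bag-of-words likelihoods with a fine-tuned encoder (e.g. a BERT-style model with a classification head), but the input/output contract, labelled abstracts in, include/exclude decisions out, is the same.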