Presenters
Karin Jongsma (Bioethics & Health Humanities, UMC Utrecht, Utrecht University)
Giorgia Pozzi (Department of Values, Technology and Innovation, TU Delft)
Commentators
Bartek Papiez (Big Data Institute, University of Oxford)
Dominic Wilkinson (Uehiro Oxford Institute, University of Oxford)
The progressive integration of AI systems into medical care raises questions about how physicians should work with such systems to ensure the best patient outcomes. A particularly thorny issue is how to handle disagreement between an AI system's recommendation and a human clinician's. Three ways of dealing with such disagreements have been proposed: deferring to the AI system's output, overruling it, or treating the AI system as an epistemic peer whose dissent calls for a second human opinion. In this roundtable, we spell out the shortcomings of these three approaches and offer a more nuanced perspective on clinician-AI disagreement. Before determining how disagreements should be handled, it is essential to distinguish between different types of disagreement. Drawing on a case that exemplifies how multifaceted medical decision-making is, we point out the ethical implications of the clinician-AI disagreements that can arise from it. Ultimately, our analysis underscores the significant uncertainties that characterize medical decision-making and highlights the strengths of a collaborative approach to disagreements between humans and AI in clinical decision-making.