People supported by AI-powered decision support tools frequently overrely on the AI: they accept an AI’s suggestion even when that suggestion is wrong. Adding explanations to the AI suggestions does not appear to reduce this overreliance, and some studies suggest it may even increase it. Our research suggests that human cognitive motivation moderates the effectiveness of decision support tools powered by explainable AI. Specifically, even in high-stakes domains, people rarely engage analytically with each individual AI recommendation and explanation; instead, they appear to develop general heuristics about whether and when to follow the AI suggestions. We show that interventions applied at decision-making time to disrupt heuristic reasoning can increase people’s cognitive engagement with the AI’s output and consequently reduce (but not entirely eliminate) human overreliance on the AI.

Our research also points to two shortcomings in how our research community is pursuing the explainable AI research agenda. First, the commonly used evaluation methods rely on proxy tasks that artificially focus people’s attention on the AI models, leading to misleading (overly optimistic) results. Second, by insufficiently examining the sociotechnical contexts, we may be solving problems that are technically the most obvious but that are not the most valuable to the key stakeholders.
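To make the idea of a decision-time intervention more concrete, here is a minimal, hypothetical sketch (not the study's actual protocol or code) of one such "forcing" design: the AI suggestion and explanation are withheld until the person has committed to their own answer, so the decision cannot be made by simply deferring to the AI. All names (AIOutput, assisted_decision, forcing_intervention) are illustrative assumptions.

```python
# Hypothetical sketch of a decision-time forcing intervention.
# With the intervention, the user must answer independently before the AI
# suggestion and explanation are revealed; without it, the AI output is
# shown up front (the setting in which overreliance is typically observed).

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class AIOutput:
    suggestion: str   # the model's recommended decision
    explanation: str  # the accompanying explanation


def assisted_decision(
    case: str,
    ai_output: AIOutput,
    ask_user: Callable[[str], str],
    forcing_intervention: bool = True,
) -> dict:
    """Run one AI-assisted decision, optionally with a forcing intervention."""
    if forcing_intervention:
        # Step 1: independent answer, before any AI output is visible.
        initial = ask_user(f"Case: {case}\nYour answer (before seeing the AI): ")
        # Step 2: reveal the AI suggestion and explanation, then allow revision.
        final = ask_user(
            f"AI suggests: {ai_output.suggestion}\n"
            f"Because: {ai_output.explanation}\n"
            f"Keep your answer '{initial}' or enter a new one: "
        ) or initial
        return {"initial": initial, "final": final}

    # Baseline condition: AI suggestion and explanation shown immediately.
    final = ask_user(
        f"Case: {case}\n"
        f"AI suggests: {ai_output.suggestion} ({ai_output.explanation})\n"
        f"Your answer: "
    )
    return {"initial": None, "final": final}


if __name__ == "__main__":
    demo = AIOutput(suggestion="Option B", explanation="Feature X exceeds threshold.")
    print(assisted_decision("Example case", demo, ask_user=input))
```

The design choice illustrated here is simply sequencing: by requiring an independent commitment first, the interface makes heuristic deferral to the AI costlier than analytic engagement, which is the mechanism the abstract describes for reducing (though not eliminating) overreliance.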