Oxford Events, the replacement for OxTalks, will launch on 16th March. Until then, new events cannot be published or edited on OxTalks while existing records are migrated to the new platform. The existing OxTalks site will remain available to view during this period.
From 16th March, Oxford Events will be available at a new website, events.ox.ac.uk, and event submissions will resume. You will need a Halo login to submit events. Full details are available on the Staff Gateway.
Zoom link: us02web.zoom.us/meeting/register/tZclc—orzorG9PXQ5WIFdzgnBAa4cKSMj-X
In this talk we will discuss the intricate balance between bias and explainability in machine learning (ML). We begin with an exploration of the statistical decision theory framework, setting the stage for understanding the pivotal trade-off between bias and variance in predictive modelling. This foundational discussion underscores how these elements affect the accuracy and reliability of ML models across a range of applications.
We then address the challenge of non-representative data in AI and how such data can skew model performance, leading to inaccurate predictions and biased outcomes. We will dissect the causes of this issue, ranging from biased sampling to historical biases and data shifts over time. This section will explore strategies to identify and mitigate such biases, ensuring that ML models produce fair and equitable outcomes.
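Biased sampling and one common mitigation can be illustrated in a few lines. The sketch below (a toy example, with all group sizes, rates, and sampling weights invented for demonstration) draws a sample that over-represents one subgroup, shows how the naive estimate of a population rate is skewed, and corrects it with inverse-probability weighting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: two subgroups with different outcome rates.
n = 100_000
group = rng.integers(0, 2, n)                     # 50/50 split: A (0) and B (1)
outcome = np.where(group == 0,
                   rng.random(n) < 0.2,           # group A: 20% positive rate
                   rng.random(n) < 0.6)           # group B: 60% positive rate

true_rate = outcome.mean()                        # population rate, about 0.4

# Biased sampling: group B is five times more likely to enter the sample.
weights = np.where(group == 0, 1.0, 5.0)
idx = rng.choice(n, size=5_000, replace=False, p=weights / weights.sum())
naive_rate = outcome[idx].mean()                  # skewed toward group B's rate

# Mitigation: inverse-probability weighting recovers a population estimate.
ipw = 1.0 / weights[idx]
reweighted_rate = np.average(outcome[idx], weights=ipw)

print(f"true {true_rate:.3f}  naive {naive_rate:.3f}  "
      f"reweighted {reweighted_rate:.3f}")
```

The naive estimate drifts toward the over-sampled group's rate, while the reweighted estimate lands close to the population value, a simple instance of the identification-and-mitigation strategies the abstract refers to.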
The latter part of the talk is devoted to the critical topics of explainability and interpretability in AI. We will examine current methodologies and best practices for enhancing the transparency of AI systems. This talk aims not only to highlight the challenges but also to showcase actionable solutions for developing AI systems that are fair, accurate, and transparent. The goal is to foster a future where AI is not only powerful and efficient but also responsible and aligned with societal values.
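One widely used model-agnostic explainability technique is permutation importance: shuffle one feature at a time and measure how much the model's error grows. The sketch below (an illustrative example with synthetic data, not drawn from the talk) applies it to an ordinary least-squares model whose target depends strongly on one feature, weakly on another, and not at all on a third:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(2_000, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 2_000)

# Fit ordinary least squares.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda M: M @ coef

def permutation_importance(X, y, n_rounds=10):
    """Average increase in MSE when each feature is independently shuffled,
    breaking its relationship with the target."""
    base_mse = np.mean((predict(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_rounds):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            importances[j] += np.mean((predict(Xp) - y) ** 2) - base_mse
    return importances / n_rounds

imp = permutation_importance(X, y)
print(imp)  # x0 dominates, x1 is small, x2 is near zero
```

Because it only needs predictions, the same procedure applies unchanged to opaque models, which is why it is a common starting point for the transparency practices the talk surveys.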