On the Site of Predictive Justice
Optimism about our ability to enhance societal decision-making by leaning on Machine Learning (ML) for cheap, accurate predictions has palled in recent years, as these ‘cheap’ predictions have come at significant social cost, contributing to systematic harms suffered by already disadvantaged populations. But what precisely goes wrong when ML goes wrong? We argue that, beyond the more obvious concerns about the downstream effects of ML-based decision-making, there can be moral grounds for criticising these predictions themselves. We introduce and defend a theory of predictive justice, according to which differential model performance for systematically disadvantaged groups can be grounds for moral criticism of the model, independently of its downstream effects. As well as helping to resolve some urgent disputes around algorithmic fairness, this theory points the way to a novel dimension of epistemic ethics, related to the recently discussed category of doxastic wronging.
Date: 1 February 2023, 12:30
Venue: Please register to receive venue details
Speaker: Seth Lazar (ANU)
Organiser contact email address: aiethics@philosophy.ox.ac.uk
Host: Dr Charlotte Unruh (University of Oxford)
Part of: Ethics in AI Lunchtime Seminars
Booking required?: Required
Booking URL: https://forms.office.com/Pages/ResponsePage.aspx?id=G96VzPWXk0-0uv5ouFLPkUbXexlJuMhCiksodiLwh4ZUNjhIRFBGQUFQVk1RUU5RRVJNMkxPWFE1Vi4u
Audience: Public
Editors: Marie Watson, Lauren Czerniawska