State actors increasingly use machine-learning tools to make decisions that significantly affect people’s lives. The worry that human agency is increasingly eclipsed has, in turn, given rise to assertions of a novel ‘right to a human decision’ – roughly, a right not to be subject to fully automated decision-making. This talk explores how such a right might be justified, by identifying three possible approaches. The first grounds the right to a human decision in the notion of human dignity; the second in the desire to be judged by an agent with the capacity to grasp and explain moral reasons; and the third justifies the right on the grounds that it is constitutive of, and contributes to, the exercise of certain democratic values. On the basis of this discussion, I suggest three things. First, justifying a ‘right to a human decision’ is, despite its intuitive appeal, a surprisingly involved – though not impossible – enterprise. Second, it may give us considerably more than we bargained for. Finally, what appeals to such a right seek to accomplish may, in many cases, be better achieved by broader principles of justice that cannot be reduced to individual rights. In the end, working out what we owe to one another as we continue to make AI increasingly powerful and prevalent requires working out not just what our values are, but also why we hold them.