Artificial Agency
In some cases, ethical questions about the use of AI systems can be addressed without much reflection on what kinds of entities those systems are; instead, we need to know things like what the systems can do and how reliable they are. In other cases, however, it matters what kind of thing we are dealing with. For example, the problem of the ‘responsibility gap’ is said to exist partly because AI systems are not the kinds of things which can be morally responsible for their behaviour. One of the fundamental issues in this area is what it would take for AI systems to be agents. I will present an account of minimal agency in AI, building on the premise that agents pursue goals through interaction with environments. To understand agency, we need to distinguish activity which constitutes the pursuit of a goal from activity which merely constitutes the performance of a function.
Date: 23 November 2022, 13:00 (Wednesday, 7th week, Michaelmas 2022)
Venue: Please register to receive venue details
Speaker: Dr Patrick Butlin (University of Oxford)
Organiser contact email address: aiethics@philosophy.ox.ac.uk
Host: Dr Linda Eggert (University of Oxford)
Part of: Ethics in AI Lunchtime Seminars
Booking required?: Required
Booking url: https://forms.office.com/Pages/ResponsePage.aspx?id=G96VzPWXk0-0uv5ouFLPkUbXexlJuMhCiksodiLwh4ZUOElSTTlEN09MRkNZTDBJTVhOWEIzQVFOTy4u
Cost: Free
Audience: Public