In some cases, ethical questions about the use of AI systems can be addressed without much reflection on what kinds of entities those systems are; instead, we need to know things like what the systems can do and how reliable they are. In other cases, however, it matters what kind of thing we are dealing with. For example, the problem of the ‘responsibility gap’ is said to exist partly because AI systems are not the kinds of things which can be morally responsible for their behaviour. One of the fundamental issues in this area is what it would take for AI systems to be agents. I will present an account of minimal agency in AI, building on the premise that agents pursue goals through interaction with environments. To understand agency, we need to distinguish activity which constitutes the pursuit of a goal from activity which merely constitutes the performance of a function.