In some cases, ethical questions about the use of AI systems can be addressed without much reflection on what kinds of entities those systems are; instead, we need to know things like what the systems can do and how reliable they are. In other cases, however, it matters what kind of thing we are dealing with. For example, the problem of the ‘responsibility gap’ is said to arise partly because AI systems are not the kinds of things that can be morally responsible for their behaviour. One of the fundamental issues in this area is what it would take for AI systems to be agents. I will present an account of minimal agency in AI, building on the premise that agents pursue goals through interaction with their environments. To understand agency, we need to distinguish activity which constitutes the pursuit of a goal from activity which merely constitutes the performance of a function.