“Risk Imposition by Artificial Agents: The Moral Proxy Problem” (Global Priorities Institute Seminars)
The ambition behind the design of autonomous artificial agents is that they make decisions at least as good as, or better than, those humans would make in the relevant decision context. Human agents tend to have inconsistent risk attitudes towards small-stakes and large-stakes gambles. While expected utility theory, the theory of rational choice that designers of artificial agents ideally aim to implement in the context of risk, condemns this inconsistency as irrational, it does not identify which attitudes need adjusting. I argue that this creates a dilemma for regulating the programming of artificial agents that impose risks: whether they should be programmed to be risk averse at all, and if so just how risk averse, depends on whether we take them to be moral proxies for their individual users, or for those in a position to control the aggregate choices made by many artificial agents, such as the companies programming those agents or regulators representing society at large. Both options are undesirable.
Date:
19 June 2020, 15:00
Venue:
Venue to be announced
Speakers:
Speaker to be announced
Organisers:
Dr Andreas Mogensen (GPI, University of Oxford),
Dr Christian Tarsney (GPI, University of Oxford)
Organiser contact email address:
gpi-office@philosophy.ox.ac.uk
Part of:
Global Priorities Institute Seminars - Trinity Term 2020
Topics:
Booking required?:
Required
Booking url:
https://www.eventbrite.co.uk/e/global-priorities-seminar-johanna-thoma-tickets-105931401674
Cost:
Free
Audience:
Public
Editor:
William Jefferson