“Risk Imposition by Artificial Agents: The Moral Proxy Problem” (Global Priorities Institute Seminars)

The ambition for the design of autonomous artificial agents is that they make decisions at least as good as, or better than, those humans would make in the relevant decision context. Human agents tend to have inconsistent risk attitudes towards small-stakes and large-stakes gambles. While expected utility theory, the theory of rational choice that designers of artificial agents ideally aim to implement in the context of risk, condemns this inconsistency as irrational, it does not identify which attitudes need adjusting. I argue that this creates a dilemma for regulating the programming of artificial agents that impose risks: whether they should be programmed to be risk averse at all, and if so just how risk averse, depends on whether we take them to be moral proxies for their individual users, or for those in a position to control the aggregate choices made by many artificial agents, such as the companies programming the artificial agents or regulators representing society at large. Both options are undesirable.