Human interaction with artificial agents

Abstract: A large part of human cognitive abilities is dedicated to interaction with other humans. Research in social cognition addresses how we go about such interactions. Theory of mind, in particular, seeks to explain how we infer the intentions of other people, or the outcomes of our own actions in social situations. With the increasing technical realisability of artificial cognitive systems, human interaction with artificial agents offers a growing number of opportunities both to study human cognition itself and to apply the resulting insights to the design of future artificial agents. My overall purpose in this talk is to illustrate this position with examples from recent and ongoing work.

More specifically, I will discuss work on an ongoing project (DREAM: www.dream2020.eu), in which we design robots for use in therapy with children with autism spectrum disorder (ASD). This will illustrate in particular that, in social human-robot interaction (HRI) applications, the desired functionality of a robot is often specified in terms of the desired interaction patterns rather than the behaviour of the robot itself, which poses a challenge for those building the robots. A second insight is that humans adapt their behaviour according to their beliefs about the abilities of the other agent. I will present work that investigated human interaction with adaptive vehicles and that highlights the importance of understanding how a system is perceived by its users (regardless of its actual abilities). At a more general level, humans have both the ability to read the intentions underlying the behaviour of another agent and the ability to ascribe intentions to clearly inanimate forms, which again has implications for robot design that I will touch upon.