Learning relative values through reinforcement learning: computational bases and neural evidence
A fundamental question in the literature on value-based decision making is whether values are represented on an absolute or a relative (i.e. context-dependent) scale. Such context-dependency of option values has been extensively investigated in economic decision making, in the form of reference-point dependence and range adaptation. However, context-dependency has been much less investigated in reinforcement learning (RL) situations. Using model-based behavioral analyses, we demonstrate that option values are learnt in a context-dependent manner. In RL, context-dependence produces several desirable behavioral consequences: i) reference-point dependence of option values benefits punishment-avoidance learning, and ii) range adaptation allows similar performance across different levels of reinforcer magnitude. Interestingly, these adaptive functions are traded against context-dependent violations of rationality when options are extrapolated from their original choice contexts.
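As a rough illustration of the kind of relative value learning described in the abstract (not the speaker's exact model), the sketch below shows a Q-learner in which outcomes are re-coded relative to a learned context value before option values are updated. The class and parameter names (ContextualQLearner, alpha_q, alpha_v, beta) are illustrative assumptions.

```python
import numpy as np

class ContextualQLearner:
    """Illustrative sketch of context-dependent (relative) value learning in RL."""

    def __init__(self, n_options, alpha_q=0.3, alpha_v=0.3, beta=5.0):
        self.q = np.zeros(n_options)   # option values, learned on a relative scale
        self.v = 0.0                   # context value (reference point)
        self.alpha_q = alpha_q         # learning rate for option values
        self.alpha_v = alpha_v         # learning rate for the context value
        self.beta = beta               # softmax inverse temperature

    def choose(self, available):
        # Softmax choice over the options offered in the current context.
        prefs = np.exp(self.beta * self.q[available])
        probs = prefs / prefs.sum()
        return np.random.choice(available, p=probs)

    def update(self, option, reward):
        # Re-express the outcome relative to the context value (reference point),
        # then update both the context value and the chosen option's value.
        relative_reward = reward - self.v
        self.v += self.alpha_v * (reward - self.v)
        self.q[option] += self.alpha_q * (relative_reward - self.q[option])

# Example: in a punishment-avoidance context (outcomes -1 or 0), relative coding
# lets "avoiding the loss" carry a positive prediction error, which is one way
# reference-point dependence can benefit avoidance learning.
agent = ContextualQLearner(n_options=2)
for _ in range(200):
    choice = agent.choose([0, 1])
    reward = 0.0 if (choice == 0 and np.random.rand() < 0.75) else -1.0
    agent.update(choice, reward)
print(agent.q)  # the safer option should acquire the higher relative value
```

Because option values are stored on this relative scale, transferring them to a new choice context (extrapolation) can produce the context-dependent preference reversals that the abstract describes as violations of rationality.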
Date:
5 December 2017, 13:00
Venue:
Biology, South Parks Road, OX1 3RB
Venue Details:
Schlich Theatre
Speaker:
Dr Stefano Palminteri (ENS, Paris)
Organising department:
Department of Experimental Psychology
Organiser:
Nils Kolling (Junior Research Fellow, Experimental Psychology, University of Oxford)
Organiser contact email address:
nils.kolling@psy.ox.ac.uk
Host:
Matthew Apps (University of Oxford)
Booking required?:
Not required
Audience:
Members of the University only
Editors:
Janice Young,
Stephanie Mcclain