A fundamental question in the literature on value-based decision making is whether values are represented on an absolute scale or on a relative (i.e., context-dependent) scale. Such context dependence of option values has been extensively investigated in economic decision making, in the form of reference-point dependence and range adaptation. However, context dependence has been much less investigated in reinforcement learning (RL) situations. Using model-based behavioral analyses, we demonstrate that option values are learnt in a context-dependent manner. In RL, context dependence produces several desirable behavioral consequences: (i) reference-point dependence of option values benefits punishment-avoidance learning, and (ii) range adaptation allows similar performance across different levels of reinforcer magnitude. Interestingly, these adaptive functions are traded against context-dependent violations of rationality when options are extrapolated from their original choice contexts.
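To make the contrast concrete, below is a minimal sketch of an absolute versus a relative (context-dependent) delta-rule learner in a punishment context, where all outcomes are 0 or -1. It is an illustration of the general idea of reference-point dependence, not the authors' actual model; the learning rate `alpha`, inverse temperature `beta`, and avoidance probabilities `p_avoid` are arbitrary assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.3   # learning rate (illustrative)
beta = 5.0    # softmax inverse temperature (illustrative)

def softmax_choice(q_pair):
    """Choose between two options via a softmax over their values."""
    p = np.exp(beta * q_pair)
    p /= p.sum()
    return rng.choice(2, p=p)

def run(relative, n_trials=200):
    """Learn two option values in a punishment context (outcomes 0 or -1).

    If `relative` is True, outcomes are re-referenced to a learned context
    value V before the update (reference-point dependence); otherwise the
    option values track absolute outcomes.
    """
    q = np.zeros(2)            # option values
    v = 0.0                    # context value (reference point)
    p_avoid = [0.25, 0.75]     # p(outcome = 0) per option; else outcome = -1
    for _ in range(n_trials):
        a = softmax_choice(q)
        r = 0.0 if rng.random() < p_avoid[a] else -1.0
        v += alpha * (r - v)                  # update the reference point
        target = (r - v) if relative else r   # relative vs absolute outcome
        q[a] += alpha * (target - q[a])       # delta-rule update
    return q

print("absolute:", run(relative=False))
print("relative:", run(relative=True))
```

In the absolute learner both option values are negative, whereas the relative learner assigns a positive value to the option that avoids punishment more often, which is one way to see how re-referencing outcomes to the context can benefit punishment-avoidance learning.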