The temporal difference reinforcement learning model has successfully accounted for many aspects of phasic dopamine activity, but a number of major discrepancies have been discovered. Some of these discrepancies can be traced back to the choice of stimulus representation used by early models. In the real world, stimuli often provide ambiguous information about the underlying state, in which case the optimal representation is a conditional distribution over states given the observed stimuli: the belief state. I will present several experimental studies and computational analyses of the dopamine system that provide support for this model. These findings demonstrate the importance of representational assumptions for understanding learning algorithms in the brain.
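The idea can be sketched in a few lines of code. The following is a minimal, illustrative example (all numbers and the two-state setup are assumptions for exposition, not taken from the talk): an agent receives ambiguous binary stimuli emitted by one of two hidden states, maintains a Bayesian belief state over those hidden states, and runs TD(0) with values linear in the belief vector. The TD prediction error `delta` is the quantity the model relates to phasic dopamine activity.

```python
import numpy as np

# Hypothetical two-state world: stimuli are ambiguous about the hidden state.
# P(obs | state); rows index the state, columns index the binary observation.
likelihood = np.array([[0.7, 0.3],   # state 0 mostly emits obs 0
                       [0.3, 0.7]])  # state 1 mostly emits obs 1

def update_belief(belief, obs):
    """Bayes rule: posterior over hidden states given the observed stimulus."""
    posterior = belief * likelihood[:, obs]
    return posterior / posterior.sum()

alpha, gamma = 0.2, 0.9   # learning rate, discount factor (illustrative values)
w = np.zeros(2)           # value weights; V(belief) = w @ belief

# One episode: a run of ambiguous cues, then a reward at the end.
obs_sequence, final_reward = [1, 1, 1], 1.0
belief = np.array([0.5, 0.5])        # uniform prior over hidden states
for t, obs in enumerate(obs_sequence):
    next_belief = update_belief(belief, obs)
    terminal = (t == len(obs_sequence) - 1)
    r = final_reward if terminal else 0.0
    v_next = 0.0 if terminal else w @ next_belief
    delta = r + gamma * v_next - w @ belief   # TD error (dopamine analogue)
    w = w + alpha * delta * belief            # credit assigned via the belief
    belief = next_belief
```

Because the observations favour hidden state 1, the belief concentrates on that state and the reward-driven TD update credits it accordingly; with a raw stimulus representation the agent could not apportion credit between the two indistinguishable states at all.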