Multiple complementary approaches are available for modelling the adaptive behaviour of individual agents in complex systems; in this work, reinforcement learning is the focus. A core problem here is that unambiguously identifying the rewards driving the behaviour of entities operating in complex (open-ended) real-world environments is difficult, if not impossible. In part this is because the true goals of agents are not observable; in addition, reward-driven behaviours emerge endogenously over longer timescales and are dynamically updated as environments change. Defining a reliable reward function to use in models therefore remains a challenge. Reproducing the emergence of rewards is a potential solution, and would have application in many domains. Simulation experiments will be described which assess a candidate algorithm for the dynamic updating of rewards: RULE (Reward Updating through Learning and Expectation). The approach is tested in a simplified ecosystem-like setting, where manipulated conditions challenge the survival of a population of entities and call for significant behavioural change.
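The abstract does not specify the mechanics of RULE, so the following Python snippet is only a minimal sketch of the general idea of expectation-driven reward updating, not the RULE algorithm itself. It shows an agent whose internal reward weights are nudged toward observed outcomes whenever outcomes diverge from its expectations, in an environment whose payoffs change midway to force behavioural adaptation. All names and parameters (true_payoff, alpha, the update rule) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-feature environment: each feature has a payoff that
# can drift over time (e.g. a food source becoming scarce).
n_features = 2
true_payoff = np.array([1.0, 0.2])   # assumed ground-truth payoffs

# The agent's internal reward weights start uninformed.
reward_weights = np.zeros(n_features)
alpha = 0.1                          # assumed learning rate

for step in range(2000):
    # Halfway through, the environment changes: feature 0 stops paying
    # off, so the agent must revise what it treats as rewarding.
    if step == 1000:
        true_payoff = np.array([0.1, 1.0])

    # Softmax action selection over the current internal reward weights.
    prefs = np.exp(reward_weights - reward_weights.max())
    probs = prefs / prefs.sum()
    choice = rng.choice(n_features, p=probs)

    # Observed outcome vs. the agent's expectation for that feature.
    outcome = true_payoff[choice] + rng.normal(0.0, 0.05)
    expectation = reward_weights[choice]

    # Reward update: move the internal reward toward the observed
    # outcome, i.e. learning driven by expectation error.
    reward_weights[choice] += alpha * (outcome - expectation)

print(reward_weights)  # weights should track the post-change payoffs
```

Under these assumptions, the weights converge toward the new payoffs after the environmental shift, which is the kind of endogenous, dynamically updated reward structure the abstract describes; the actual RULE mechanism may differ substantially.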