We introduce the algorithmic learning equations, a set of ordinary differential equations that characterize the finite-time and asymptotic behavior of the stochastic interaction between state-dependent learning algorithms in dynamic games. Our framework allows for a variety of information and memory structures, including noisy, perfect, private, and public monitoring, and for the possibility that players use distinct learning algorithms. We prove that play converges to a correlated equilibrium for a family of algorithms under correlated private signals. Finally, we apply our methodology to a repeated 2×2 prisoner's dilemma game with perfect monitoring. We show that algorithms can learn a reward-punishment mechanism to sustain tacit collusion, and that they can also learn to coordinate on cycles of cooperation and defection.
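To make the application concrete, the sketch below simulates two independent state-dependent learners in a repeated 2×2 prisoner's dilemma with perfect monitoring. It is a minimal illustration, not the authors' exact method or parameterization: it uses standard Q-learning, with each agent conditioning on the previous joint action, so that a learned reward-punishment rule (cooperate after mutual cooperation, defect after a defection) can emerge. The payoff matrix, learning rate, discount factor, and exploration schedule are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout, not the paper's exact
# algorithm): two independent Q-learning agents in a repeated 2x2 prisoner's
# dilemma with perfect monitoring. Each agent conditions on the previous joint
# action, so learned policies can encode reward-punishment rules.
import itertools
import random

ACTIONS = ("C", "D")                       # cooperate / defect
PAYOFFS = {                                # row player's payoff; game is symmetric
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}
STATES = list(itertools.product(ACTIONS, ACTIONS))  # previous joint action

def make_q():
    return {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose(q, state, eps):
    # Epsilon-greedy action selection
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def run(periods=200_000, alpha=0.1, gamma=0.95, seed=0):
    random.seed(seed)
    q1, q2 = make_q(), make_q()
    state = ("C", "C")
    for t in range(periods):
        eps = max(0.01, 1.0 - t / (0.8 * periods))   # decaying exploration
        a1 = choose(q1, state, eps)
        a2 = choose(q2, state, eps)
        r1 = PAYOFFS[(a1, a2)]
        r2 = PAYOFFS[(a2, a1)]
        next_state = (a1, a2)
        # Standard one-step Q-learning update for each agent
        q1[(state, a1)] += alpha * (r1 + gamma * max(q1[(next_state, a)] for a in ACTIONS) - q1[(state, a1)])
        q2[(state, a2)] += alpha * (r2 + gamma * max(q2[(next_state, a)] for a in ACTIONS) - q2[(state, a2)])
        state = next_state
    # Report each agent's greedy action at every state: a reward-punishment
    # rule appears as "C after (C, C)" but "D after a state containing D".
    for s in STATES:
        g1 = max(ACTIONS, key=lambda a: q1[(s, a)])
        g2 = max(ACTIONS, key=lambda a: q2[(s, a)])
        print(f"state {s}: player 1 plays {g1}, player 2 plays {g2}")

if __name__ == "__main__":
    run()
```

Depending on the random seed and exploration schedule, runs of a sketch like this can settle on mutual-defection play, trigger-style collusive rules, or alternating patterns, which is the kind of outcome variety the abstract describes.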