We introduce the algorithmic learning equations, a set of ordinary differential equations that characterize the finite-time and asymptotic behavior of the stochastic interaction between state-dependent learning algorithms in dynamic games. Our framework allows for a variety of information and memory structures, including noisy, perfect, private, and public monitoring, and for the possibility that players use distinct learning algorithms. We prove that play converges to a correlated equilibrium for a family of algorithms under correlated private signals. Finally, we apply our methodology to a repeated 2×2 prisoner’s dilemma game with perfect monitoring. We show that algorithms can learn a reward-punishment mechanism to sustain tacit collusion. Additionally, we find that algorithms can learn to coordinate in cycles of cooperation and defection.
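To make the repeated prisoner's dilemma setting concrete, below is a minimal simulation sketch under assumptions of our own: the abstract does not specify which learning algorithms are used, so we take two independent Q-learning agents whose state is the previous joint action (perfect monitoring), with illustrative payoffs, learning rate, discount factor, and exploration schedule. A policy such as "cooperate after mutual cooperation, defect otherwise" is one example of the reward-punishment mechanisms described above; whether the agents actually learn it depends on the chosen parameters.

```python
# Illustrative sketch only, not the authors' method: two state-dependent
# Q-learning agents play a repeated 2x2 prisoner's dilemma with perfect
# monitoring. The state is the previous action profile, so a learned policy
# like "cooperate after (C, C), defect otherwise" acts as a reward-punishment
# scheme. Payoffs, alpha, gamma, and the exploration schedule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

C, D = 0, 1
# Row player's payoffs for a standard prisoner's dilemma;
# the column player's payoff is obtained by swapping the action indices.
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

n_states = 4              # previous joint action (a1, a2) encoded as 2*a1 + a2
alpha, gamma = 0.1, 0.95
T = 200_000

Q = [np.zeros((n_states, 2)) for _ in range(2)]   # one Q-table per player
state = int(rng.integers(n_states))

for t in range(T):
    eps = max(0.01, 1.0 - t / (0.8 * T))          # decaying exploration
    acts = []
    for i in range(2):
        if rng.random() < eps:
            acts.append(int(rng.integers(2)))
        else:
            acts.append(int(np.argmax(Q[i][state])))
    a1, a2 = acts
    rewards = (PAYOFF[a1, a2], PAYOFF[a2, a1])
    next_state = 2 * a1 + a2
    for i, (a, r) in enumerate(zip(acts, rewards)):
        target = r + gamma * Q[i][next_state].max()
        Q[i][state, a] += alpha * (target - Q[i][state, a])
    state = next_state

# Inspect the greedy policies: entry s is the action taken after the joint
# action encoded by s (0 = cooperate, 1 = defect).
for i in range(2):
    print(f"player {i} greedy policy:", np.argmax(Q[i], axis=1))
```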