I fully characterize the outcomes of a wide class of model-free reinforcement learning algorithms in a prisoner’s
dilemma. The behavior is studied in the limit as players explore their options sufficiently and eventually stop experimenting.
Whether the players learn to cooperate or defect can be determined in closed form from the relationship between the learning rate and the payoffs of the game. The results generalize to asymmetric learners and many experimentation rules, with implications for algorithmic collusion.
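To fix ideas, the setting can be illustrated with a minimal simulation: two stateless Q-learners repeatedly play a prisoner's dilemma with epsilon-greedy experimentation that decays toward zero. The payoff values, decay schedule, learning rule, and all function names below are illustrative assumptions for this sketch, not the speaker's exact model.

```python
import random

# Hypothetical prisoner's dilemma payoffs (temptation > reward > punishment > sucker).
# Each entry gives (row player's payoff, column player's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}
ACTIONS = ["C", "D"]


class QLearner:
    """Stateless Q-learner with epsilon-greedy experimentation (illustrative)."""

    def __init__(self, alpha, seed):
        self.alpha = alpha                     # learning rate
        self.q = {a: 0.0 for a in ACTIONS}     # action-value estimates
        self.rng = random.Random(seed)

    def act(self, epsilon):
        if self.rng.random() < epsilon:
            return self.rng.choice(ACTIONS)    # experiment
        return max(self.q, key=self.q.get)     # play greedily

    def update(self, action, reward):
        # Recency-weighted average of realized payoffs for the chosen action.
        self.q[action] += self.alpha * (reward - self.q[action])


def simulate(rounds=50_000, alpha=0.1, seed=0):
    p1, p2 = QLearner(alpha, seed), QLearner(alpha, seed + 1)
    for t in range(rounds):
        eps = 1.0 / (1 + 1e-3 * t)             # experimentation decays to zero
        a1, a2 = p1.act(eps), p2.act(eps)
        r1, r2 = PAYOFFS[(a1, a2)]
        p1.update(a1, r1)
        p2.update(a2, r2)
    # Greedy actions once experimentation has effectively stopped:
    return p1.act(0.0), p2.act(0.0)


print(simulate())
```

Whether this pair settles on ("C", "C") or ("D", "D") depends on the learning rate and the payoffs, which is the kind of closed-form relationship the talk characterizes.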
Zoom link: us02web.zoom.us/j/83496520603
18 January 2022, 12:00-12:45pm (Tuesday)
To sign up for a 30-minute meeting with the speaker, please add your name at this link: docs.google.com/spreadsheets/d/1Ux9g5nXtbFmqIA3DWZYUljZthkvRd-Qg/edit#gid=1190663177