Stability and Robustness in Misspecified Learning Models (with Ryota Iijima and Yuhta Ishii)

We present a unified framework for analyzing learning outcomes in a broad class of misspecified learning environments, spanning single-agent learning (both active and passive) and sequential social learning. Our main results provide general criteria to determine, without the need to explicitly analyze learning dynamics, when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs) or fail to converge. We highlight two main applications of these criteria: First, they unify and generalize many convergence results in previously studied settings and pave the way for the study of new settings. Second, we apply our criteria to analyze whether learning outcomes are robust to the details of an environment: We identify a natural class of environments (including costly information acquisition and sequential social learning) in which, unlike in most settings the literature has focused on so far, small changes to the true data-generating process or agents’ perception thereof can have a large effect on long-run beliefs. In particular, even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can lead to extreme failures of learning.
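To fix ideas, the sketch below is a minimal numerical illustration (not from the paper, with hypothetical parameter values) of the simplest passive benchmark: an agent observes i.i.d. coin flips but her model omits the true success probability. By Berk's classical result, her posterior concentrates on the parameter in her model closest to the truth in Kullback-Leibler divergence, which serves as the long-run belief in this misspecified environment. The paper's criteria concern far richer settings, including active and social learning, where this long-run belief can respond discontinuously to small changes in the environment.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical toy example: data are i.i.d. coin flips with true success
# probability p_true = 0.6, but the agent's (misspecified) model only
# contains theta in {0.4, 0.7}; neither equals the truth.
p_true = 0.6
thetas = np.array([0.4, 0.7])   # parameter values the agent entertains
prior = np.array([0.5, 0.5])    # initial belief over thetas

T = 5000
flips = rng.random(T) < p_true  # observed data

log_post = np.log(prior)
for x in flips:
    # Bayesian update under the agent's Bernoulli likelihood.
    log_post += np.where(x, np.log(thetas), np.log(1 - thetas))
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Berk (1966): the posterior concentrates on the KL-divergence-minimizing
# parameter.  Here KL(Bern(0.6) || Bern(0.7)) < KL(Bern(0.6) || Bern(0.4)),
# so essentially all mass ends up on theta = 0.7.
print(dict(zip(thetas, post.round(4))))
```

In this passive i.i.d. case the long-run belief varies smoothly with small perturbations of the environment; the fragility highlighted in the abstract arises in environments where observed data depend on agents' own beliefs or actions.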

Please sign up for meetings here: docs.google.com/spreadsheets/d/1UnW-JOf7vD54h9QKWYWXQS1Wc-H4oP7nbywDdeJvr4o/edit#gid=0