Manipulation-Proof Machine Learning


This seminar will take place on Zoom

An increasing number of decisions are guided by machine learning algorithms. In many settings, from consumer credit to criminal justice, those decisions are made by applying an estimator to data on an individual’s observed behavior. But when consequential decisions are encoded in rules, individuals may strategically alter their behavior to achieve desired outcomes. This paper develops a new class of estimator that is stable under manipulation, even when the decision rule is fully transparent. We explicitly model the costs of manipulating different behaviors, and identify decision rules that are stable in equilibrium. Through a large field experiment in Kenya, we show that decision rules estimated with our strategy-robust method outperform those based on standard supervised learning approaches.
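The equilibrium idea in the abstract can be illustrated with a small sketch. This is not the paper's actual estimator, and every modeling choice here is an assumption for illustration: a linear decision rule, quadratic manipulation costs `c` (so an agent facing rule `theta` optimally shifts feature `j` by `theta_j / c_j`), and a damped fixed-point iteration that refits the rule on the manipulated data the rule itself induces until it is self-consistent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))                  # true, unmanipulated behaviors
beta = np.array([1.0, 1.0])                  # how behavior maps to the outcome
y = X @ beta + rng.normal(scale=0.5, size=n)

c = np.array([0.5, 50.0])                    # feature 0 is cheap to manipulate

def best_response(X, theta):
    # With score theta @ x and cost c_j * dx_j**2 / 2 for shifting feature j
    # by dx_j, each agent's optimal shift is theta_j / c_j.
    return X + theta / c

def deployed_mse(theta):
    # Error of a fully transparent rule once agents best-respond to it.
    return np.mean((y - best_response(X, theta) @ theta) ** 2)

# Naive rule: standard supervised fit on pre-deployment data.
theta_naive = np.linalg.lstsq(X, y, rcond=None)[0]

# Equilibrium-stable rule: damped fixed-point iteration -- refit on the
# best responses that the current rule induces, until self-consistent.
theta = theta_naive.copy()
for _ in range(500):
    refit = np.linalg.lstsq(best_response(X, theta), y, rcond=None)[0]
    theta = 0.5 * theta + 0.5 * refit

mse_naive, mse_stable = deployed_mse(theta_naive), deployed_mse(theta)
```

In this toy equilibrium the stable rule shrinks the weight on the cheaply manipulated feature and attains lower deployed error than the naive fit, which mirrors the abstract's qualitative claim without reproducing the paper's method.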

Link to paper: arxiv.org/abs/2004.03865

Please sign up for meetings here: docs.google.com/spreadsheets/d/1GRwPBmtpUwstC4fdLZrnxfnARNYHedHykoRZG4Xq2Bo/edit#gid=0