Curbing Bias with Reputation

An informed, but possibly biased, agent takes an action with payoff consequences for itself and for a principal. The agent values the direct payoff from the action as well as a reputational payoff from appearing unbiased to an observer. Reputational concerns affect the principal's payoff positively, by curbing the agent's bias, but also negatively, by distorting the unbiased agent's actions. The net effect of reputation is positive if and only if the relative importance of reputation to the unbiased versus the biased agent, denoted alphaU/alphaB, is small enough. We consider a design problem in which the principal chooses how transparent the agent's action is to the observer. We show that the optimal degree of transparency is decreasing in alphaU/alphaB, and we argue that the principal can infer alphaU/alphaB, and thus make design choices, from observable equilibrium features. Specifically, we show that the principal should decrease the degree of transparency if and only if 'reputable actions' are used too often in equilibrium.
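One way to formalize the payoff structure described above is the following sketch; the notation (action $a$, direct payoff $v_i$, reputation function $R$) is illustrative and not necessarily the paper's own:

```latex
% Agent of type i (U = unbiased, B = biased) chooses an action a.
% Total payoff = direct payoff from the action
%              + type-specific weight alpha_i on reputation R(a).
u_i(a) \;=\; v_i(a) \;+\; \alpha_i \, R(a),
\qquad i \in \{U, B\},
```

where $R(a)$ would be the observer's posterior belief that the agent is unbiased after seeing $a$. Under this reading, $\alpha_B$ governs how strongly reputation disciplines the biased type, while $\alpha_U$ governs how strongly it distorts the unbiased type, so the ratio alphaU/alphaB captures the trade-off the abstract describes.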