An informed, but perhaps biased, agent takes an action with payoff consequences for themselves and a principal. The agent values the direct payoff from the action, as well as a reputational payoff from appearing unbiased to an observer. Reputational concerns affect the principal's payoff positively, by curbing the agent's bias, but also negatively, by distorting the unbiased agent's actions. The net effect of reputation is positive if and only if the relative importance of reputation to the unbiased versus the biased agent (denoted αU/αB) is small enough. We consider a design problem in which the principal chooses how transparent the agent's action is to the observer. We show that the optimal degree of transparency is decreasing in αU/αB, and we argue that the principal can infer αU/αB (and thus make design choices) from observable equilibrium features. Specifically, we show that the principal should decrease the degree of transparency if and only if 'reputable actions' are used too often in equilibrium.