Variational Bayesian Inference for Agent-based Models

Calibrating agent-based models (ABMs) is a crucial but challenging step in relating ABMs to the real world. Multiple factors contribute to this difficulty: the complexity of ABMs typically makes their likelihood functions prohibitively difficult and expensive to write down or evaluate, precluding the direct application of likelihood-based inference techniques; ABMs are generally expensive to forward-simulate, which poses a problem for calibration procedures that require repeated simulation from the model; and the inherently discrete nature of many ABMs prevents the immediate use of gradient-assisted calibration methods that could otherwise improve the efficiency of simulation-based inference procedures.

In this talk, we will discuss how variational Bayesian inference schemes can be combined with powerful density estimation techniques from probabilistic machine learning to approximate parameter posterior distributions for ABMs. In particular, we will consider optimisation-based approaches to targeting posterior distributions in the general case of non-differentiable ABMs, before discussing how differentiable programming can, in many cases, be used to exploit gradients within the agent-based simulator to improve the efficiency of the optimisation procedure. Finally, we will demonstrate with experiments that such approaches can yield accurate inferences, and discuss avenues for future work. This talk will be based on papers co-authored with Arnau Quera-Bofarull (Oxford), Ayush Chopra (MIT), Prof. J. Doyne Farmer (Oxford), Prof. Anisoara Calinescu (Oxford), and Prof. Michael J. Wooldridge (Oxford).
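To give a flavour of the kind of optimisation-based scheme referred to above, the following is a minimal, hypothetical sketch rather than the method from the talk: it fits a diagonal-Gaussian variational posterior over a single parameter of a toy differentiable simulator by minimising a simulation-based loss plus a KL term, using reparameterised gradients that flow through the simulator. The simulator, the squared-error loss, and the Gaussian variational family are all illustrative assumptions; in practice one would typically use the actual ABM and a richer density estimator such as a normalizing flow.

# Illustrative sketch only (not the authors' implementation).
import torch
from torch.distributions import Normal, kl_divergence

torch.manual_seed(0)

def simulator(theta, n_steps=50):
    # Toy stand-in for a differentiable ABM: a drift-diffusion path whose
    # drift is the parameter being inferred.
    noise = torch.randn(n_steps)
    return torch.cumsum(theta + 0.5 * noise, dim=0)

y_obs = simulator(torch.tensor(1.5)).detach()   # synthetic "observed" data

prior = Normal(0.0, 5.0)                        # prior over the parameter
mu = torch.zeros(1, requires_grad=True)         # variational parameters
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=5e-2)

for step in range(2000):
    q = Normal(mu, log_sigma.exp())
    theta = q.rsample()                         # reparameterised sample, so gradients flow
    y_sim = simulator(theta.squeeze())
    data_loss = ((y_sim - y_obs) ** 2).mean()   # simulation-based loss against observations
    loss = data_loss + kl_divergence(q, prior).sum()
    opt.zero_grad()
    loss.backward()                             # backpropagates through the simulator
    opt.step()

print(f"approximate posterior: mean {mu.item():.2f}, sd {log_sigma.exp().item():.2f}")

For a non-differentiable ABM, the same objective could instead be optimised with gradient estimators that do not differentiate through the simulator (e.g. score-function estimators), at the cost of higher-variance gradients; exploiting simulator gradients via differentiable programming, where available, is what improves the efficiency of the optimisation.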