Understanding neural networks and quantification of their uncertainty via exactly solvable models


This talk is the annual Oxford Maths & Stats Colloquium. A drinks reception will follow the talk in the ground-floor social area.

The affinity between statistical physics and machine learning has a long history. Theoretical physics often proceeds via exactly solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward neural networks. I will then discuss how this approach allows us to analyse uncertainty quantification in neural networks, a topic that has gained urgency with the dawn of widely deployed artificial intelligence. I will conclude with specific open questions that I perceive as important for the field.