"Learning Stochastic Inverses", a presentation of the paper by Andreas Stuhlmüller, Jessica Taylor and Noah D. Goodman

Paper abstract:

“We describe a class of algorithms for amortized inference in Bayesian networks. In this setting, we invest computation upfront to support rapid online inference for a wide range of queries. Our approach is based on learning an inverse factorization of a model’s joint distribution: a factorization that turns observations into root nodes. Our algorithms accumulate information to estimate the local conditional distributions that constitute such a factorization. These stochastic inverses can be used to invert each of the computation steps leading to an observation, sampling backwards in order to quickly find a likely explanation. We show that estimated inverses converge asymptotically in number of (prior or posterior) training samples. To make use of inverses before convergence, we describe the Inverse MCMC algorithm, which uses stochastic inverses to make block proposals for a Metropolis-Hastings sampler. We explore the efficiency of this sampler for a variety of parameter regimes and Bayes nets.”
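To make the two ideas in the abstract concrete, learning local inverse conditionals from prior samples and then using them as Metropolis-Hastings proposals, here is a minimal Python sketch on a hypothetical two-node Bernoulli network. The network, parameter values, and all function names are illustrative assumptions for this sketch, not the paper's code or its general multi-node algorithm.

```python
# A minimal sketch (not the authors' implementation) on a hypothetical
# two-node Bayes net: latent z ~ Bernoulli(0.5), observation x | z.
# Forward factorization: p(z) p(x | z).
# Inverse factorization: p(x) p(z | x), turning the observation x into a root.
import random
from collections import defaultdict

P_Z = 0.5                       # prior p(z = 1)
P_X_GIVEN_Z = {0: 0.2, 1: 0.9}  # likelihood p(x = 1 | z)

def sample_forward():
    """Ancestral sample from the forward factorization p(z) p(x | z)."""
    z = int(random.random() < P_Z)
    x = int(random.random() < P_X_GIVEN_Z[z])
    return z, x

def joint(z, x):
    """Joint probability p(z, x)."""
    pz = P_Z if z == 1 else 1 - P_Z
    px = P_X_GIVEN_Z[z] if x == 1 else 1 - P_X_GIVEN_Z[z]
    return pz * px

# --- Estimate the stochastic inverse p(z | x) by counting prior samples. ---
counts = defaultdict(lambda: [1.0, 1.0])  # Laplace-smoothed: counts[x] = [#z=0, #z=1]
for _ in range(10_000):
    z, x = sample_forward()
    counts[x][z] += 1

def inverse(x):
    """Estimated inverse conditional q(z = 1 | x)."""
    n0, n1 = counts[x]
    return n1 / (n0 + n1)

# --- Inverse MCMC: use the learned inverse as a Metropolis-Hastings proposal. ---
def inverse_mcmc(x_obs, n_steps=5_000):
    z = 0
    samples = []
    for _ in range(n_steps):
        q1 = inverse(x_obs)
        z_new = int(random.random() < q1)
        q_new = q1 if z_new == 1 else 1 - q1  # q(z_new | x)
        q_old = q1 if z == 1 else 1 - q1      # q(z | x)
        # Standard MH acceptance ratio for an independence proposal.
        a = (joint(z_new, x_obs) * q_old) / (joint(z, x_obs) * q_new)
        if random.random() < a:
            z = z_new
        samples.append(z)
    return samples

samples = inverse_mcmc(x_obs=1)
print("estimated p(z = 1 | x = 1):", sum(samples) / len(samples))
# Exact posterior for comparison: 0.45 / (0.45 + 0.10) ≈ 0.818
```

Because the inverse here is learned to convergence, the sampler accepts almost every proposal; the paper's point is that the same MH correction keeps the sampler valid even when the estimated inverses have not yet converged.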


Reference: Stuhlmüller, A., Taylor, J., and Goodman, N. D., "Learning Stochastic Inverses", NIPS 2013. www.mit.edu/~ast/papers/inverses-nips2013.pdf

Speaker bio: BSc in Cognitive Science from the University of Osnabrück, MSc in Computational Statistics and Machine Learning from UCL; now a DPhil student in Statistics at Oxford.