Abstract
A key quantity of interest in Bayesian inference is the expectation of a function with respect to the posterior. Markov chain Monte Carlo is a fundamental tool for consistently computing such expectations by averaging samples drawn from an approximate posterior. However, its feasibility is challenged in the era of so-called Big Data, since all data must be processed in every iteration. Realising that such simulation is an unnecessarily hard problem if the goal is estimation, we construct a computationally scalable methodology that allows unbiased estimation of the required expectations without explicit simulation from the full posterior. The scheme’s variance is finite by construction and straightforward to control, leading to algorithms that are provably unbiased and naturally arrive at a desired error tolerance. This is achieved at an average computational complexity that is sub-linear in the size of the dataset. We demonstrate the utility and generality of the methodology on a range of common statistical models applied to large-scale benchmark and real-world datasets.
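The abstract does not spell out the construction, but its properties (unbiasedness, finite variance by construction, sub-linear average cost) match random-truncation debiasing schemes in the style of Glynn and Rhee. The following minimal Python sketch illustrates that generic idea only, not the speaker's exact method; the function phi(t), the geometric truncation law, and the parameter trunc_p are illustrative assumptions, with phi(t) standing for an estimator of the posterior expectation computed on a growing subset of the data (cheap for small t, accurate for large t).

    import numpy as np

    def debiased_expectation(phi, trunc_p=0.5, rng=None):
        # Single unbiased estimate of lim_{t->inf} E[phi(t)] via random
        # truncation of the telescoping sum sum_t (phi(t) - phi(t-1)),
        # with each kept term reweighted by 1 / P(T >= t).
        # phi, trunc_p and the geometric law are illustrative assumptions.
        rng = np.random.default_rng() if rng is None else rng
        T = rng.geometric(trunc_p) - 1   # truncation level, support {0, 1, 2, ...}
        estimate = phi(0)                # base term; P(T >= 0) = 1
        prev = estimate
        for t in range(1, T + 1):
            cur = phi(t)
            # P(T >= t) = (1 - trunc_p)^t for this geometric truncation law
            estimate += (cur - prev) / (1.0 - trunc_p) ** t
            prev = cur
        return estimate

Averaging many independent replicates of such an estimator drives the Monte Carlo error below any desired tolerance, and because the truncation law puts most of its mass on small t, each replicate touches only a small fraction of the data on average, which is what yields a sub-linear average cost.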
—
Speaker’s bio
Heiko Strathmann first studied jazz guitar in the Netherlands, then completed a BSc in Computer Science in Germany, followed by an MSc in Machine Learning at University College London. Since 2013, he has been a PhD student at the Gatsby Unit for Computational Neuroscience and Machine Learning at UCL. He is an open-source activist and one of the main developers and organisers of the Shogun Machine Learning Toolbox.