The coding theorem from algorithmic information theory is one of the most profound and underappreciated results in science. It can be viewed as a computational reformulation of the infinite monkey theorem: monkeys on universal computers instead of on typewriters. The theorem predicts that many natural processes are exponentially biased toward highly compressible outputs, that is, toward outcomes with low Kolmogorov complexity. I will discuss applications of this principle to biological evolution, where it implies a strong preference for symmetry [1], and to machine learning, where it predicts an Occam’s razor–like bias that helps explain why deep neural networks can generalize effectively despite being heavily overparameterized [2]. The central question I would like to ask you is how this generic principle extends to neural learning.
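To make the "exponential bias" quantitative: informally, for a universal prefix machine $U$ fed uniformly random program bits, the coding theorem relates the probability of producing an output $x$ to its Kolmogorov complexity $K(x)$,

$$ m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|} \;=\; 2^{-K(x) + O(1)}, $$

so simple (highly compressible) outputs are exponentially more likely to be generated than complex ones.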
[1] Symmetry and simplicity spontaneously emerge from the algorithmic nature of evolution, I. G. Johnston et al., PNAS 119, e2113883119 (2022).
[2] Deep neural networks have an inbuilt Occam's razor, C. Mingard et al., Nat. Commun. 16, 220 (2025).