OxTalks will soon move to the new Halo platform and become 'Oxford Events.' The migration will require a freeze of OxTalks; this was previously planned for Friday 14th November, and a new date will be shared as soon as it is available (full details will be available on the Staff Gateway).
In the meantime, the OxTalks site will remain active and events will continue to be published.
If staff have any questions about the Oxford Events launch, please contact halo@digital.ox.ac.uk.
Bio: Talfan Evans is a research scientist at Google DeepMind. His work focuses on developing scalable data curation strategies for compute-efficient large-scale pretraining. He holds an MEng from Keble College and completed his PhD in Cognitive Neuroscience at UCL, where he worked on adapting message-passing algorithms from the autonomous driving literature to explain neural activity during spatial exploration. As a postdoc with Andrew Davison at Imperial, he worked on real-time computer vision systems before moving to DeepMind.
Blurb: Scaling laws for large foundation models tell us that continued additive improvements in performance require orders-of-magnitude increases in compute and data. In this talk, I’ll present work that paints a more optimistic picture: actively choosing which data to train on can shift these curves in our favour, producing significantly more performant models for the same compute budget.
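(Background note for attendees, not part of the talk abstract: scaling laws of this kind are commonly written in the Chinchilla-style form

    L(N, D) = E + A / N^alpha + B / D^beta

where L is pretraining loss, N is model parameter count, D is the number of training tokens, and E, A, B, alpha, beta are empirically fitted constants. Data curation approaches of the sort described above aim, in effect, to improve the data term of such a curve, so the same compute budget yields a lower loss.)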