Meta-learning, the ability to acquire the right structure or prior knowledge to facilitate new learning, relies heavily on structured data. Humans, deep reinforcement learning (RL) agents, and even large language models (LLMs) are all capable of meta-learning. While recurrent neural network-based models can be linked to neural activations in biological organisms, understanding how LLMs perform this quick, in-context learning is more difficult. LLMs are pre-trained on human-generated artifacts, such as the internet and books, which contain substantial structure and enable good generalization. However, the lack of specific knowledge about their training data makes it challenging to quantify their performance, especially as they are increasingly deployed at scale in the real world. New approaches, taken directly from the cognitive sciences, have been introduced that allow us to interrogate more closely how these models work. In this talk I discuss how we can better understand both deep RL agents and LLMs by examining the structure within their training data through this lens, and why these models are so powerful.