As we listen to speech, our brains actively compute the meaning (semantics) of individual words. Inspired by the success of large language models (LLMs), we hypothesized that the brain employs vectorial coding principles for semantics. Just as LLMs represent meaning using patterns of numbers, the brain might do something similar, using patterns of activity across many neurons to represent what words mean. To test this, we recorded from hundreds of individual neurons in the human hippocampus, a brain area known to support memory and meaning, while people listened to stories.

We found that groups of neurons respond differently depending on a word's meaning, especially when that meaning depends on the context of the sentence, much as contextual language models such as BERT do. In contrast, models that ignore context, like Word2Vec, matched the brain's patterns less well. Interestingly, we also saw that when two words are very similar in meaning, the brain sometimes makes their activity patterns more distinct, possibly to keep them from being confused. This kind of contrastive coding may help the brain sharpen subtle differences. We also found that words with more than one meaning (like "bank") evoked a wider range of brain responses, highlighting the role of context even further.

Overall, our results suggest that the human hippocampus encodes meaning using flexible, context-sensitive patterns similar to the vector-based systems used in modern AI.
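The contrast between static and contextual word vectors can be illustrated with a deliberately simplified toy sketch. This is not the study's analysis pipeline; the vectors, the "context" words, and the averaging rule below are all invented for demonstration. The point is only that a Word2Vec-style model assigns one fixed code to "bank", while a BERT-style model lets the sentence pull the two senses apart:

```python
# Toy illustration (invented vectors, not the paper's data or method):
# static vs. context-sensitive word codes, compared by cosine similarity.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# A static model (Word2Vec-style) assigns ONE vector per word form,
# so "bank" gets the identical code in every sentence.
static = {
    "bank": [0.7, 0.7, 0.0],
}

# A contextual model (BERT-style) lets surrounding words shift the code.
# Here we fake that by averaging the word vector with a context vector.
context_vectors = {
    "river": [0.0, 1.0, 0.0],   # nature sense of "bank"
    "money": [1.0, 0.0, 0.0],   # finance sense of "bank"
}

def contextual(word, context_word):
    w = static[word]
    c = context_vectors[context_word]
    return [(wi + ci) / 2 for wi, ci in zip(w, c)]

bank_river = contextual("bank", "river")
bank_money = contextual("bank", "money")

# Static code: identical across sentences, similarity ~1.0.
print(cosine(static["bank"], static["bank"]))
# Contextual codes: the two senses drift apart (similarity < 1).
print(cosine(bank_river, bank_money))
```

In this toy setup, the static similarity stays at 1.0 regardless of context, while the contextual similarity drops, which is the kind of context-driven separation the abstract describes finding in hippocampal population activity.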