[CorTalk] The neural code for semantics during language comprehension

As we listen to speech, our brains actively compute the meaning (semantics) of individual words. Inspired by the success of large language models (LLMs), we hypothesized that the brain employs vectorial coding principles for semantics: just as LLMs represent meaning with patterns of numbers, the brain may represent what words mean with patterns of activity across many neurons.

To test this, we recorded from hundreds of individual neurons in the human hippocampus, a brain region known to support memory and meaning, while people listened to stories. Groups of neurons responded differently depending on a word's meaning, especially when that meaning depended on the context of the sentence, much as contextual language models such as BERT represent words. In contrast, models that ignore context, such as Word2Vec, matched the neural patterns less well.

Interestingly, when two words are very close in meaning, the brain sometimes pushes their activity patterns further apart, possibly to keep them from being confused. This kind of contrastive coding may help the brain sharpen subtle distinctions. We also found that ambiguous words with more than one meaning (like "bank") evoked a wider range of neural responses, underscoring the role of context even more.

Overall, our results suggest that the human hippocampus encodes meaning using flexible, context-sensitive population patterns, similar to the vector-based representations used in modern AI.
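The summary does not spell out how a "match" between model embeddings and neural patterns is quantified. One common way to make such a comparison is representational similarity analysis (RSA), sketched below in Python on synthetic data. Everything in the sketch is an illustrative assumption: the array shapes, noise levels, variable names, and the choice of RSA itself are stand-ins, not the study's actual pipeline or recordings.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Illustrative shapes (not from the study): population responses to 50 words
# across 200 neurons, and 64-dimensional model embeddings for the same words.
n_words, n_neurons, n_dims = 50, 200, 64
neural = rng.standard_normal((n_words, n_neurons))

# Toy "contextual" embeddings built to share similarity structure with the
# neural data, plus noise, versus toy "static" embeddings that do not.
# These are stand-ins for BERT and Word2Vec vectors, respectively.
contextual = (neural @ rng.standard_normal((n_neurons, n_dims))
              / np.sqrt(n_neurons)
              + 0.5 * rng.standard_normal((n_words, n_dims)))
static = rng.standard_normal((n_words, n_dims))


def rsm(X):
    """Representational similarity matrix: pairwise cosine similarities."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T


def rsa_score(A, B):
    """Spearman correlation between the upper triangles of two RSMs."""
    iu = np.triu_indices(A.shape[0], k=1)
    return spearmanr(rsm(A)[iu], rsm(B)[iu]).correlation


print(f"neural vs contextual embeddings: rho = {rsa_score(neural, contextual):.2f}")
print(f"neural vs static embeddings:     rho = {rsa_score(neural, static):.2f}")
```

In a real analysis, `neural` would hold each word's evoked population response (for example, spike counts in a window after word onset), while `contextual` and `static` would come from BERT and Word2Vec. A reliably higher correlation for the contextual model is the kind of signature the findings above describe.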