Syntax through synapses: A mechanism for symbolic cognition in prefrontal cortex



Human thought is inherently symbolic. For example, working memory can manipulate information according to rules that apply to any kind of content. How can populations of neurons achieve this? We model prefrontal cortex as holding flexible codes that rapidly change how they map onto the world.

Three examples of symbolic tasks are 1) binding features into objects in working memory, 2) pairing arbitrary stimuli with responses to perform zero-shot instructed actions, and 3) filling the grammatical roles of a sentence with arbitrary content. Flexible codes can solve these tasks. To encode novel information, we employ transient strengthening of synapses, which can form new stable states (attractors) or generate symbolic grammars to represent sentences. This new kind of neural network can turn a static idea into a sequence of words and back, using transient synaptic facilitation to hold the grammatical relationships between words.
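To make the core mechanism concrete, here is a minimal sketch (not the speaker's implementation) of how transient synaptic strengthening can turn a novel pattern into a new attractor. It assumes a small Hopfield-style network: slow Hebbian weights store familiar memories, and a briefly potentiated set of fast weights holds a single novel pattern until the facilitation decays.

```python
# Minimal sketch: transient synaptic facilitation creating a new attractor.
# Assumptions (illustrative, not from the talk): binary +/-1 units, slow
# Hebbian weights W_slow for familiar patterns, fast weights W_fast that are
# transiently potentiated by a one-shot Hebbian rule and later decay.

import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # number of units

def hebbian(patterns):
    """Slow (long-term) weights storing familiar +/-1 patterns."""
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p, p) / N
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=20):
    """Iterated update x <- sign(W x), starting from a degraded cue."""
    x = cue.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0
    return x

familiar = [rng.choice([-1.0, 1.0], size=N) for _ in range(3)]
W_slow = hebbian(familiar)

# A novel pattern is seen once; transient facilitation writes it into fast weights.
novel = rng.choice([-1.0, 1.0], size=N)
W_fast = np.outer(novel, novel) / N
np.fill_diagonal(W_fast, 0.0)

# Corrupt 20% of the novel pattern to use as a recall cue.
cue = novel.copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1

with_fast = recall(W_slow + W_fast, cue)      # while facilitation persists
without_fast = recall(W_slow, cue)            # after facilitation has decayed

print("overlap with novel pattern, fast weights on: ", float(with_fast @ novel) / N)
print("overlap with novel pattern, fast weights off:", float(without_fast @ novel) / N)
```

With the fast weights present, the degraded cue settles into the novel pattern (overlap near 1); once the facilitation decays, the temporary attractor disappears and recall falls back toward the familiar memories.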

I will demonstrate some tiny neural networks that apply this flexible coding to working memory, task rules, and language. But are they a good model of prefrontal cortex? I will show cases in which the model agrees with empirical data, and cases in which it does not.