Manipulation of striatal temperature produces bidirectional and dose-dependent temporal scaling of population activity and decision-making, but not moment-by-moment movement execution.

If you would like to chat with Joe on the day, please do get in touch with Mark Walton. The seminar will be held in the Sherrington Library, in the Sherrington Building. For those who don’t already have keycard access to the Sherrington, delegates will meet you in reception and escort you to the seminar room.

The basal ganglia (BG) are thought to contribute to decision-making and motor control by influencing action selection based on consequences. These functions depend critically on timing information that can be extracted from the evolving state of neural populations in the striatum, the major input area of the BG. However, it is debated whether striatal activity underlies latent, dynamic decision processes or the kinematics of overt movement. Here, we measured the impact of temperature on striatal population activity and on the behavior of rats, and compared the observed effects to neural activity and behavior collected across multiple versions of a temporal categorization task. Cooler temperatures caused dilation, and warmer temperatures contraction, of both neural activity and patterns of judgment in time, mimicking endogenous decision-related variability in striatal activity. However, temperature did not similarly affect movement kinematics. These data provide compelling evidence that the time course of evolving striatal population activity dictates the speed of a latent process used to guide choices, but not moment-by-moment movement execution. More broadly, they establish temporal scaling of population activity as a likely cause, and not simply a correlate, of timing behavior in the brain.

We speculate that these results may reflect an algorithmic division of labor between brain systems. Computations similar to those found in value-based reinforcement learning (RL) models may be implemented within BG circuits to learn control policies involving relatively compact and discrete action spaces (e.g., action selection and decision-making), whereas direct policy learning algorithms may be implemented in other brain systems, such as the cerebellum, to learn control policies involving high-dimensional and continuous action spaces (e.g., continuous control and coordination).
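For readers less familiar with the algorithmic distinction drawn above, the two policy-learning styles can be contrasted in a toy sketch. This is purely illustrative and not from the talk: the task, parameters, and function names are invented, and the sketch only shows the standard textbook forms of a value-based update (tabular Q-learning over discrete actions) versus a direct policy-gradient update (REINFORCE with a Gaussian policy over a continuous action).

```python
# Illustrative only: contrasts value-based RL (discrete actions)
# with direct policy learning (continuous actions). Nothing here
# is taken from the study itself.
import numpy as np

rng = np.random.default_rng(0)

# --- Value-based RL over a compact, discrete action space ---
# Tabular Q-learning: learn action values, then choose by argmax.
n_states, n_actions = 4, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

def q_update(s, a, r, s_next):
    # Standard Q-learning temporal-difference update.
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

# --- Direct policy learning over a continuous action space ---
# Gaussian policy with a learnable mean; REINFORCE-style gradient step.
mu = 0.0        # policy mean (the continuous action tendency)
sigma = 0.5     # fixed exploration noise
lr = 0.05

def policy_update(reward_fn):
    global mu
    a = rng.normal(mu, sigma)            # sample a continuous action
    r = reward_fn(a)
    # grad of log N(a; mu, sigma) w.r.t. mu is (a - mu) / sigma**2
    mu += lr * r * (a - mu) / sigma**2

# Toy usage: reward peaks when the continuous action equals 1.0,
# so mu should drift toward 1.0 over many updates.
for _ in range(2000):
    policy_update(lambda a: -(a - 1.0) ** 2)
```

The point of the contrast is structural: the first learner stores a value per discrete (state, action) pair and selects among a handful of options, whereas the second never enumerates actions at all and instead nudges a continuous policy directly along a reward gradient, which scales more naturally to high-dimensional control.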