The basal ganglia (BG) are thought to contribute to decision-making and motor control by influencing action selection based on consequences. These functions are critically dependent on timing information that can be extracted from the evolving state of neural populations in the striatum, the major input area of the BG. However, it is debated whether striatal activity underlies latent, dynamic decision processes or the kinematics of overt movement. Here, we measured the impact of temperature on striatal population activity and the behavior of rats, and compared the observed effects to neural activity and behavior collected in multiple versions of a temporal categorization task. Cooler temperatures caused dilation, and warmer temperatures contraction, of both neural activity and patterns of judgment in time, mimicking endogenous decision-related variability in striatal activity. However, temperature did not similarly affect movement kinematics. These data provide compelling evidence that the time course of evolving striatal population activity dictates the speed of a latent process that is used to guide choices, but not moment-by-moment movement execution. More broadly, they establish temporal scaling of population activity as a likely cause, and not simply a correlate, of timing behavior in the brain. We speculate that these results may reflect an algorithmic division of labor between brain systems. Computations similar to those found in value-based reinforcement learning (RL) models may be implemented within BG circuits to learn control policies involving relatively compact and discrete action spaces (e.g., action selection and decision-making), whereas direct policy learning algorithms may be implemented in other brain systems, such as the cerebellum, to learn control policies involving high-dimensional and continuous action spaces (e.g., continuous control and coordination).
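The notion of temporal scaling described above can be illustrated with a toy sketch, assuming (hypothetically) that population activity is represented as a time-by-neurons firing-rate matrix: dilation or contraction amounts to resampling the same trajectory through state space over a longer or shorter duration, leaving its shape unchanged. The function name `scale_in_time` and the scaling factors are illustrative choices, not part of the study's methods.

```python
import numpy as np

def scale_in_time(rates, factor):
    """Resample a (time x neurons) firing-rate matrix in time.

    factor > 1 dilates (slows) the trajectory, as with cooling;
    factor < 1 contracts (speeds up) the trajectory, as with warming.
    The path through neural state space is preserved; only its
    speed of traversal changes.
    """
    n_bins, n_neurons = rates.shape
    t_old = np.linspace(0.0, 1.0, n_bins)
    t_new = np.linspace(0.0, 1.0, int(round(n_bins * factor)))
    # Linearly interpolate each neuron's rate onto the new time base.
    return np.stack(
        [np.interp(t_new, t_old, rates[:, n]) for n in range(n_neurons)],
        axis=1,
    )

# Toy population trajectory: 100 time bins, 5 neurons.
rng = np.random.default_rng(0)
pop = rng.random((100, 5))

dilated = scale_in_time(pop, 1.5)     # "cooling": 150 bins, slower
contracted = scale_in_time(pop, 0.7)  # "warming": 70 bins, faster
print(dilated.shape, contracted.shape)  # (150, 5) (70, 5)
```

Because only the time base is rescaled, a downstream readout that thresholds the trajectory's progress would report later or earlier "decision times" under cooling or warming, while the sequence of states traversed stays the same.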