Neural coding of space: from 3D grid cells to object × place

Successful navigation requires knowledge of the space being navigated and of the locations of specific objects within it. To investigate how these variables are represented in 3D space, we recorded from medial entorhinal cortex (MEC) neurons in freely flying bats. Alongside 3D border cells and 3D head-direction cells, we found neurons with multiple 3D firing fields. Many of these multifield neurons were 3D grid cells, whose neighbouring fields were separated by a characteristic distance, forming local order, but lacked any global lattice arrangement of the fields.

We modelled grid cells as emerging from pairwise interactions between fields, which yielded a hexagonal lattice in 2D and local order in 3D, thereby describing findings from both 2D and 3D grid cells with one unifying model. We also found neurons that fired at one or two locations, near specific rest-objects that were identical in shape but differed in their position in the room. Unlike object-vector cells, which are found in the superficial layers of MEC and fire in the vicinity of all objects, these neurons fired in the vicinity of a single object at a specific position and were found mainly in the deep layers of MEC. Moreover, these cells fired near the rest-object when the bat flew to or from that object, but not when it flew through the same location without engaging the object, thus encoding object × position.
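The pairwise-interaction idea can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the authors' model: it assumes a Lennard-Jones-style potential with a single preferred inter-field distance a (the exact interaction term, parameters, and boundary handling in the original model may differ) and relaxes randomly placed field centres by gradient descent.

import numpy as np

rng = np.random.default_rng(0)

def pairwise_grad(pos, a=1.0, eps=1.0):
    """Gradient of the summed pairwise energy for field centres pos (N, D).
    Assumed potential: U(r) = eps * ((a/r)**12 - 2 * (a/r)**6),
    a Lennard-Jones-style form whose minimum sits at the preferred distance a."""
    diff = pos[:, None, :] - pos[None, :, :]        # (N, N, D) pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)            # (N, N) pairwise distances
    np.fill_diagonal(dist, np.inf)                  # exclude self-interaction
    dU = 12 * eps * (a**6 / dist**7 - a**12 / dist**13)   # dU/dr per pair
    return ((dU / dist)[..., None] * diff).sum(axis=1)    # dU/dr times unit vector

def relax_fields(n_fields=40, dim=3, box=6.0, steps=4000, lr=5e-3, gmax=1.0):
    """Scatter fields at random in a square/cubic enclosure, then descend
    the energy; gradients are clipped for stability when fields start close."""
    pos = rng.uniform(0.0, box, size=(n_fields, dim))
    for _ in range(steps):
        g = np.clip(pairwise_grad(pos), -gmax, gmax)
        pos = np.clip(pos - lr * g, 0.0, box)       # keep fields inside the box
    return pos

fields_2d = relax_fields(dim=2)   # relaxes toward hexagonal packing
fields_3d = relax_fields(dim=3)   # local order, no global lattice

The only structured ingredient here is the pairwise term with one preferred distance: in 2D that constraint drives the fields toward global hexagonal order, whereas in 3D it can be satisfied by many locally ordered arrangements, which is the intuition behind the unifying account of the 2D and 3D data.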

Our results suggest two conclusions. First, the finding of local but not global order in 3D grid cells calls for re-thinking the role of grid cells in spatial coding. Second, the data on object × position coding point to a broader prevalence of conjunctive coding of navigational variables than currently thought, including the encoding of objects, which are crucially important for navigation.