Where is my coffee cup? Spatial coding of objects in naturalistic environments

Goal-directed movements rely on both egocentric (target relative to the observer) and allocentric (target relative to landmarks) spatial representations. However, it is still largely unknown which factors determine the use of allocentric information when we localize objects in space. To probe allocentric coding, we developed an object-shift paradigm and asked participants to encode the locations of multiple objects presented in naturalistic 2D scenes or 3D virtual environments. After a brief delay, a test scene reappeared with one of the objects missing (the target) and the other objects (the landmarks) systematically shifted in one direction. After the test scene vanished, participants indicated the remembered location of the target. By quantifying the positional error of the target relative to the physical shift of the landmarks, we determined the contribution of allocentric target representations. In my talk, I will present a series of behavioral experiments in which we identified key factors influencing the use of allocentric spatial coding, such as spatial proximity, task relevance, scene coherence, and scene semantics. Overall, our results show that both low-level and high-level factors influence how humans represent object locations in naturalistic environments.
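
To illustrate the logic of relating positional error to the landmark shift, here is a minimal sketch of one way such an allocentric index could be computed. The abstract does not specify the analysis; the function name, the projection-based definition, and the toy numbers are assumptions for illustration only.

```python
import numpy as np

def allocentric_weight(responses, target_pos, landmark_shift):
    """Illustrative allocentric index (assumed definition, not the authors' analysis).

    responses      : (n_trials, 2) remembered target positions (x, y)
    target_pos     : (2,) true target position at encoding
    landmark_shift : (2,) vector by which the landmarks were displaced

    Returns the mean response error projected onto the landmark-shift direction,
    as a fraction of the shift magnitude: 0 would indicate purely egocentric
    coding, 1 would indicate responses that follow the landmarks completely.
    """
    responses = np.asarray(responses, dtype=float)
    errors = responses - np.asarray(target_pos, dtype=float)   # positional errors per trial
    shift = np.asarray(landmark_shift, dtype=float)
    shift_mag = np.linalg.norm(shift)
    projected = errors @ (shift / shift_mag)                    # error component along the shift
    return projected.mean() / shift_mag

# Toy example: landmarks shifted 2 units to the right; responses drift partway with them.
w = allocentric_weight(
    responses=[[10.8, 5.1], [11.2, 4.9], [11.0, 5.0]],
    target_pos=[10.0, 5.0],
    landmark_shift=[2.0, 0.0],
)
print(f"allocentric weight ~ {w:.2f}")   # ~0.50 for this toy data
```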