Goal-directed movements rely on both egocentric (target relative to the observer) and allocentric (target relative to landmarks) spatial representations. However, it remains largely unknown which factors determine the use of allocentric information when we localize objects in space. To probe allocentric coding, we established an object-shift paradigm and asked participants to encode the locations of multiple objects presented in naturalistic 2D scenes or 3D virtual environments. After a brief delay, a test scene reappeared with one of the objects missing (the target) and the other objects (the landmarks) systematically shifted in one direction. After the test scene vanished, participants indicated the remembered location of the target. By quantifying the positional error of the target relative to the physical shift of the landmarks, we determined the contribution of allocentric target representations. In my talk, I will present a series of behavioral experiments in which we identified key factors influencing the use of allocentric spatial coding, such as spatial proximity, task relevance, scene coherence, and scene semantics. Overall, our results show that both low-level and high-level factors influence how humans represent objects in naturalistic environments.
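The logic of the analysis described above can be illustrated with a minimal sketch: if reported target locations are projected onto the landmark-shift direction and normalized by the shift magnitude, the resulting weight indexes the allocentric contribution. The function name and the specific normalization below are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

def allocentric_weight(encoded_pos, reported_pos, landmark_shift):
    """Illustrative index of allocentric coding (assumed formulation).

    Projects the target's positional error onto the landmark-shift
    direction and normalizes by the shift magnitude. A weight near 1.0
    means the report moved fully with the landmarks (allocentric
    coding); near 0.0 means the shift had no influence (egocentric).
    """
    error = np.asarray(reported_pos, dtype=float) - np.asarray(encoded_pos, dtype=float)
    shift = np.asarray(landmark_shift, dtype=float)
    return float(error @ shift / (shift @ shift))

# Landmarks shifted 2 units rightward; the participant reported the
# target 1 unit right of its encoded location.
w = allocentric_weight([0.0, 0.0], [1.0, 0.0], [2.0, 0.0])
print(w)  # 0.5: landmarks account for half the reported displacement
```

Intermediate weights, as in the example, would indicate a mixture of egocentric and allocentric coding rather than exclusive reliance on either reference frame.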