Goal-directed movements rely on both egocentric (target relative to the observer) and allocentric (target relative to landmarks) spatial representations. However, it is still largely unknown which factors determine the use of allocentric information when we localize objects in space. To probe allocentric coding, we established an object-shift paradigm and asked participants to encode the locations of multiple objects presented in naturalistic 2D scenes or 3D virtual environments. After a brief delay, a test scene reappeared with one of the objects missing (the target) and the other objects (the landmarks) systematically shifted in one direction. After the test scene vanished, participants indicated the remembered location of the target. By quantifying the positional error of the target relative to the physical shift of the landmarks, we determined the contribution of allocentric target representations. In my talk, I will present a series of behavioral experiments in which we identified key factors influencing the use of allocentric spatial coding, such as spatial proximity, task relevance, scene coherence, and scene semantics. Overall, our results show that both low-level and high-level factors influence how humans represent objects in naturalistic environments.
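The quantification step above can be illustrated with a short sketch. The abstract does not specify the exact metric, so this is one common assumption: project each response displacement onto the landmark-shift direction and normalize by the shift magnitude, giving a weight of 0 for purely egocentric coding and 1 when responses fully follow the shifted landmarks. All positions, responses, and function names here are hypothetical.

```python
def allocentric_weight(true_pos, responses, shift_vec):
    """Estimate the allocentric contribution for one condition.

    true_pos  : (x, y) original target location
    responses : list of (x, y) remembered locations reported by participants
    shift_vec : (x, y) physical shift applied to the landmarks

    Returns the mean response displacement projected onto the shift
    direction, as a fraction of the shift magnitude:
    0 -> purely egocentric, 1 -> responses fully follow the landmarks.
    """
    tx, ty = true_pos
    sx, sy = shift_vec
    mag2 = sx * sx + sy * sy  # squared shift magnitude for normalization
    projections = [
        ((rx - tx) * sx + (ry - ty) * sy) / mag2  # scalar projection / |shift|
        for rx, ry in responses
    ]
    return sum(projections) / len(projections)

# Hypothetical example: landmarks shifted 2 units rightward;
# responses drift about halfway toward the shifted landmarks.
w = allocentric_weight(
    true_pos=(0.0, 0.0),
    responses=[(1.0, 0.1), (0.8, -0.1), (1.2, 0.0)],
    shift_vec=(2.0, 0.0),
)
print(round(w, 2))  # → 0.5
```

In practice such a weight would be computed per participant and condition, which is how factors like spatial proximity or task relevance could be compared.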