Object recognition relies on invariant representations. A longstanding view holds that invariances are learned by explicitly coding how visual features are related in space. Here, we asked how invariances are learned for objects defined by relations among features in time (temporal objects). We trained people to classify auditory, visual and spatial temporal objects, each composed of four successive features, into categories defined by sequential transitions across a two-dimensional feature manifold, and measured their tendency to transfer this knowledge when categorising novel objects with rotated transition vectors. Rotation-invariant temporal objects could only be learned if their features were explicitly spatial or had been associated with a physical spatial location in a prior task. Thus, space acts as a scaffold for generalising information in time.