Short Bio:
Madhu Vankadari is a doctoral candidate in the Cyber Physical Systems group at the University of Oxford, supervised by Prof. Niki Trigoni and Prof. Andrew Markham. Prior to Oxford, he worked as a machine vision researcher at TCS Research in India. Madhu's research applies deep learning to SLAM-related challenges such as depth estimation, camera pose estimation, multi-motion scenarios, and visual place recognition. His work finds applications in robotics and computer vision, enhancing areas such as autonomous navigation and augmented reality.
Abstract:
Understanding the world in 3D, irrespective of the time of day, is crucial for applications such as autonomous navigation and augmented and virtual reality. Of the sensors that can provide this understanding, cameras are cheap and ubiquitous; however, a camera captures only a 2D projection of the 3D world. Recovering 3D information from one or more 2D images is a long-standing problem in computer vision. Recently, deep learning has made this possible by training a network on a large corpus of images paired with ground-truth depth, and self-supervised learning now achieves the same objective without any ground truth at all. In this talk, I will present some of the latest advances in self-supervised learning, including my own research in this direction.
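The self-supervised idea the abstract alludes to is commonly realised as view synthesis with a photometric loss: warp a source view into the target view using the predicted depth (or disparity) and penalise the appearance difference, so no ground-truth depth is ever needed. A minimal illustrative sketch of that loss, using toy 1-D "images" and hypothetical function names (not the speaker's actual method):

```python
# Toy sketch of the self-supervised photometric loss behind
# monocular depth estimation: instead of ground-truth depth, the
# supervision signal is how well a source view, warped by the
# predicted disparity, reconstructs the target view.
# 1-D "images" (lists of intensities) keep the example self-contained.

def sample(image, x):
    """Linearly interpolate image at continuous coordinate x (clamped)."""
    x = max(0.0, min(x, len(image) - 1.0))
    i = int(x)
    frac = x - i
    j = min(i + 1, len(image) - 1)
    return (1 - frac) * image[i] + frac * image[j]

def photometric_loss(target, source, disparity):
    """Mean L1 error between target and the source warped by disparity.

    disparity[i] says how far pixel i moved between the two views;
    a depth network would predict it, here it is simply given.
    """
    total = 0.0
    for i, t in enumerate(target):
        warped = sample(source, i + disparity[i])
        total += abs(t - warped)
    return total / len(target)

# A scene shifted right by exactly one pixel between the two views:
source = [0.0, 1.0, 2.0, 3.0, 4.0]
target = [1.0, 2.0, 3.0, 4.0, 4.0]
perfect = [1.0] * 5  # the true per-pixel shift
wrong = [0.0] * 5    # a network predicting no shift at all

print(photometric_loss(target, source, perfect))  # 0.0 (good depth)
print(photometric_loss(target, source, wrong))    # 0.8 (bad depth)
```

Minimising this loss over many image pairs drives the network towards correct depth and pose, which is what lets such systems train on raw video alone.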