In complex room settings, machine listening systems may suffer degraded performance due to factors such as room reverberation, background noise, and unwanted sounds. Similarly, machine vision systems can suffer from visual occlusions, insufficient lighting, and background clutter. Combining audio and visual data has the potential to overcome these limitations and enhance machine perception in complex audio-visual environments. In this talk, we will first discuss the machine cocktail party problem and the development of speech source separation algorithms for extracting individual speech sources from sound mixtures. We will then discuss selected works on audio-visual speech separation, which fuse audio and visual data for speech source separation using techniques such as Gaussian mixture models, dictionary learning, and deep learning.
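A common building block behind the separation methods mentioned above (whether the model is a Gaussian mixture, a learned dictionary, or a deep network) is time-frequency masking: a mask is estimated for each source and applied to the mixture spectrogram before resynthesis. The sketch below is illustrative only and is not taken from the speaker's work; it uses an oracle ratio mask on a synthetic two-tone "mixture" (in practice the mask would be estimated by one of the models named in the abstract), with a minimal STFT/inverse-STFT written in NumPy.

```python
import numpy as np

def stft(x, n_fft=256, hop=128):
    """Windowed short-time Fourier transform (rows = frames)."""
    win = np.hanning(n_fft)
    frames = [win * x[i:i + n_fft] for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(X, n_fft=256, hop=128):
    """Weighted overlap-add inverse STFT with window-energy normalization."""
    win = np.hanning(n_fft)
    out = np.zeros(hop * (len(X) - 1) + n_fft)
    norm = np.zeros_like(out)
    for i, frame in enumerate(np.fft.irfft(X, n=n_fft, axis=1)):
        out[i * hop:i * hop + n_fft] += frame * win
        norm[i * hop:i * hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)

# Two synthetic "speakers" occupying disjoint frequency bands.
sr = 8000
t = np.arange(sr) / sr
s1 = np.sin(2 * np.pi * 300 * t)
s2 = np.sin(2 * np.pi * 1500 * t)
mix = s1 + s2

S1, S2, M = stft(s1), stft(s2), stft(mix)

# Oracle ratio mask: |S1|^2 / (|S1|^2 + |S2|^2).
# In a real system this mask is *estimated* (e.g. by a GMM, a learned
# dictionary, or a DNN, possibly conditioned on visual features).
mask = np.abs(S1) ** 2 / (np.abs(S1) ** 2 + np.abs(S2) ** 2 + 1e-8)

# Apply the mask to the mixture and resynthesize the first source.
est1 = istft(mask * M)
```

Because the two tones occupy disjoint frequency bins, the masked mixture closely recovers the first source; with real speech the sources overlap in time-frequency, which is exactly why the mask-estimation models discussed in the talk are needed.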