Audio-Visual Speech Source Separation
In complex room settings, machine listening systems can degrade in performance due to factors such as room reverberation, background noise, and unwanted sound sources. Similarly, machine vision systems can suffer from visual occlusions, poor lighting, and background clutter. Combining audio and visual data has the potential to overcome these limitations and enhance machine perception in complex audio-visual environments. In this talk, we will first discuss the machine cocktail party problem and the development of speech source separation algorithms for extracting individual speech sources from sound mixtures. We will then discuss selected works on audio-visual speech separation, including the fusion of audio and visual data for speech source separation using techniques such as Gaussian mixture models, dictionary learning, and deep learning.
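As background to the abstract, the sketch below illustrates one classical dictionary-learning approach to audio-only speech separation: per-speaker spectral dictionaries are learned with non-negative matrix factorisation (NMF), and a two-speaker mixture is then separated by estimating activations over the concatenated dictionaries and applying soft time-frequency masks. This is a generic, illustrative example and not the speaker's method; the signal names (train_a, train_b, mixture) and parameter choices are placeholders.

```python
# Illustrative sketch only: supervised NMF dictionary learning for
# two-speaker separation. Assumes mono NumPy arrays `train_a`, `train_b`
# (per-speaker training audio) and `mixture`, all at sample rate `fs`,
# are already loaded; these names are placeholders.
import numpy as np
from scipy.signal import stft, istft
from sklearn.decomposition import NMF

def learn_dictionary(signal, fs, n_atoms=40):
    """Learn a spectral dictionary (NMF basis) from one speaker's audio."""
    _, _, Z = stft(signal, fs=fs, nperseg=1024)
    V = np.abs(Z)  # magnitude spectrogram (freq bins x frames)
    model = NMF(n_components=n_atoms, init="random",
                max_iter=300, random_state=0)
    model.fit(V.T)                 # NMF expects frames x freq bins
    return model.components_.T     # freq bins x atoms

def separate(mixture, fs, W_a, W_b, n_iter=200):
    """Separate a two-speaker mixture using fixed learned dictionaries."""
    _, _, Z = stft(mixture, fs=fs, nperseg=1024)
    V, phase = np.abs(Z), np.angle(Z)
    W = np.concatenate([W_a, W_b], axis=1)   # joint dictionary, held fixed
    # Estimate activations H with multiplicative updates (W fixed).
    H = np.random.rand(W.shape[1], V.shape[1])
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    V_a = W_a @ H[:W_a.shape[1]]
    V_b = W_b @ H[W_a.shape[1]:]
    # Wiener-style soft masks applied to the mixture, reusing its phase.
    total = V_a + V_b + 1e-12
    sources = []
    for V_k in (V_a, V_b):
        _, x_k = istft((V_k / total) * V * np.exp(1j * phase),
                       fs=fs, nperseg=1024)
        sources.append(x_k)
    return sources

# Usage with the placeholder arrays:
# W_a = learn_dictionary(train_a, fs)
# W_b = learn_dictionary(train_b, fs)
# est_a, est_b = separate(mixture, fs, W_a, W_b)
```

Audio-visual variants of this idea condition the separation on visual cues (for example, lip movements), which is one of the fusion strategies the abstract mentions.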
Date: 22 May 2024, 17:30 (Wednesday, 5th week, Trinity 2024)
Venue: All Souls College, High Street, Oxford OX1 4AL
Venue Details: Hovenden Room and Zoom
Speakers: Speaker to be announced
Organising department: Oxford School of Global and Area Studies
Organisers: Megi Kartsivadze (University of Oxford), Dr Anna Wilson (IMCC, University of Oxford)
Organiser contact email address: megikartsivadze@gmail.com
Part of: IMCC Seminar Talks
Booking required?: Not required
Booking email: megikartsivadze@gmail.com
Audience: Public
Editor: Megi Kartsivadze