A Closer Look at Audio-Visual Multi-Person Speech Recognition and Active Speaker Selection. (arXiv:2205.05684v1 [eess.AS])

Audio-visual automatic speech recognition is a promising approach to robust
ASR under noisy conditions. However, until recently it was studied in
isolation, under the assumption that the video of a single speaking face
matches the audio; selecting the active speaker at inference time when
multiple people are on screen was set aside as a separate problem. As an
alternative, recent work has proposed to address the two problems
simultaneously with an attention mechanism, baking the speaker selection
problem directly into a fully differentiable model. One interesting finding was
that the attention indirectly learns the association between the audio and the
speaking face even though this correspondence is never explicitly provided at
training time. In the present work we further investigate this connection and
examine the interplay between the two problems. With experiments involving over
50 thousand hours of public YouTube videos as training data, we first evaluate
the accuracy of the attention layer on an active speaker selection task.
Secondly, we show under closer scrutiny that an end-to-end model performs at
least as well as a considerably larger two-step system that utilizes a hard
decision boundary, under various noise conditions and numbers of parallel face
tracks.

Source: https://arxiv.org/abs/2205.05684
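The soft speaker-selection idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the dot-product scoring, the embedding dimensions, and the function names are all assumptions. The point is that scoring each candidate face track against the audio and combining them with softmax weights keeps the selection step fully differentiable, so it can be trained end-to-end with the recognizer.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend_faces(audio_emb, face_embs):
    """Soft attention over candidate face tracks (illustrative sketch).

    audio_emb: (dim,) audio embedding.
    face_embs: (n_faces, dim) one embedding per on-screen face track.
    Returns the attention weights and the weighted visual feature.
    A hard argmax over the scores would be a non-differentiable
    two-step selection; the softmax keeps everything differentiable.
    """
    scores = face_embs @ audio_emb        # (n_faces,) audio-visual match scores
    weights = softmax(scores)             # soft speaker-selection weights
    visual_feature = weights @ face_embs  # (dim,) weighted combination
    return weights, visual_feature

# Toy example: 3 candidate face tracks, 4-dimensional embeddings.
rng = np.random.default_rng(0)
audio = rng.standard_normal(4)
faces = rng.standard_normal((3, 4))
w, v = attend_faces(audio, faces)
```

At inference, the argmax of `w` can be read off as the predicted active speaker, which is how the attention layer can be evaluated on a speaker-selection task even though no explicit audio-to-face labels were provided during training.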
