Prompting Segmentation with Sound is Generalizable Audio-Visual Source Localizer. (arXiv:2309.07929v1 [cs.CV])

Never having seen an object and heard its sound simultaneously, can a model
still accurately localize the object's visual position from input audio alone? In
this work, we concentrate on Audio-Visual Localization and Segmentation under
the demanding zero-shot and few-shot scenarios. Unlike existing approaches,
which mostly adopt an encoder-fusion-decoder paradigm that decodes localization
information from fused audio-visual features, we introduce an
encoder-prompt-decoder paradigm that draws on the abundant knowledge in
pre-trained models to better cope with data scarcity and varying data
distributions. Specifically, we first construct a Semantic-aware Audio Prompt
(SAP) that helps the visual foundation model focus on sounding objects while
also shrinking the semantic gap between the visual and audio modalities. We
then develop a Correlation Adapter (ColA) that keeps training effort minimal
while preserving the knowledge of the visual foundation model. Equipped with
these components, the new paradigm outperforms fusion-based methods in both
unseen-class and cross-dataset settings, as extensive experiments demonstrate.
We hope this work further promotes the study of generalization in Audio-Visual
Localization and Segmentation in practical application scenarios.

Source: https://arxiv.org/abs/2309.07929
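
The abstract does not give implementation details, so the sketch below is only one plausible way to realize the encoder-prompt-decoder paradigm it describes: a frozen visual foundation encoder steered by audio-derived prompt tokens (SAP) and a small trainable adapter (ColA), followed by a lightweight mask head. The module names, dimensions, and prompt-prepending strategy here are assumptions for illustration, not the authors' implementation.

```python
# Minimal, illustrative sketch of an encoder-prompt-decoder localizer.
# Assumptions: SAP = audio embedding projected into prompt tokens that are
# prepended to visual patch tokens; ColA = a bottleneck adapter that is the
# only trainable part besides SAP; the visual backbone stays frozen.
import torch
import torch.nn as nn


class SemanticAudioPrompt(nn.Module):
    """Projects a pooled audio embedding into a few prompt tokens (assumed SAP)."""
    def __init__(self, audio_dim=128, vis_dim=768, num_prompts=4):
        super().__init__()
        self.proj = nn.Linear(audio_dim, vis_dim * num_prompts)
        self.num_prompts, self.vis_dim = num_prompts, vis_dim

    def forward(self, audio_emb):                        # (B, audio_dim)
        p = self.proj(audio_emb)                         # (B, num_prompts * vis_dim)
        return p.view(-1, self.num_prompts, self.vis_dim)


class CorrelationAdapter(nn.Module):
    """Lightweight bottleneck adapter (assumed ColA); the frozen backbone is untouched."""
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down, self.up = nn.Linear(dim, bottleneck), nn.Linear(bottleneck, dim)

    def forward(self, tokens):                           # (B, N, dim)
        return tokens + self.up(torch.relu(self.down(tokens)))


class PromptedLocalizer(nn.Module):
    """Encoder-prompt-decoder: audio prompts steer a frozen visual encoder,
    and a small head predicts per-patch sounding-object mask logits."""
    def __init__(self, vis_dim=768):
        super().__init__()
        # Stand-in for a pre-trained, frozen visual foundation encoder.
        self.visual_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=vis_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        for p in self.visual_encoder.parameters():
            p.requires_grad = False
        self.sap = SemanticAudioPrompt(vis_dim=vis_dim)
        self.cola = CorrelationAdapter(dim=vis_dim)
        self.mask_head = nn.Linear(vis_dim, 1)

    def forward(self, vis_tokens, audio_emb):
        prompts = self.sap(audio_emb)                    # (B, P, D)
        x = torch.cat([prompts, vis_tokens], dim=1)      # prepend audio prompts
        x = self.visual_encoder(x)                       # frozen backbone
        x = self.cola(x)                                 # trainable adapter
        patch_tokens = x[:, prompts.shape[1]:]           # drop prompt tokens
        return self.mask_head(patch_tokens).squeeze(-1)  # (B, num_patches) logits


if __name__ == "__main__":
    model = PromptedLocalizer()
    vis = torch.randn(2, 196, 768)                       # e.g. 14x14 ViT patch tokens
    aud = torch.randn(2, 128)                            # pooled audio embedding
    print(model(vis, aud).shape)                         # torch.Size([2, 196])
```

Only the SAP projection, ColA adapter, and mask head carry gradients in this sketch, which is the point of the paradigm: adapt a frozen visual foundation model to audio-conditioned localization with minimal trainable parameters.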
