Hierarchical Audio-Visual Information Fusion with Multi-label Joint Decoding for MER 2023

In this paper, we propose a novel framework for recognizing both discrete and
dimensional emotions. In our framework, deep features extracted from foundation
models serve as robust acoustic and visual representations of the raw video.
Three different structures based on attention-guided feature gathering (AFG)
are designed for deep feature fusion. In the decoding stage, we introduce a
joint structure for emotion classification and valence regression, and a
multi-task loss based on uncertainty is designed to optimize the whole process.
Finally, by combining the three structures at the posterior-probability level,
we obtain the final predictions of discrete and dimensional emotions. Tested on
the Multimodal Emotion Recognition Challenge (MER 2023) dataset, the proposed
framework yields consistent improvements in both emotion classification and
valence regression. Our final system achieves state-of-the-art performance and
ranks third on the leaderboard of the MER-MULTI sub-challenge.
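
As a rough illustration of the fusion step, attention-guided feature gathering
can be read as attention-weighted pooling over frame-level features. The
PyTorch sketch below makes that reading concrete; the module name, the
single-linear scoring head, and the tensor shapes are assumptions, since the
abstract does not describe the AFG internals.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Attention-weighted pooling over frame-level features.

    A generic stand-in for attention-guided feature gathering (AFG);
    the paper's exact AFG design is not given in the abstract.
    """
    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # scalar relevance per frame

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, feat_dim) frame-level acoustic/visual features
        weights = torch.softmax(self.score(x), dim=1)  # (batch, frames, 1)
        return (weights * x).sum(dim=1)                # (batch, feat_dim)
```

For example, with 1024-dimensional foundation-model features,
`AttentionPooling(1024)` collapses a `(batch, frames, 1024)` sequence into a
single `(batch, 1024)` utterance-level representation.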

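The "multi-task loss based on uncertainty" is commonly realized as
homoscedastic-uncertainty weighting (Kendall et al., 2018); the sketch below
assumes that formulation, which the paper may refine. The posterior-level
combination of the three structures is likewise sketched as a plain average of
class posteriors, with the equal weighting being an assumption.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Uncertainty-based weighting of the two task losses.

    Assumed formulation (Kendall et al., 2018): each task loss L_i is
    scaled by exp(-s_i) for a learned log-variance s_i, with s_i added
    as a regularizer, so classification and regression balance themselves.
    """
    def __init__(self):
        super().__init__()
        self.s_cls = nn.Parameter(torch.zeros(()))  # log sigma^2, classification
        self.s_reg = nn.Parameter(torch.zeros(()))  # log sigma^2, valence regression

    def forward(self, loss_cls: torch.Tensor, loss_reg: torch.Tensor) -> torch.Tensor:
        return (torch.exp(-self.s_cls) * loss_cls + self.s_cls
                + torch.exp(-self.s_reg) * loss_reg + self.s_reg)

def fuse_posteriors(p1, p2, p3):
    """Posterior-level fusion of the three AFG-based structures:
    average the class posteriors, then argmax for the discrete label."""
    fused = torch.stack([p1, p2, p3]).mean(dim=0)  # (batch, num_classes)
    return fused, fused.argmax(dim=-1)
```
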
Source: https://arxiv.org/abs/2309.07925
