Zero-shot personalized lip-to-speech synthesis with face image based voice control. (arXiv:2305.14359v1 [cs.MM])
Lip-to-Speech (Lip2Speech) synthesis, which predicts corresponding speech
from talking face images, has witnessed significant progress with various
models and training strategies in a series of independent studies. However,
existing studies cannot achieve voice control under zero-shot conditions, because
the extra speaker embeddings they require must be extracted from natural reference
speech, which is unavailable when only a silent video of an unseen speaker is given.
In this paper, we propose a zero-shot personalized Lip2Speech synthesis
method, in which face images control speaker identities. A variational
autoencoder is adopted to disentangle the speaker identity and linguistic
content representations, which enables speaker embeddings to control the voice
characteristics of synthetic speech for unseen speakers. Furthermore, we
propose associated cross-modal representation learning to improve the ability of
face-based speaker embeddings (FSE) to control voice characteristics. Extensive experiments
verify the effectiveness of the proposed method: its synthetic utterances are more
natural and better matched to the personality of the input video than those of the
compared methods. To the best of our knowledge, this paper makes the first attempt
at zero-shot personalized Lip2Speech synthesis that uses a face image, rather than
reference audio, to control voice characteristics.
Source: https://arxiv.org/abs/2305.14359
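The abstract does not give implementation details, so the following is only a minimal
illustrative sketch of the general idea it describes: a face encoder producing a
speaker embedding (FSE), a VAE-style content encoder over lip features, and a decoder
conditioned on both. All module names, layer sizes, and tensor shapes below are
assumptions for illustration, not the paper's actual architecture; PyTorch is assumed.

```python
import torch
import torch.nn as nn

class FaceSpeakerEncoder(nn.Module):
    """Maps a face image to a speaker embedding (FSE). Hypothetical layer sizes."""
    def __init__(self, emb_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, emb_dim)

    def forward(self, face):                 # face: (B, 3, H, W)
        return self.proj(self.backbone(face))

class ContentVAE(nn.Module):
    """VAE-style encoder for linguistic content from lip features; the speaker
    embedding is supplied separately so content and identity stay disentangled."""
    def __init__(self, feat_dim=512, latent_dim=128):
        super().__init__()
        self.mu = nn.Linear(feat_dim, latent_dim)
        self.logvar = nn.Linear(feat_dim, latent_dim)

    def forward(self, lip_feats):            # lip_feats: (B, T, feat_dim)
        mu, logvar = self.mu(lip_feats), self.logvar(lip_feats)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl

class Lip2SpeechDecoder(nn.Module):
    """Decodes mel-spectrogram frames from content latents conditioned on the FSE."""
    def __init__(self, latent_dim=128, emb_dim=256, n_mels=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + emb_dim, 512), nn.ReLU(),
            nn.Linear(512, n_mels),
        )

    def forward(self, content, spk_emb):     # content: (B, T, latent), spk_emb: (B, emb)
        spk = spk_emb.unsqueeze(1).expand(-1, content.size(1), -1)
        return self.net(torch.cat([content, spk], dim=-1))

# Zero-shot usage: at inference only the silent video of an unseen speaker is needed;
# a face image replaces the reference audio usually required for speaker embeddings.
face = torch.randn(2, 3, 112, 112)           # cropped face images
lip_feats = torch.randn(2, 75, 512)          # e.g. 75 frames of visual lip features
fse = FaceSpeakerEncoder()(face)
content, kl_loss = ContentVAE()(lip_feats)
mel = Lip2SpeechDecoder()(content, fse)      # (2, 75, 80) predicted mel frames
```

The design choice mirrored here is the one the abstract emphasizes: identity enters the
decoder only through the face-derived embedding, so swapping the face image swaps the
voice characteristics without touching the linguistic content latents.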