Weakly-supervised forced alignment of disfluent speech using phoneme-level modeling (arXiv:2306.00996v1 [eess.AS])

The study of speech disorders can benefit greatly from time-aligned data.
However, audio-text mismatches in disfluent speech cause rapid performance
degradation for modern speech aligners, hindering the use of automatic
approaches. In this work, we propose a simple and effective modification to the
alignment graph construction of CTC-based models using Weighted Finite-State
Transducers (WFSTs). The proposed weakly-supervised approach alleviates the need
for verbatim transcription of speech disfluencies for forced alignment. During
graph construction, we allow the modeling of common speech disfluencies, i.e.,
repetitions and omissions. Further, we show that by assessing the degree of
audio-text mismatch through the Oracle Error Rate, our method can be
effectively used in the wild. Our evaluation on a corrupted version of the
TIMIT test set and the UCLASS dataset shows significant improvements,
particularly for recall, achieving a 23-25% relative improvement over our
baselines.
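
To make the graph modification concrete, here is a minimal, self-contained sketch of a disfluency-tolerant alignment graph. It is not the paper's implementation (which builds WFSTs for composition with CTC emission lattices); the `Arc` type, the penalty values, and the helper name `build_disfluency_graph` are illustrative assumptions. The idea: alongside the usual left-to-right arc for each canonical phoneme, add a self-loop that tolerates repetitions and an epsilon skip arc that tolerates omissions.

```python
from dataclasses import dataclass

@dataclass
class Arc:
    src: int       # source state
    dst: int       # destination state
    label: str     # phoneme emitted on this arc ("<eps>" = emit nothing)
    weight: float  # penalty in -log space; 0.0 = free

def build_disfluency_graph(phonemes, repeat_penalty=1.0, skip_penalty=2.0):
    """Sketch of a left-to-right alignment graph over the canonical
    phoneme sequence, augmented with disfluency arcs:
      - a self-loop on each state models repetitions of that phoneme,
      - an epsilon arc past each state models omissions.
    States 0..N form a chain; state N is final.  Penalty values are
    illustrative, not taken from the paper."""
    arcs = []
    for i, p in enumerate(phonemes):
        arcs.append(Arc(i, i + 1, p, 0.0))                # canonical path
        arcs.append(Arc(i, i, p, repeat_penalty))         # repetition of p
        arcs.append(Arc(i, i + 1, "<eps>", skip_penalty)) # omission of p
    return arcs, len(phonemes)  # arcs plus the final-state id

# e.g. for "ball": a path may emit "b b ao l" (repetition) or "b l" (omission)
arcs, final_state = build_disfluency_graph(["b", "ao", "l"])
```

In a real system this graph would be compiled as a WFST and composed with the CTC emission lattice, so the best path yields time stamps even when the speaker repeats or drops phonemes relative to the transcript.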
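The Oracle Error Rate gate can be sketched in the same spirit. The abstract does not define the exact computation, so the version below, a normalized edit distance between the canonical phoneme transcript and an unconstrained decode, is an assumed stand-in, and the threshold value is purely illustrative.

```python
def edit_distance(ref, hyp):
    """Standard Levenshtein distance via dynamic programming."""
    n, m = len(ref), len(hyp)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[n][m]

def oracle_error_rate(canonical, decoded):
    """Assumed proxy for audio-text mismatch: phoneme-level error rate
    of an unconstrained decode against the canonical transcript."""
    return edit_distance(canonical, decoded) / max(1, len(canonical))

# Hypothetical gate: only force-align utterances whose mismatch is small
OER_THRESHOLD = 0.5  # illustrative value, not from the paper
def should_force_align(canonical, decoded):
    return oracle_error_rate(canonical, decoded) < OER_THRESHOLD
```

Gating on a mismatch estimate like this is what lets the method run "in the wild": utterances whose transcripts diverge too far from the audio can be flagged for manual handling instead of being silently misaligned.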

Source: https://arxiv.org/abs/2306.00996
