Transductive Learning for Abstractive News Summarization. (arXiv:2104.09500v1 [cs.CL])

Pre-trained language models have recently advanced abstractive summarization.
These models are typically fine-tuned on human-written references before
generating summaries at test time. In this work, we propose the first
application of transductive learning to summarization. In this paradigm, a
model can learn from the test set's input before inference. To perform
transduction, we propose to construct references for test-time learning from
the summarizing sentences of the input documents. These sentences are often
compressed and fused to form abstractive summaries, and they provide omitted
details and additional context to the reader. We show that our approach yields
state-of-the-art results on the CNN/DM and NYT datasets; for instance, we
achieve an improvement of over 1 ROUGE-L point on CNN/DM. We further show the
benefits of transduction from older to more recent news. Finally, through human
and automatic evaluation, we show that the resulting summaries are more
abstractive and coherent.

Source: https://arxiv.org/abs/2104.09500
