Unsupervised Contextual Paraphrase Generation using Lexical Control and Reinforcement Learning. (arXiv:2103.12777v1 [cs.CL])

Customer support via chat requires agents to resolve customer queries with
minimum wait time and maximum customer satisfaction. Given that the agents as
well as the customers can have varying levels of literacy, the overall quality
of responses provided by the agents tends to be poor if the responses are not predefined.
But using only static responses can lead to customer detraction as the
customers tend to feel that they are no longer interacting with a human. Hence,
it is vital to have variations of the static responses to reduce the monotony
of the responses. However, maintaining a list of such variations can be
expensive. Given the conversation context and the agent response, we propose an
unsupervised framework to generate contextual paraphrases using autoregressive
models. We also propose an automated metric based on Semantic Similarity,
Textual Entailment, Expression Diversity and Fluency to evaluate the quality of
contextual paraphrases and demonstrate performance improvement with
Reinforcement Learning (RL) fine-tuning using the automated metric as the
reward function.

Source: https://arxiv.org/abs/2103.12777
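
To make the last part of the abstract concrete, the sketch below combines per-sample scores for semantic similarity, textual entailment, expression diversity and fluency into a single scalar reward and uses it in one REINFORCE-style policy-gradient step on a toy autoregressive model. The equal weights, the `TinyPolicy` stand-in generator, and the placeholder component scores are assumptions for illustration only; the abstract does not specify how the components are combined or which autoregressive model is fine-tuned.

```python
import torch
import torch.nn as nn

def paraphrase_reward(sem_sim, entailment, diversity, fluency,
                      weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine the four metric components into one scalar reward.

    All inputs are assumed to lie in [0, 1]; the equal weights are a
    placeholder -- the abstract does not state the combination scheme.
    """
    w1, w2, w3, w4 = weights
    return w1 * sem_sim + w2 * entailment + w3 * diversity + w4 * fluency

class TinyPolicy(nn.Module):
    """Stand-in autoregressive generator (embedding -> GRU -> vocab logits)."""
    def __init__(self, vocab_size=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, state=None):
        out, state = self.rnn(self.embed(tokens), state)
        return self.head(out), state

def sample_sequence(policy, bos_id=1, max_len=10):
    """Sample a token sequence and return it with its total log-probability."""
    tokens = torch.tensor([[bos_id]])
    state, log_probs, generated = None, [], []
    for _ in range(max_len):
        logits, state = policy(tokens[:, -1:], state)
        dist = torch.distributions.Categorical(logits=logits[:, -1])
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        generated.append(action.item())
        tokens = torch.cat([tokens, action.unsqueeze(0)], dim=1)
    return generated, torch.stack(log_probs).sum()

policy = TinyPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# One REINFORCE step: in practice the reward would come from scoring the
# decoded paraphrase against the conversation context and agent response
# with the four metric components; constants here are placeholders.
generated, log_prob = sample_sequence(policy)
reward = paraphrase_reward(sem_sim=0.9, entailment=0.8, diversity=0.6, fluency=0.85)
loss = -reward * log_prob  # policy-gradient objective (no baseline, for brevity)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a full setup the policy would be a pretrained autoregressive language model and each component score would come from its own model (e.g. a sentence-similarity encoder, an entailment classifier, a fluency scorer) rather than the constants used above.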
