Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI. (arXiv:2110.06223v1 [cs.CL])

Although neural models have shown strong performance on datasets such as
SNLI, they lack the ability to generalize out-of-distribution (OOD). In this
work, we formulate a few-shot learning setup and examine the effect of natural
language explanations on OOD generalization. We leverage the templates in the
HANS dataset and construct a templated natural language explanation for each
one. Although the generated explanations achieve competitive BLEU scores
against ground-truth explanations, they fail to improve prediction performance.
We further show that generated explanations often hallucinate information and
miss the key elements that indicate the label.
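The gap between BLEU and faithfulness noted above is easy to reproduce: BLEU rewards surface n-gram overlap, so an explanation that swaps the one entity that determines the label can still score highly. The sketch below is a minimal, self-contained sentence-level BLEU (smoothed n-gram precisions with a brevity penalty, not the paper's evaluation code), applied to a hypothetical ground-truth explanation and a hallucinated variant.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU sketch: smoothed n-gram precisions + brevity penalty."""
    c_tok, r_tok = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        c_ngrams = Counter(tuple(c_tok[i:i + n]) for i in range(len(c_tok) - n + 1))
        r_ngrams = Counter(tuple(r_tok[i:i + n]) for i in range(len(r_tok) - n + 1))
        overlap = sum((c_ngrams & r_ngrams).values())  # clipped matches
        total = max(sum(c_ngrams.values()), 1)
        log_precisions.append(math.log((overlap + 1) / (total + 1)))  # add-one smoothing
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(c_tok) >= len(r_tok) else math.exp(1 - len(r_tok) / max(len(c_tok), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

# Hypothetical explanations: the generated one hallucinates "lawyer" for "doctor",
# flipping the evidence for the label while leaving most n-grams intact.
reference = "the premise mentions the doctor so the hypothesis about the doctor is entailed"
generated = "the premise mentions the lawyer so the hypothesis about the lawyer is entailed"

score = bleu(generated, reference)
print(f"BLEU despite hallucinated entity: {score:.3f}")  # stays well above 0.5
```

Despite getting the label-critical entity wrong, the hallucinated explanation keeps a high BLEU score, which illustrates why BLEU alone cannot certify that an explanation captures the elements that indicate the label.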
