Interpretable Multi-Head Self-Attention model for Sarcasm Detection in social media. (arXiv:2101.05875v1 [cs.CL])

Sarcasm is a linguistic expression often used to communicate the opposite of
what is literally said, usually something unpleasant, with the intention to
insult or ridicule. The inherent ambiguity of sarcastic expressions makes
sarcasm detection very difficult. In this work, we focus on detecting sarcasm in
textual conversations from various social networking platforms and online
media. To this end, we develop an interpretable deep learning model using
multi-head self-attention and gated recurrent units. The multi-head
self-attention module helps identify crucial sarcastic cue-words in the input, and the
recurrent units learn long-range dependencies between these cue-words to better
classify the input text. We show the effectiveness of our approach by achieving
state-of-the-art results on multiple datasets from social networking platforms
and online media. Models trained with our approach are easily interpretable
and make it possible to identify the sarcastic cues in the input text that
contribute to the final classification score. We visualize the learned
attention weights on a few sample input texts to showcase the effectiveness and
interpretability of our model.
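
The abstract does not include code, but the described pipeline — token embeddings passed through multi-head self-attention, a recurrent layer over the attended representations, and a classification head — can be sketched roughly as below. This is a minimal PyTorch sketch; the embedding size, number of heads, hidden dimension, and the use of a bidirectional GRU are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch: embeddings -> multi-head self-attention -> GRU -> binary
# (sarcastic / not sarcastic) classifier. Dimensions and hyperparameters are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class SarcasmClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, num_heads=6, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Multi-head self-attention over the token sequence; its attention
        # weights can later be inspected to locate sarcastic cue-words.
        self.self_attention = nn.MultiheadAttention(
            embed_dim, num_heads, batch_first=True
        )
        # Recurrent layer capturing long-range dependencies between cue-words.
        self.gru = nn.GRU(
            embed_dim, hidden_dim, batch_first=True, bidirectional=True
        )
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids, padding_mask=None):
        x = self.embedding(token_ids)                     # (B, T, E)
        attended, attn_weights = self.self_attention(
            x, x, x, key_padding_mask=padding_mask
        )
        _, h_n = self.gru(attended)                       # h_n: (2, B, H)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)           # (B, 2H)
        logits = self.classifier(h).squeeze(-1)           # (B,)
        # attn_weights (B, T, T) can be visualized for interpretability.
        return logits, attn_weights
```

At inference time, the returned attention weights could be plotted as a heat map over the input tokens to highlight which words drive the sarcasm prediction, in the spirit of the interpretability visualizations described in the abstract.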

Source: https://arxiv.org/abs/2101.05875
