TextGNN: Improving Text Encoder via Graph Neural Network in Sponsored Search. (arXiv:2101.06323v1 [cs.CL])

Text encoders based on C-DSSM or transformers have demonstrated strong
performance in many Natural Language Processing (NLP) tasks. Low-latency
variants of these models have also been developed in recent years so that
they can be applied in sponsored search, which has strict computational
constraints. However, these models are not a panacea for all Natural
Language Understanding (NLU) challenges, as the purely semantic information
in the data is not sufficient to fully identify user intent. We propose the
TextGNN model, which naturally extends strong twin-tower structured encoders
with complementary graph information from users' historical behaviors; this
graph serves as a natural guide to better understanding intent and hence to
generating better language representations. The model inherits all the
benefits of twin-tower models such as C-DSSM and TwinBERT, so it can still
be used in low-latency environments while achieving a significant
performance gain over strong encoder-only baseline models in both offline
evaluations and the online production system. In offline experiments, the
model achieves a 0.14% overall increase in ROC-AUC and a 1% accuracy gain
for long-tail, low-frequency ads; in online A/B testing, it shows a 2.03%
increase in Revenue Per Mille and a 2.32% decrease in ad defect rate.
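The recipe the abstract describes can be made concrete with a small sketch:
each tower encodes its own text, aggregates the text embeddings of its
neighbors in the user-behavior graph, and the two tower outputs are compared
with a cosine score. The PyTorch code below is a minimal illustration under
assumed names (TextGNNTower, BagOfWordsEncoder) and an assumed
GraphSAGE-style mean aggregator; it is a sketch of the general twin-tower
plus GNN idea, not the authors' exact implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BagOfWordsEncoder(nn.Module):
        # Toy stand-in for a real C-DSSM/TwinBERT tower:
        # embed tokens, then mean-pool over the sequence.
        def __init__(self, vocab_size=10000, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden_dim)

        def forward(self, token_ids):  # (batch, seq_len) -> (batch, hidden)
            return self.embed(token_ids).mean(dim=1)

    class TextGNNTower(nn.Module):
        # One tower: encode the node's own text, mean-aggregate the encoded
        # texts of its behavior-graph neighbors, then combine the two.
        def __init__(self, text_encoder, hidden_dim=64):
            super().__init__()
            self.text_encoder = text_encoder
            self.combine = nn.Linear(2 * hidden_dim, hidden_dim)

        def forward(self, node_tokens, neighbor_tokens):
            # node_tokens: (batch, seq_len)
            # neighbor_tokens: (batch, num_neighbors, seq_len)
            h_self = self.text_encoder(node_tokens)
            b, k, s = neighbor_tokens.shape
            h_nbrs = self.text_encoder(neighbor_tokens.reshape(b * k, s))
            h_nbrs = h_nbrs.reshape(b, k, -1).mean(dim=1)  # neighbor aggregation
            return torch.tanh(self.combine(torch.cat([h_self, h_nbrs], dim=-1)))

    # Twin towers score a (query, ad) pair by cosine similarity.
    query_tower = TextGNNTower(BagOfWordsEncoder())
    ad_tower = TextGNNTower(BagOfWordsEncoder())
    q = query_tower(torch.randint(0, 10000, (2, 8)),
                    torch.randint(0, 10000, (2, 3, 8)))
    a = ad_tower(torch.randint(0, 10000, (2, 8)),
                 torch.randint(0, 10000, (2, 3, 8)))
    score = F.cosine_similarity(q, a, dim=-1)  # one score per pair

Because the ad tower's output does not depend on the query, ad-side vectors
can be precomputed and cached, and only the query tower runs at serving
time; this is the twin-tower property that keeps such models within
sponsored search latency budgets.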

Source: https://arxiv.org/abs/2101.06323
