Churn Reduction via Distillation. (arXiv:2106.02654v1 [cs.LG])

In real-world systems, models are frequently updated as more data becomes
available, and in addition to achieving high accuracy, the goal is also to
maintain a low difference in predictions relative to the base model (i.e.,
predictive "churn"). If retraining results in vastly different behavior, it
can cause negative effects in downstream systems, especially when this churn
could have been avoided with limited impact on model accuracy. In this paper,
we show an equivalence between training with distillation using the base model
as the teacher and training with an explicit constraint on predictive churn.
We then show that distillation performs strongly for low-churn training
against a number of recent baselines on a wide range of datasets and model
architectures, including fully-connected networks, convolutional networks,
and transformers.

Source: https://arxiv.org/abs/2106.02654
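
As a rough illustration of the idea (not the paper's exact objective), a distillation-style low-churn training loss can combine the usual cross-entropy on the labels with a KL term that pulls the new model's predictions toward those of the frozen base model. The `alpha` and `temperature` hyperparameters below, and the simple disagreement-rate churn metric, are illustrative assumptions rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def churn(new_logits: torch.Tensor, base_logits: torch.Tensor) -> torch.Tensor:
    """Fraction of examples where the new model's predicted label disagrees
    with the base model's prediction (one simple notion of predictive churn)."""
    return (new_logits.argmax(dim=1) != base_logits.argmax(dim=1)).float().mean()

def low_churn_loss(student_logits: torch.Tensor,
                   base_logits: torch.Tensor,
                   labels: torch.Tensor,
                   alpha: float = 0.5,
                   temperature: float = 1.0) -> torch.Tensor:
    """Illustrative distillation objective: cross-entropy on the labels plus a
    KL term toward the (frozen) base model's softened predictions."""
    ce = F.cross_entropy(student_logits, labels)
    # KL(base || student) on temperature-softened distributions;
    # kl_div expects log-probabilities for the first argument.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(base_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * ce + (1.0 - alpha) * kl
```

Setting `alpha = 1` recovers ordinary training on the labels; lowering it weights the distillation term more heavily, trading some freedom to change predictions for fewer flips relative to the base model.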
