Alternate Model Growth and Pruning for Efficient Training of Recommendation Systems. (arXiv:2105.01064v1 [cs.IR])

Deep learning recommendation systems at scale have achieved remarkable gains by increasing model capacity (i.e., wider and deeper neural networks), but this comes at significant training and infrastructure cost. Model pruning is an effective technique for reducing the computational overhead of deep neural networks by removing redundant parameters. However, modern recommendation systems still demand large model capacity to handle big data, so pruning a recommendation model at scale shrinks its capacity and consequently lowers its accuracy. To reduce computation cost without sacrificing model capacity, we propose a dynamic training scheme, namely alternate model growth and pruning, which alternately constructs and prunes weights over the course of training. Our method leverages structured sparsification to reduce computational cost without hurting model capacity at the end of offline training, so that a full-size model is available in the recurring training stage to learn new data in real time. To the best of our knowledge, this is the first work to provide in-depth experiments and discussion on applying structural dynamics to recommendation systems at scale to reduce training cost. The proposed method is validated on the open-source Deep Learning Recommendation Model (DLRM) and on state-of-the-art industrial-scale production models.

Source: https://arxiv.org/abs/2105.01064
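The abstract describes the scheme only at a high level. Below is a minimal PyTorch sketch of one plausible reading of it: structured (row-wise) pruning of MLP layers partway through training, followed by a growth phase that restores the pruned rows before offline training ends, so the final model is full-size. The class `MaskedMLP`, the methods `prune_rows`/`grow_rows`, and the schedule constants are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn as nn

class MaskedMLP(nn.Module):
    """MLP whose hidden neurons can be structurally pruned via row masks
    (hypothetical sketch, not the paper's implementation)."""

    def __init__(self, dims):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(i, o) for i, o in zip(dims[:-1], dims[1:])
        )
        # One {0,1} mask over the output rows (neurons) of each hidden layer.
        self.masks = [torch.ones(l.out_features) for l in self.layers[:-1]]

    def forward(self, x):
        for layer, mask in zip(self.layers[:-1], self.masks):
            # A masked row outputs zero and thus also receives zero gradient,
            # so it is effectively removed from training.
            x = torch.relu(layer(x)) * mask
        return self.layers[-1](x)

    def prune_rows(self, frac):
        """Pruning phase: drop the lowest-L2-norm rows in each hidden layer."""
        for layer, mask in zip(self.layers[:-1], self.masks):
            k = int(frac * layer.out_features)
            if k:
                norms = layer.weight.detach().norm(dim=1)
                mask[torch.topk(norms, k, largest=False).indices] = 0.0

    def grow_rows(self):
        """Growth phase: restore full capacity, reinitializing pruned rows."""
        for layer, mask in zip(self.layers[:-1], self.masks):
            dead = mask == 0.0
            if dead.any():
                with torch.no_grad():
                    fresh = torch.empty_like(layer.weight)
                    nn.init.kaiming_uniform_(fresh, a=5 ** 0.5)
                    layer.weight[dead] = fresh[dead]
                    layer.bias[dead] = 0.0
            mask.fill_(1.0)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = MaskedMLP([16, 64, 64, 1])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.randn(512, 16)                      # stand-in for real features
    y = (x.sum(dim=1, keepdim=True) > 0).float()  # stand-in for click labels

    for step in range(300):
        # Illustrative schedule: prune half the rows early to cut training
        # cost, then grow them back so the final model is full-size.
        if step == 50:
            model.prune_rows(frac=0.5)
        if step == 200:
            model.grow_rows()
        loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final loss: {loss.item():.4f}")
```

Note that the masks here only emulate structured sparsity; in a production system the pruned rows would be physically removed from the matrix multiplications to realize the training-cost savings the paper targets.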
