Truly Sparse Neural Networks at Scale. (arXiv:2102.01732v1 [cs.LG])

Recently, sparse training methods have started to become the de facto
approach for training and inference efficiency in artificial neural
networks. Yet, this efficiency holds only in theory. In practice, everyone
uses a binary mask to simulate sparsity, since typical deep learning
software and hardware are optimized for dense matrix operations. In this
paper, we take an orthogonal approach and show that we can train truly
sparse neural networks to harvest their full potential. To achieve this
goal, we introduce three novel contributions, specially designed for sparse
neural networks: (1) a parallel training algorithm and its corresponding
sparse implementation from scratch, (2) an activation function with
non-trainable parameters to favour gradient flow, and (3) a hidden-neuron
importance metric to eliminate redundancies. Taken together, these allow us
to break the record and train the largest neural network ever trained in
terms of representational power, reaching the size of a bat brain. The
results show that our approach achieves state-of-the-art performance while
opening the path toward an environmentally friendly artificial intelligence
era.
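
To make the mask-versus-truly-sparse distinction concrete, here is a minimal
sketch (not the paper's implementation; the layer sizes, 5% density, and use
of SciPy CSR matrices are illustrative assumptions) showing how a truly
sparse weight matrix avoids the dense storage and dense compute that a
binary mask still pays for:

```python
# Minimal sketch (assumed setup, not the authors' code): a binary-mask
# "simulated" sparse layer versus a truly sparse layer stored in CSR form.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n_in, n_out, density = 1024, 512, 0.05       # ~5% of the weights are non-zero

# Binary-mask approach: zeroed weights are still stored and still multiplied.
dense_w = rng.standard_normal((n_out, n_in))
mask = rng.random((n_out, n_in)) < density
masked_w = dense_w * mask                    # dense storage, dense compute

# Truly sparse approach: only the non-zero weights and their indices exist.
sparse_w = sparse.csr_matrix(masked_w)

x = rng.standard_normal((32, n_in))          # a batch of 32 inputs

dense_out = x @ masked_w.T                   # cost grows with n_in * n_out
sparse_out = (sparse_w @ x.T).T              # cost grows with the non-zeros only

assert np.allclose(dense_out, sparse_out)
print("dense weight storage :", masked_w.nbytes, "bytes")
print("sparse weight storage:",
      sparse_w.data.nbytes + sparse_w.indices.nbytes + sparse_w.indptr.nbytes,
      "bytes")
```

Both forward passes produce the same activations, but the CSR version stores
and touches only the surviving connections, which is the kind of saving the
paper aims to realize in practice rather than in theory.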

Source: https://arxiv.org/abs/2102.01732
