Scaling Vision with Sparse Mixture of Experts. (arXiv:2106.05974v1 [cs.CV])

Sparsely-gated Mixture of Experts networks (MoEs) have demonstrated excellent
scalability in Natural Language Processing. In Computer Vision, however, almost
all performant networks are “dense”, that is, every input is processed by every
parameter. We present a Vision MoE (V-MoE), a sparse version of the Vision
Transformer, that is scalable and competitive with the largest dense networks.
When applied to image recognition, V-MoE matches the performance of
state-of-the-art networks, while requiring as little as half of the compute at
inference time. Further, we propose an extension to the routing algorithm that
can prioritize subsets of each input across the entire batch, leading to
adaptive per-image compute. This allows V-MoE to trade off performance and
compute smoothly at test time. Finally, we demonstrate the potential of V-MoE
to scale vision models, and train a 15B parameter model that attains 90.35% on
ImageNet.

Source: https://arxiv.org/abs/2106.05974
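To make the abstract's description of sparsely-gated routing concrete, here is a minimal sketch of a top-k MoE layer over image tokens. This is not the authors' implementation; the names (gate_w, expert_ws, k) and the toy single-matrix "experts" are illustrative assumptions. The point it shows is that only k experts run per token, so compute stays roughly constant while total parameter count grows with the number of experts.

```python
# Minimal sketch of sparsely-gated top-k expert routing (illustrative only).
import numpy as np

def moe_layer(tokens, gate_w, expert_ws, k=2):
    """Route each token to its top-k experts and combine their outputs.

    tokens:    (num_tokens, d)    token representations (e.g. image patches)
    gate_w:    (d, num_experts)   router / gating weights
    expert_ws: list of (d, d) matrices, one toy "expert" each
    """
    logits = tokens @ gate_w                             # (num_tokens, num_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)           # softmax gating weights

    out = np.zeros_like(tokens)
    topk = np.argsort(-probs, axis=-1)[:, :k]            # top-k experts per token
    for t in range(tokens.shape[0]):
        for e in topk[t]:
            # Only k experts run per token: compute is sparse even though
            # parameters scale with the number of experts.
            out[t] += probs[t, e] * (tokens[t] @ expert_ws[e])
    return out

# Toy usage: 8 tokens of width 16, routed over 4 experts, top-2.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 16))
gate_w = rng.normal(size=(16, 4))
expert_ws = [rng.normal(size=(16, 16)) * 0.1 for _ in range(4)]
print(moe_layer(tokens, gate_w, expert_ws).shape)        # (8, 16)
```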
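The routing extension mentioned in the abstract lets tokens from the whole batch compete for a limited compute budget, so easy images end up using fewer expert evaluations than hard ones. The sketch below captures only that prioritization idea under assumed names (capacity_ratio, router_scores); it is not the paper's exact algorithm.

```python
# Rough sketch of batch-level token prioritization (illustrative assumptions).
import numpy as np

def prioritize_batch(router_scores, capacity_ratio=0.5):
    """Return a boolean mask of tokens kept for expert processing.

    router_scores:  (batch, tokens_per_image) top routing score per token
    capacity_ratio: fraction of all tokens in the batch that get compute
    """
    flat = router_scores.ravel()
    budget = int(np.ceil(capacity_ratio * flat.size))
    keep_idx = np.argsort(-flat)[:budget]                # highest scores first
    mask = np.zeros(flat.size, dtype=bool)
    mask[keep_idx] = True
    return mask.reshape(router_scores.shape)

# Per-image compute adapts: images whose tokens score low keep fewer of them.
rng = np.random.default_rng(1)
scores = rng.random(size=(4, 6))                         # 4 images, 6 tokens each
mask = prioritize_batch(scores, capacity_ratio=0.5)
print(mask.sum(axis=1))                                  # kept tokens per image
```

Lowering capacity_ratio in this sketch corresponds to spending less compute per batch, which is the knob the abstract refers to when it says performance and compute can be traded off smoothly at test time.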
