Compressive Neural Representations of Volumetric Scalar Fields. (arXiv:2104.04523v1 [cs.LG])

We present an approach for compressing volumetric scalar fields using
implicit neural representations. Our approach represents a scalar field as a
learned function, wherein a neural network maps a point in the domain to an
output scalar value. By setting the number of weights of the neural network to
be smaller than the input size, we achieve compressed representations of scalar
fields, thus framing compression as a type of function approximation. Combined
with careful quantization of the network weights, we show that this approach
yields highly compact representations that outperform state-of-the-art volume
compression methods. The conceptual simplicity of our approach enables a
number of benefits, such as support for time-varying scalar fields, optimizing
to preserve spatial gradients, and random-access field evaluation. We study the
impact of network design choices on compression performance, highlighting how
simple network architectures are effective for a broad range of volumes.
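
As a concrete illustration of the idea, here is a minimal PyTorch sketch of an implicit neural representation fit to volume samples. It is a hypothetical toy, not the paper's exact architecture: the hidden width, depth, SIREN-style sine activations, and the stand-in `coords`/`values` tensors are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class CoordinateNet(nn.Module):
    """Maps a 3D point to a scalar value (illustrative architecture)."""
    def __init__(self, hidden=64, depth=4):
        super().__init__()
        dims = [3] + [hidden] * depth + [1]
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)
        )

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = torch.sin(layer(x))   # periodic activation helps fit high-frequency detail
        return self.layers[-1](x)     # linear output: the predicted scalar

# Fit the network to samples of the volume; here, compression *is* training.
net = CoordinateNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
coords = torch.rand(4096, 3) * 2 - 1                       # stand-in for grid positions in [-1, 1]^3
values = torch.sin(8 * coords.norm(dim=-1, keepdim=True))  # stand-in scalar field samples
for step in range(200):
    opt.zero_grad()
    loss = ((net(coords) - values) ** 2).mean()  # reconstruction MSE
    loss.backward()
    opt.step()
```

Under this framing, the achievable compression ratio is roughly the number of voxels divided by the weight count: a 256^3 float32 volume holds about 16.8 million values, so a network with roughly 168 thousand weights at the same precision corresponds to about 100:1, before weight quantization tightens it further.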
