Sparse Implicit Processes for Approximate Inference. (arXiv:2110.07618v1 [stat.ML])

Implicit Processes (IPs) are flexible priors that can describe models such as
Bayesian neural networks, neural samplers, and data generators. IPs allow for
approximate inference in function space. This avoids the degenerate solutions
that parameter-space approximate inference can produce because of the large
number of parameters and the strong dependencies among them. For function-space
inference, an extra IP is often used to approximate the posterior of the prior
IP. However, simultaneously adjusting the parameters of the prior IP and of the
approximate posterior IP is challenging. Existing methods that can tune the
prior IP result in a Gaussian predictive distribution, which fails to capture
important data patterns. By contrast, methods that produce flexible predictive
distributions by using another IP to approximate the posterior process cannot
fit the prior IP to the observed data. We propose here a method that can carry
out both tasks. To do so, we rely on an inducing-point representation of the
prior IP, as is often done in the context of sparse Gaussian processes. The
result is a scalable method for approximate inference with IPs that can tune
the prior IP parameters to the data and that provides accurate non-Gaussian
predictive distributions.
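
The inducing-point construction the abstract refers to is the one used in
sparse Gaussian process regression: a small set of M inducing inputs summarizes
the N observations, so all expensive linear algebra is M x M rather than N x N.
As a rough illustration of that machinery only (not the paper's actual IP
algorithm, whose prior is implicit rather than Gaussian), here is a minimal
NumPy sketch of Titsias-style sparse GP predictive equations; the kernel,
function names, and toy data are all assumptions made for illustration.

    import numpy as np

    def rbf(a, b, lengthscale=1.0, variance=1.0):
        # Squared-exponential kernel matrix between point sets a (N, D) and b (M, D).
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

    def sparse_gp_predict(X, y, Z, Xs, noise=0.1, jitter=1e-6):
        # Sparse GP predictive mean/variance with inducing inputs Z
        # (Titsias-style equations; a stand-in for the role the inducing
        # points play for the prior IP in the paper). Every solve below
        # is M x M, replacing the N x N solve of an exact GP.
        Kmm = rbf(Z, Z) + jitter * np.eye(len(Z))
        Kmn = rbf(Z, X)
        Ksm = rbf(Xs, Z)
        Kss = rbf(Xs, Xs)
        Sigma = Kmm + Kmn @ Kmn.T / noise**2
        mean = Ksm @ np.linalg.solve(Sigma, Kmn @ y) / noise**2
        cov = (Kss
               - Ksm @ np.linalg.solve(Kmm, Ksm.T)   # Nystrom correction
               + Ksm @ np.linalg.solve(Sigma, Ksm.T))
        return mean, np.diag(cov)

    # Toy usage: N = 200 noisy observations summarized by M = 15 inducing points.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    Z = np.linspace(-3.0, 3.0, 15)[:, None]
    Xs = np.linspace(-3.0, 3.0, 50)[:, None]
    mu, var = sparse_gp_predict(X, y, Z, Xs)

In the Gaussian case above the posterior is available in closed form; in the
paper's setting the prior process is implicit, so these solves would be replaced
by an optimized variational objective, with the M inducing points again acting
as a compact summary of the prior process.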

Source: https://arxiv.org/abs/2110.07618
