Come-Closer-Diffuse-Faster: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction. (arXiv:2112.05146v1 [eess.IV])

Diffusion models have recently attracted significant interest within the
community owing to their strong performance as generative models. Furthermore,
their application to inverse problems has demonstrated state-of-the-art
performance. Unfortunately, diffusion models have a critical downside: they
are inherently slow to sample from, requiring a few thousand iterations to
generate images from pure Gaussian noise. In this work, we show that starting
from Gaussian noise is unnecessary. Instead, starting from a single forward
diffusion of a better initialization significantly reduces the number of
sampling steps in the reverse conditional diffusion. This phenomenon is
formally explained by the contraction theory of stochastic difference
equations such as our conditional diffusion strategy: an alternation of a
reverse diffusion step with a non-expansive data-consistency step. The new
sampling strategy, dubbed Come-Closer-Diffuse-Faster (CCDF), also reveals new
insight into how existing feed-forward neural network approaches for inverse
problems can be synergistically combined with diffusion models. Experimental
results on super-resolution, image inpainting, and compressed sensing MRI
demonstrate that our method achieves state-of-the-art reconstruction
performance with significantly fewer sampling steps.
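For concreteness, here is a minimal sketch of the CCDF sampling loop in DDPM notation, using image inpainting as the inverse problem. The noise predictor `eps_model`, the observed-pixel `mask` convention, and the `betas` schedule are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def ccdf_sample(eps_model, y, mask, betas, t0):
    """CCDF-style sampling sketch: forward-diffuse an initial estimate once,
    then alternate a DDPM reverse step with a non-expansive data-consistency
    projection onto {x : mask * x = mask * y}. (Hypothetical names throughout.)"""
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)  # cumulative product \bar{alpha}_t

    # Better initialization: here simply the measurement y (a feed-forward
    # network output would also do), forward-diffused once to time t0 < T.
    x = abar[t0].sqrt() * y + (1 - abar[t0]).sqrt() * torch.randn_like(y)

    for t in range(t0, -1, -1):
        t_batch = torch.full((y.shape[0],), t, dtype=torch.long, device=y.device)
        eps = eps_model(x, t_batch)  # predicted noise at step t

        # Standard DDPM ancestral (reverse diffusion) step.
        mean = (x - (1 - alphas[t]) / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(y) if t > 0 else mean

        # Non-expansive data-consistency step: overwrite the observed pixels
        # with a copy of the measurement noised to the current level.
        if t > 0:
            y_t = abar[t - 1].sqrt() * y + (1 - abar[t - 1]).sqrt() * torch.randn_like(y)
            x = mask * y_t + (1 - mask) * x
        else:
            x = mask * y + (1 - mask) * x
    return x
```

The intuition behind starting at t0 well below the full step count T is that each iteration pairs a contractive reverse step with a non-expansive projection, so the initialization error shrinks geometrically along the trajectory rather than needing to be diffused away from pure noise.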

Source: https://arxiv.org/abs/2112.05146
