GAR: Generalized Autoregression for Multi-Fidelity Fusion. (arXiv:2301.05729v1 [stat.ML])

In many scientific research and engineering applications where repeated
simulations of complex systems are conducted, a surrogate is commonly adopted
to quickly estimate the whole system. To reduce the expensive cost of
generating training examples, it has become a promising approach to combine the
results of low-fidelity (fast but inaccurate) and high-fidelity (slow but
accurate) simulations. Despite the fast developments of multi-fidelity fusion
techniques, most existing methods require particular data structures and do not
scale well to high-dimensional output. To resolve these issues, we generalize
the classic autoregression (AR), which is widely used for its simplicity,
robustness, accuracy, and tractability, and propose generalized autoregression
(GAR) using tensor formulation and latent features. GAR can deal with arbitrary
dimensional outputs and arbitrary multi-fidelity data structures to satisfy the
demand of multi-fidelity fusion for complex problems; it admits a fully
tractable likelihood and posterior requiring no approximate inference and
scales well to high-dimensional problems. Furthermore, we prove the
autokrigeability theorem based on GAR in the multi-fidelity case and develop
CIGAR, a simplified GAR that matches GAR's predictive mean accuracy while
reducing computation by a factor of d^3, where d is the dimensionality of the
output. The empirical assessment includes many canonical PDEs and real
scientific examples and demonstrates that the proposed method consistently
outperforms the SOTA methods by a large margin (up to 6x improvement in RMSE)
with only a couple of high-fidelity training samples.
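To make the starting point concrete, the classic autoregression that GAR generalizes models the high-fidelity response as a scaled low-fidelity response plus a discrepancy term, y_H(x) = rho * y_L(x) + delta(x). Below is a minimal NumPy sketch of that idea under simplifying assumptions (scalar output, a least-squares estimate of rho, and a polynomial surrogate for delta in place of the Gaussian-process treatment); the toy simulators and all names are illustrative, not the paper's implementation.

```python
import numpy as np

# Toy simulators standing in for cheap and expensive solvers (assumed for
# illustration only): the low-fidelity model is biased but cheap to sample.
def f_low(x):
    return np.sin(8 * x)

def f_high(x):
    return 1.5 * np.sin(8 * x) + 0.2 * x

x_hi = np.linspace(0.0, 1.0, 5)        # only a few high-fidelity samples
y_hi = f_high(x_hi)
y_lo_at_hi = f_low(x_hi)               # low-fidelity run at the same inputs

# Classic AR fusion: y_H(x) = rho * y_L(x) + delta(x).
# Estimate rho by least squares, then fit the residual discrepancy delta
# with a simple quadratic (a stand-in for the usual GP surrogate).
rho = (y_lo_at_hi @ y_hi) / (y_lo_at_hi @ y_lo_at_hi)
delta_coef = np.polyfit(x_hi, y_hi - rho * y_lo_at_hi, deg=2)

def predict(x):
    """Fused multi-fidelity prediction from the two fitted components."""
    return rho * f_low(x) + np.polyval(delta_coef, x)

# With 5 high-fidelity points, the fused model tracks f_high closely.
x_test = np.linspace(0.0, 1.0, 200)
rmse = np.sqrt(np.mean((predict(x_test) - f_high(x_test)) ** 2))
```

The sketch uses scalar outputs; the paper's contribution is precisely to extend this recursion to arbitrary high-dimensional outputs and non-nested data via tensor formulations and latent features, while keeping the likelihood and posterior tractable.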
