An Online Learning Approach to Interpolation and Extrapolation in Domain Generalization. (arXiv:2102.13128v1 [cs.LG])

A popular assumption for out-of-distribution generalization is that the
training data comprises sub-datasets, each drawn from a distinct distribution;
the goal is then to “interpolate” these distributions and “extrapolate” beyond
them; this objective is broadly known as domain generalization. A common
belief is that empirical risk minimization (ERM) can interpolate but not
extrapolate, and that the latter is considerably more difficult, but these
claims are vague and lack formal
justification. In this work, we recast generalization over sub-groups as an
online game between a player minimizing risk and an adversary presenting new
test distributions. Under an existing notion of inter- and extrapolation based
on reweighting of sub-group likelihoods, we rigorously demonstrate that
extrapolation is computationally much harder than interpolation, though their
statistical complexity is not significantly different. Furthermore, we show
that ERM, or a noisy variant of it, is provably minimax-optimal for both tasks.
Our framework presents a new avenue for the formal analysis of domain
generalization algorithms and may be of independent interest.

Source: https://arxiv.org/abs/2102.13128
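To make the setup concrete, below is a minimal sketch of the kind of online game the abstract describes, under one common formalization of "reweighting of sub-group likelihoods": the adversary presents test distributions as weighted mixtures of the training sub-groups, with weights restricted to the probability simplex for interpolation and allowed slightly outside it (coefficients bounded below by `-eps`) as one illustrative reading of extrapolation. The toy regression instance and all names here (`weighted_erm`, `adversary`, `eps`) are assumptions for illustration, not constructions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: three sub-groups, each a 1-D regression task with
# its own slope, so no single linear predictor fits every group perfectly.
slopes = [1.0, 2.0, 3.0]
groups = [rng.normal(0.0, 1.0, 300) for _ in slopes]
targets = [s * x + rng.normal(0.0, 0.1, x.shape) for s, x in zip(slopes, groups)]

def group_risks(w):
    """Per-group mean squared error of the linear predictor h(x) = w * x."""
    return np.array([np.mean((w * x - y) ** 2) for x, y in zip(groups, targets)])

def weighted_erm(lam):
    """ERM on the lam-reweighted pooled objective (closed-form least squares)."""
    num = sum(l * np.dot(x, y) for l, x, y in zip(lam, groups, targets))
    den = sum(l * np.dot(x, x) for l, x in zip(lam, groups))
    return num / den

def adversary(w, eps):
    """Risk-maximizing reweighting over {lam : lam_i >= -eps, sum(lam) = 1}.

    The risk is linear in lam, so the maximum sits at a vertex of this
    polytope: weight 1 + (K-1)*eps on the worst group and -eps elsewhere.
    eps = 0 recovers interpolation (the probability simplex); eps > 0 is
    one illustrative reading of extrapolation beyond the simplex.
    """
    risks = group_risks(w)
    lam = np.full(len(risks), -eps)
    lam[np.argmax(risks)] = 1.0 + (len(risks) - 1) * eps
    return lam

# Online game: each round the adversary reveals the reweighting that is worst
# for the player's current predictor, and the player responds with ERM on the
# running average of all reweightings seen so far (follow-the-leader style).
avg_lam = np.ones(len(slopes)) / len(slopes)
w = weighted_erm(avg_lam)
for t in range(1, 51):
    lam_t = adversary(w, eps=0.0)               # interpolation game
    avg_lam = ((t - 1) * avg_lam + lam_t) / t
    w = weighted_erm(avg_lam)

risks = group_risks(w)
print(f"final predictor w = {w:.3f}")
print(f"worst-case interpolation risk = {risks.max():.3f}")
print(f"worst-case extrapolation risk (eps=0.5) = {adversary(w, 0.5) @ risks:.3f}")
```

Because the risk is linear in the mixture weights, the adversary's best reweighting always sits at a vertex of its constraint set, which is why both worst cases above reduce to closed-form expressions over the per-group risks, and why extrapolated worst-case risk is always at least the interpolated one.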
