Confounder Balancing in Adversarial Domain Adaptation for Pre-Trained Large Models Fine-Tuning. (arXiv:2310.16062v1 [cs.LG])
The strong generalization, in-context learning, and emergent abilities of pre-trained large models (PLMs) let them handle specific tasks without direct training data, making them well-suited foundation models for adversarial domain adaptation (ADA) methods that transfer knowledge learned from a source domain to target domains. However, existing ADA methods fail to properly account for confounders, which are the root cause of the distribution shift between the source and target domains. This study proposes adversarial domain adaptation with confounder balancing for PLM fine-tuning (ADA-CBF). ADA-CBF comprises a PLM serving as the feature extractor, a domain classifier, and a confounder classifier, all jointly trained with an adversarial loss. This loss is designed to improve domain-invariant representation learning by diluting the discriminative power of the domain classifier, while simultaneously balancing the confounder distribution between the source and unmeasured domains during training. Compared to existing ADA methods, ADA-CBF can correctly identify confounders in domain-invariant features, thereby eliminating confounder bias in the features extracted by the PLM. The confounder classifier in ADA-CBF is designed as a plug-and-play module and can be applied in environments where confounders are measurable, unmeasurable, or partially measurable. Empirical results on natural language processing and computer vision downstream tasks show that ADA-CBF outperforms the latest GPT-4, LLaMA2, ViT, and ADA methods.
Source: https://arxiv.org/abs/2310.16062
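
The abstract describes the architecture only at a high level. Below is a minimal PyTorch sketch of one plausible reading, assuming a DANN-style gradient reversal layer implements the adversarial loss (the paper may use a different mechanism). All names here (ADACBFSketch, plm_encoder, train_step, lambd) are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    # Gradient reversal: identity in the forward pass, negated (scaled)
    # gradient in the backward pass -- the standard DANN-style trick for
    # training a feature extractor adversarially against a classifier.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class ADACBFSketch(nn.Module):
    # Hypothetical composition of the three components named in the
    # abstract: a PLM feature extractor plus domain and confounder
    # classifiers trained adversarially through gradient reversal.
    def __init__(self, plm_encoder, feat_dim, num_classes,
                 num_domains=2, num_confounders=2):
        super().__init__()
        self.encoder = plm_encoder  # any PLM mapping inputs to (B, feat_dim)
        self.task_head = nn.Linear(feat_dim, num_classes)
        self.domain_head = nn.Linear(feat_dim, num_domains)
        self.conf_head = nn.Linear(feat_dim, num_confounders)

    def forward(self, x, lambd=1.0):
        feat = self.encoder(x)
        # Reversed gradients from both adversarial heads push the encoder
        # toward features that are domain-invariant and confounder-balanced.
        return (self.task_head(feat),
                self.domain_head(grad_reverse(feat, lambd)),
                self.conf_head(grad_reverse(feat, lambd)))

# One hypothetical joint training step. When confounders are unmeasured,
# conf_labels could come from clustering or pseudo-labeling, which is one
# way the confounder head could stay plug-and-play; the abstract does not
# specify the mechanism.
def train_step(model, optimizer, inputs, task_labels, domain_labels,
               conf_labels, lambd=0.5):
    ce = nn.CrossEntropyLoss()
    task_logits, dom_logits, conf_logits = model(inputs, lambd=lambd)
    loss = (ce(task_logits, task_labels)
            + ce(dom_logits, domain_labels)
            + ce(conf_logits, conf_labels))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Routing both the domain and confounder heads through gradient reversal is what would let a single backward pass both dilute domain discrimination and balance the confounder distribution, matching the dual role the abstract assigns to the adversarial loss.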