Model-Contrastive Federated Domain Adaptation. (arXiv:2305.10432v1 [cs.LG])
Federated domain adaptation (FDA) aims to collaboratively transfer knowledge
from source clients (domains) to the related but different target client,
without communicating the local data of any client. Moreover, the source
clients have different data distributions, which makes knowledge transfer
extremely challenging. Despite recent progress in FDA, we empirically find
that existing methods cannot leverage models from heterogeneous domains and
thus fail to achieve strong performance. In this paper, we propose a
model-based method named FDAC, aiming to address Federated Domain
Adaptation based on Contrastive learning and Vision Transformer
(ViT). In particular, contrastive learning can exploit unlabeled data to
learn strong representations, and the ViT architecture extracts more
adaptable features than convolutional neural networks (CNNs). To the
best of our knowledge, FDAC is the first attempt to learn transferable
representations by manipulating the latent architecture of ViT under the
federated setting. Furthermore, FDAC increases target data diversity by
compensating for the samples and features that each source model lacks,
based on domain augmentation and semantic matching. Extensive
experiments on several real datasets demonstrate that FDAC outperforms all the
comparative methods in most conditions. Moreover, FDAC also improves
communication efficiency, which is another key factor in the federated setting.
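
For intuition, below is a minimal, hypothetical sketch of the kind of
contrastive objective the abstract alludes to: an InfoNCE loss over ViT
feature embeddings of two augmented views of unlabeled data. The abstract
does not specify FDAC's actual loss, so every name, shape, and hyperparameter
here is an assumption rather than the paper's method.

```python
# Hypothetical sketch, not FDAC's actual objective: a generic InfoNCE
# contrastive loss over ViT [CLS]-style embeddings of unlabeled data.
import torch
import torch.nn.functional as F


def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """z1[i] and z2[i] are embeddings of two views of the same sample
    (a positive pair); all other samples in the batch act as negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Stand-ins for ViT embeddings (dim 768) of two views of one batch.
    batch, dim = 32, 768
    view1 = torch.randn(batch, dim)
    view2 = view1 + 0.05 * torch.randn(batch, dim)  # weakly perturbed view
    print(info_nce_loss(view1, view2).item())
```

In a federated setting, a loss of this form would be computed locally on each
client's own data, with only model parameters exchanged, consistent with the
constraint that no client communicates its local data.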
Source: https://arxiv.org/abs/2305.10432