Published on Mon Nov 20 2017

Parameter Reference Loss for Unsupervised Domain Adaptation

Jiren Jin, Richard G. Calland, Takeru Miyato, Brian K. Vogel, Hideki Nakayama

Abstract

The success of deep learning in computer vision is mainly attributed to an abundance of data. However, collecting large-scale data is not always possible, especially for supervised labels. Unsupervised domain adaptation (UDA) aims to utilize labeled data from a source domain to learn a model that generalizes to a target domain of unlabeled data. Much existing work uses Siamese network-based models, where two streams of neural networks process the source and the target domain data respectively. Nevertheless, most of these approaches focus on minimizing the domain discrepancy, overlooking the importance of preserving the discriminative ability of target-domain features. Another important problem in UDA research is how to evaluate methods properly. Common evaluation procedures require target domain labels for hyper-parameter tuning and model selection, contradicting the definition of the UDA task. Hence we propose a more reasonable evaluation principle that avoids this contradiction by simply adopting the latest snapshot of a model for evaluation. This adds an extra requirement for UDA methods beyond the main performance criterion: stability during training. We design a novel method that connects the target domain stream to the source domain stream with a Parameter Reference Loss (PRL) to solve these problems simultaneously. Experiments on various datasets show that the proposed PRL not only improves performance on the target domain, but also stabilizes the training procedure. As a result, PRL-based models do not need the contradictory model selection, and thus are more suitable for practical applications.
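The abstract describes connecting the target-domain stream to the source-domain stream through a loss on their parameters. As a rough, hypothetical sketch (the paper's exact formulation, norm, and weighting are not given here), PRL can be read as a weighted squared-distance penalty between corresponding parameters of the two streams, keeping the target stream anchored to the discriminative source representation:

```python
import numpy as np

def parameter_reference_loss(target_params, source_params, weight=0.1):
    """Hypothetical sketch of a Parameter Reference Loss (PRL).

    Penalizes the deviation of target-stream parameters from the
    corresponding source-stream (reference) parameters, so the target
    stream stays close to the discriminative source representation.
    `weight` is an assumed trade-off hyper-parameter, not from the paper.
    """
    return weight * sum(
        float(np.sum((t - s) ** 2))
        for t, s in zip(target_params, source_params)
    )

# Toy example: one weight tensor per stream.
target = [np.array([1.0, 2.0])]
source = [np.array([1.0, 0.0])]
loss = parameter_reference_loss(target, source, weight=0.5)  # 0.5 * (0 + 4)
```

In a full training loop, this term would presumably be added to the usual source-domain classification loss (and any domain-discrepancy loss), with the source-stream parameters serving as the reference.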

Related Papers

Tue Sep 01 2020
Machine Learning
A Review of Single-Source Deep Unsupervised Visual Domain Adaptation
Domain adaptation is a machine learning paradigm that aims to learn a model from a source domain that can perform well on a different (but related) target domain. Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.

Thu Sep 02 2021
Machine Learning
Adversarial Robustness for Unsupervised Domain Adaptation
Unsupervised Domain Adaptation (UDA) studies have shown great success in practice. Conventional adversarial training methods are not suitable for achieving adversarial robustness on the unlabeled target domain of UDA models. We leverage intermediate representations learned by multiple robust ImageNet models.

Mon Sep 06 2021
Computer Vision
Improving Transferability of Domain Adaptation Networks Through Domain Alignment Layers
Deep learning (DL) has been the primary approach used in various computer vision tasks, but DL methods are also prone to the well-known domain shift problem. Multi-source unsupervised domain adaptation (MSDA) aims at learning a predictor for an unlabeled domain.

Fri Aug 13 2021
Computer Vision
Learning Transferable Parameters for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) enables a learning machine to adapt from a labeled source domain to an unlabeled domain under distribution shift. We find that only partial parameters are essential for learning domain-invariant information and generalizing well in UDA. We propose Transferable Parameter Learning (TransPar) to reduce the side effects brought by domain-specific information.

Fri May 28 2021
Computer Vision
Transformer-Based Source-Free Domain Adaptation
The paper is based on the task of source-free domain adaptation (SFDA). The model accuracy is highly correlated with whether or not attention is focused on the objects in an image. By doing so, the model is encouraged to turn attention towards the object regions.

Thu Jun 10 2021
Computer Vision
Cross-domain Contrastive Learning for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a fully-labeled source domain to a different unlabeled target domain. Most existing UDA methods learn domain-invariant feature representations by minimizing feature distances across domains. In this work, we build