Published on Wed Jun 16 2021

Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification

Zhipeng Luo, Xiaobing Zhang, Shijian Lu, Shuai Yi

Abstract

Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years. Compared with single-source unsupervised domain adaptation (SUDA), domain shift in MUDA exists not only between the source and target domains but also among the multiple source domains. Most existing MUDA algorithms focus on extracting domain-invariant representations among all domains, whereas the task-specific decision boundaries among classes are largely neglected. In this paper, we propose an end-to-end trainable network that exploits domain Consistency Regularization for unsupervised Multi-source domain Adaptive classification (CRMA). CRMA aligns not only the distributions of each pair of source and target domains but also those of all domains jointly. For each source-target pair, we employ an intra-domain consistency to regularize a pair of domain-specific classifiers and achieve intra-domain alignment. In addition, we design an inter-domain consistency that targets joint inter-domain alignment among all domains. To account for the varying similarity between each source domain and the target domain, we design an authorization strategy that adaptively assigns different authorities to the domain-specific classifiers for optimal pseudo-label prediction and self-training. Extensive experiments show that CRMA tackles unsupervised domain adaptation effectively under a multi-source setup and achieves superior adaptation consistently across multiple MUDA datasets.
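
The abstract names three mechanisms: an intra-domain consistency between each pair of domain-specific classifiers, an inter-domain consistency across all domains, and an authority-weighted pseudo-labeling scheme. As a rough illustration only, the PyTorch sketch below shows one plausible shape for such losses; the function names, the L1 form of the consistency terms, and the confidence threshold are assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def intra_domain_consistency(logits_a, logits_b):
    # Intra-domain consistency for one source-target pair: penalize
    # disagreement between the pair of domain-specific classifiers
    # (L1 distance between softmax outputs is a hypothetical choice).
    return (logits_a.softmax(dim=1) - logits_b.softmax(dim=1)).abs().mean()

def inter_domain_consistency(probs):
    # Inter-domain consistency: pull each domain's class-probability
    # output toward the mean over all domains for joint alignment.
    # probs: list of K tensors of shape (N, C), already softmaxed.
    mean_p = torch.stack(probs).mean(dim=0)
    return sum(F.l1_loss(p, mean_p) for p in probs) / len(probs)

def authorized_pseudo_labels(probs, authorities, threshold=0.9):
    # Authorization strategy (sketch): weight each domain-specific
    # classifier by its authority (e.g. derived from source-target
    # similarity) and keep only confident ensemble predictions as
    # pseudo labels for self-training.
    w = (authorities / authorities.sum()).view(-1, 1, 1)  # (K, 1, 1)
    ensemble = (torch.stack(probs) * w).sum(dim=0)        # (N, C)
    confidence, labels = ensemble.max(dim=1)
    return labels, confidence >= threshold
```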

Related Papers

Thu Nov 05 2020
Artificial Intelligence
Universal Multi-Source Domain Adaptation
Unsupervised domain adaptation enables intelligent models to transfer knowledge from a labeled source domain to a similar but unlabeled target domain. We propose a universal multi-source adaptation network to solve the domain adaptation problem without increasing the complexity of the model.
Tue Apr 09 2019
Computer Vision
Domain-Symmetric Networks for Adversarial Domain Adaptation
Unsupervised domain adaptation aims to learn a model or classifier for unlabeled samples on the target domain. The proposed SymNets are built on a symmetric design of source and target task classifiers, on top of which an additional classifier is constructed.
Fri Nov 22 2019
Machine Learning
Multi-source Distilling Domain Adaptation
Deep neural networks suffer from performance decay when there is a domain shift between the labeled source domain and unlabeled target domain. Conventional DA methods usually assume that the labeled data is sampled from a single source distribution. In reality, labeled data may be collected from multiple sources.
Wed Jul 08 2020
Machine Learning
Domain Adaptation with Auxiliary Target Domain-Oriented Classifier
Domain adaptation (DA) aims to transfer knowledge from a label-rich domain to a related but label-scarce domain. Pseudo-labeling assigns pseudo labels to each unlabeled sample via a classifier trained on the labeled data, but this ignores the distribution shift in DA problems and is inevitably biased toward the source data.
Wed Jul 01 2020
Computer Vision
Adversarial Network with Multiple Classifiers for Open Set Domain Adaptation
Domain adaptation aims to transfer knowledge from a domain with adequate labeled samples to a domain with scarce labeled samples. We propose a novel adversarial domain adaptation model with multiple auxiliary classifiers. The proposed multi-classifier structure introduces a weighting module that evaluates distinctive domain characteristics.
Sun Mar 08 2020
Computer Vision
Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation
Unsupervised domain adaptation aims to leverage labeled data from a source domain to learn a classifier for an unlabeled target domain. We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
Fri Oct 07 2016
Computer Vision
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Gradient-weighted Class Activation Mapping (Grad-CAM) uses the gradients of any target concept, flowing into the final convolutional layer, to produce a coarse localization map that highlights important regions. It is applicable to CNNs with fully-connected layers and CNNs used for structured outputs, without any architectural changes or re-training.
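
Since the snippet summarizes the mechanism, here is a minimal PyTorch sketch of the core Grad-CAM computation: global-average-pool the gradients of the class score with respect to the final convolutional feature maps to get channel weights, then take a ReLU of the weighted sum. Capturing the activations (e.g. with a forward hook) is assumed to have happened already.

```python
import torch
import torch.nn.functional as F

def grad_cam(activations, class_score):
    # activations: (C, H, W) feature maps of the final conv layer,
    # captured in the forward pass with gradients enabled.
    # class_score: scalar logit of the target class for this image.
    grads, = torch.autograd.grad(class_score, activations, retain_graph=True)
    weights = grads.mean(dim=(1, 2))                       # GAP of gradients
    cam = F.relu((weights[:, None, None] * activations).sum(dim=0))
    return cam / (cam.max() + 1e-8)                        # normalize to [0, 1]
```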
Wed Mar 03 2021
Computer Vision
Cross-View Regularization for Domain Adaptive Panoptic Segmentation
Panoptic segmentation unifies semantic segmentation and instance segmentation. We design a network that exploits inter-style consistency and inter-task regularization. The network leverages geometric invariance across styles to learn domain-invariant features.
Thu Dec 07 2017
Computer Vision
Maximum Classifier Discrepancy for Unsupervised Domain Adaptation
We present a method for unsupervised domain adaptation that maximizes the discrepancy between two classifiers' outputs, while a feature generator learns to produce target features near the support of the source. The method outperforms prior approaches on several benchmarks.
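
For reference, the discrepancy the snippet refers to is a distance between the two classifiers' probability outputs; the L1 form below is the one the MCD paper uses, with the adversarial schedule sketched in comments:

```python
import torch

def discrepancy(logits_1, logits_2):
    # MCD's discrepancy: mean absolute difference between the two
    # classifiers' softmax outputs on target samples.
    return (logits_1.softmax(dim=1) - logits_2.softmax(dim=1)).abs().mean()

# Training alternates three steps per iteration:
#   A. train generator + both classifiers on labeled source data;
#   B. fix the generator, update the classifiers to MAXIMIZE the
#      discrepancy on target data (while staying accurate on source);
#   C. fix the classifiers, update the generator to MINIMIZE it.
```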
Mon Oct 09 2017
Computer Vision
Deeper, Broader and Artier Domain Generalization
Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. For example, recognition in sketch images, which are distinctly more abstract and rarer than photos.
Fri Sep 26 2014
Neural Networks
Unsupervised Domain Adaptation by Backpropagation
Top-performing deep architectures are trained on massive amounts of labeled data. Domain adaptation often provides an attractive option given that labeled data of a similar nature but from a different domain are available. We show that this adaptation behaviour can be achieved in almost any feed-forward model.
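
The mechanism behind this paper is the gradient reversal layer: an identity map in the forward pass whose backward pass negates (and scales) the gradient, so the feature extractor learns features that confuse a downstream domain classifier. A standard PyTorch rendering:

```python
import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; the backward pass multiplies the
    # gradient by -lambda, so the feature extractor is pushed to
    # *fool* the domain classifier placed after this layer.

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```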
Sun May 20 2018
Machine Learning
Algorithms and Theory for Multiple-Source Adaptation
This work includes a number of novel contributions for the multiple-source adaptation problem. We present new normalized solutions with strong theoretical guarantees for the cross-entropy loss and other similar losses. We find that our algorithm outperforms competing approaches.
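
The combining rule studied in this line of work weights each source hypothesis by its source density at the input. The sketch below shows that distribution-weighted combination, assuming density estimates and per-source predictors are given; the paper's normalized cross-entropy variant is not reproduced here.

```python
import torch

def distribution_weighted_prediction(z, densities, predictions):
    # Distribution-weighted combining rule: weight each source
    # hypothesis h_k(x) in proportion to z_k * D_k(x).
    #   z:           (K,)      mixture weights over the K sources
    #   densities:   (K, N)    estimated source densities D_k(x)
    #   predictions: (K, N, C) per-source class-probability outputs
    w = z[:, None] * densities
    w = w / (w.sum(dim=0, keepdim=True) + 1e-12)  # normalize over sources
    return torch.einsum('kn,knc->nc', w, predictions)
```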