Published on Fri Feb 19 2021

Conditional Adversarial Networks for Multi-Domain Text Classification

Yuan Wu, Diana Inkpen, Ahmed El-Roby

The proposed CAN introduces a conditional domain discriminator to model the domain variance in both shared feature representations and class-aware information. CAN has a good ability to generalize learned knowledge to unseen domains.

Abstract

In this paper, we propose conditional adversarial networks (CANs), a framework for multi-domain text classification (MDTC) that exploits the relationship between the shared features and the label predictions to make the shared features more discriminative. The proposed CAN introduces a conditional domain discriminator that models the domain variance in both the shared feature representations and the class-aware information simultaneously, and adopts entropy conditioning to guarantee the transferability of the shared features. We provide a theoretical analysis of the CAN framework, showing that CAN's objective is equivalent to minimizing the total divergence among multiple joint distributions of shared features and label predictions. Therefore, CAN is a theoretically sound adversarial network that discriminates over multiple distributions. Evaluation results on two MDTC benchmarks show that CAN outperforms prior methods. Further experiments demonstrate that CAN generalizes learned knowledge well to unseen domains.
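The two mechanisms named in the abstract can be illustrated with a small sketch. The exact architecture is not given here, so this is only a hedged illustration under common assumptions for conditional adversarial training: the domain discriminator is conditioned on the joint of shared features and label predictions (here via a flattened outer product), and entropy conditioning down-weights uncertain predictions in the adversarial loss. All names (`entropy_weight`, `conditioned_input`) and the example dimensions are hypothetical, not from the paper.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a class-probability vector."""
    return float(-np.sum(p * np.log(p + eps)))

def entropy_weight(p):
    """Entropy-conditioning weight (illustrative form): confident,
    low-entropy predictions get a larger weight in the adversarial loss."""
    return 1.0 + np.exp(-entropy(p))

def conditioned_input(f, p):
    """Condition shared features on label predictions by flattening the
    outer product f (x) p, which the domain discriminator would consume."""
    return np.outer(f, p).ravel()

# Hypothetical example: a 4-dim shared feature and a binary prediction.
f = np.array([0.5, -1.0, 0.3, 2.0])
p_confident = np.array([0.95, 0.05])
p_uncertain = np.array([0.5, 0.5])

x = conditioned_input(f, p_confident)
assert x.shape == (4 * 2,)  # joint feature-prediction representation
# Confident predictions receive a larger entropy-conditioning weight.
assert entropy_weight(p_confident) > entropy_weight(p_uncertain)
```

The outer-product conditioning lets the discriminator see feature-prediction interactions rather than features alone, which is what allows it to model class-aware domain variance.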

Thu Feb 15 2018
Machine Learning
Multinomial Adversarial Networks for Multi-Domain Text Classification
The availability of training data can vary drastically across domains. For some domains there may not be any annotated data at all. We propose a multinomial adversarial network (MAN) to tackle the text classification problem.
Sun Jan 31 2021
Machine Learning
Mixup Regularized Adversarial Networks for Multi-Domain Text Classification
Using the shared-private paradigm and adversarial training has significantly improved the performances of multi-domain text classification (MDTC) models. We propose a mixup regularized adversarial network to address these two issues.
Wed Sep 18 2019
Machine Learning
Dual Adversarial Co-Learning for Multi-Domain Text Classification
We propose a novel dual adversarial co-learning approach for multi-domain text classification (MDTC). The approach learns shared-private networks for feature extraction. We conduct experiments on multi-domain sentiment classification datasets.
Wed Mar 25 2020
Machine Learning
Adversarial Multi-Binary Neural Network for Multi-class Classification
Multi-class text classification is one of the key problems in machine learning and natural language processing. Emerging neural networks deal with the problem using a multi-output softmax layer and achieve substantial progress.
Wed Apr 19 2017
NLP
Adversarial Multi-task Learning for Text Classification
Neural network models focus on learning the shared layers to extract common, task-invariant features. In most existing approaches, the extracted features are prone to contamination by task-specific features or noise introduced by other tasks. We propose an adversarial multi-task learning framework.
Sun Sep 16 2018
NLP
Cross-Domain Labeled LDA for Cross-Domain Text Classification
Cross-domain text classification aims at building a classifier for a target domain. One promising idea is to minimize the feature distribution differences between the two domains. To address this problem, we propose a novel group alignment method, along with partial supervision for the model's learning in the source domain.