In this paper, we propose conditional adversarial networks (CANs), a
framework for multi-domain text classification (MDTC) that exploits the
relationship between shared features and label predictions to make the
shared features more discriminative. The proposed CAN introduces a
conditional domain discriminator to model the domain variance in both shared
feature representations and class-aware information simultaneously and adopts
entropy conditioning to guarantee the transferability of the shared features.
We provide theoretical analysis for the CAN framework, showing that CAN's
objective is equivalent to minimizing the total divergence among multiple joint
distributions of shared features and label predictions. Therefore, CAN is a
theoretically sound adversarial network that discriminates over multiple
distributions. Evaluation results on two MDTC benchmarks show that CAN
outperforms prior methods. Further experiments demonstrate that CAN has a good
ability to generalize learned knowledge to unseen domains.
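The two ingredients named above, conditioning the domain discriminator on both the shared features and the label predictions, and entropy conditioning, can be illustrated with a minimal sketch. The construction below is an assumption about the implementation (the paper's abstract does not specify it): the discriminator input is formed as the per-example outer product of the shared feature vector and the class-probability vector, and each example's adversarial loss is weighted by a function of its prediction entropy so that confidently classified examples dominate the alignment.

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Shannon entropy H(p) of each row of class probabilities."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def conditional_discriminator_input(features, probs):
    """Joint (feature, prediction) representation per example.

    Outer product of the shared feature vector (B, d) and the
    class-probability vector (B, c), flattened to (B, d*c), so the
    domain discriminator sees both the features and the class-aware
    information simultaneously.
    """
    batch = features.shape[0]
    return np.einsum('bd,bc->bdc', features, probs).reshape(batch, -1)

def entropy_weights(probs):
    """Entropy-conditioning weights, larger for confident predictions.

    w = 1 + exp(-H(p)) is one common choice; the exact form used by
    CAN is an assumption here.
    """
    return 1.0 + np.exp(-prediction_entropy(probs))

# Toy usage: 4 examples, 3-dim shared features, 2 classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))
logits = rng.normal(size=(4, 2))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

joint = conditional_discriminator_input(feats, probs)   # shape (4, 6)
weights = entropy_weights(probs)                         # shape (4,)
```

In a full training loop, `joint` would be fed to the domain discriminator and `weights` would rescale each example's adversarial loss; both pieces here are a sketch of the general conditioning mechanism, not the authors' exact architecture.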