Published on Wed Apr 04 2018

Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval

Chao Li, Cheng Deng, Ning Li, Wei Liu, Xinbo Gao, Dacheng Tao

Cross-modal retrieval has made significant progress recently. We propose a self-supervised adversarial hashing (SSAH) approach. The proposed SSAH surpasses current state-of-the-art methods.

Abstract

Thanks to the success of deep learning, cross-modal retrieval has made significant progress recently. However, there still remains a crucial bottleneck: how to bridge the modality gap to further enhance the retrieval accuracy. In this paper, we propose a self-supervised adversarial hashing (SSAH) approach, which lies among the early attempts to incorporate adversarial learning into cross-modal hashing in a self-supervised fashion. The primary contribution of this work is that two adversarial networks are leveraged to maximize the semantic correlation and consistency of the representations between different modalities. In addition, we harness a self-supervised semantic network to discover high-level semantic information in the form of multi-label annotations. Such information guides the feature learning process and preserves the modality relationships in both the common semantic space and the Hamming space. Extensive experiments carried out on three benchmark datasets validate that the proposed SSAH surpasses the state-of-the-art methods.
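As a rough illustration of the architecture the abstract describes, below is a minimal PyTorch sketch. The layer sizes, input dimensions, and names (ModalityNet, Discriminator, adversarial_losses) are assumptions made for this sketch, not the authors' implementation: two modality networks produce semantic features and tanh-relaxed hash codes, a self-supervised label network does the same from multi-label annotations, and per-modality discriminators are trained adversarially so that the modality features become indistinguishable from the label network's semantic features.

```python
# Minimal sketch of the SSAH idea (hypothetical sizes/names; not the authors' code).
import torch
import torch.nn as nn

HASH_BITS, FEAT_DIM, NUM_LABELS = 64, 512, 24  # assumed sizes

class ModalityNet(nn.Module):
    """Maps an input into a common semantic feature and a relaxed hash code."""
    def __init__(self, in_dim):
        super().__init__()
        self.feat = nn.Sequential(nn.Linear(in_dim, FEAT_DIM), nn.ReLU())
        self.hash = nn.Sequential(nn.Linear(FEAT_DIM, HASH_BITS), nn.Tanh())
    def forward(self, x):
        f = self.feat(x)
        return f, self.hash(f)

class Discriminator(nn.Module):
    """Predicts whether a semantic feature came from the label network (1) or a modality network (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.ReLU(), nn.Linear(256, 1))
    def forward(self, f):
        return self.net(f)

img_net = ModalityNet(4096)        # e.g. pre-extracted CNN image features (assumed dim)
txt_net = ModalityNet(1386)        # e.g. bag-of-words text features (assumed dim)
lab_net = ModalityNet(NUM_LABELS)  # self-supervised semantic (label) network
d_img, d_txt = Discriminator(), Discriminator()
bce = nn.BCEWithLogitsLoss()

def adversarial_losses(img_x, txt_x, labels):
    (f_i, b_i) = img_net(img_x)
    (f_t, b_t) = txt_net(txt_x)
    (f_l, b_l) = lab_net(labels)
    n = labels.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    # Discriminators: separate label-network features from modality features.
    d_loss = (bce(d_img(f_l.detach()), ones) + bce(d_img(f_i.detach()), zeros) +
              bce(d_txt(f_l.detach()), ones) + bce(d_txt(f_t.detach()), zeros))
    # Modality networks: fool the discriminators and match the label network's
    # features/codes, aligning both modalities in the semantic and Hamming spaces.
    g_loss = (bce(d_img(f_i), ones) + bce(d_txt(f_t), ones) +
              ((f_i - f_l) ** 2).mean() + ((f_t - f_l) ** 2).mean() +
              ((b_i - b_l) ** 2).mean() + ((b_t - b_l) ** 2).mean())
    return d_loss, g_loss
```

In a training loop one would alternate discriminator and modality-network updates on these two losses, and at retrieval time binarize the tanh outputs with sign() to obtain the hash codes.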

Wed Apr 01 2020
Machine Learning
Task-adaptive Asymmetric Deep Cross-modal Hashing
Supervised cross-modal hashing aims to embed the semantic correlations of heterogeneous modality data into binary hash codes with discriminative semantic labels. The superiority of TA-ADCMH is demonstrated on two standard datasets.
Sun Nov 26 2017
Computer Vision
HashGAN: Attention-aware Deep Adversarial Hashing for Cross Modal Retrieval
The proposed new adversarial network, HashGAN, consists of three building blocks. The generative module and the discriminative module are trained in an adversarial way. Extensive evaluations on several benchmark datasets demonstrate that the proposed HashGAN brings substantial improvements over state-of-the-art cross-modal hashing methods.
Mon Apr 30 2018
Computer Vision
Cycle-Consistent Deep Generative Hashing for Cross-Modal Retrieval
In this paper, we propose a novel deep generative approach to cross-modal retrieval that learns hash functions in the absence of paired training samples. Our proposed approach employs an adversarial training scheme to learn a couple of hash functions enabling translation between modalities while assuming the underlying semantic ...
Fri Nov 06 2020
Computer Vision
Deep Cross-modal Hashing via Margin-dynamic-softmax Loss
Cross-modal hashing methods have attracted considerable attention due to their high retrieval efficiency and low storage cost. Almost all supervised cross-modal hashing methods depend on defining a similarity between datapoints with the label information to guide the hashing model learning, fully or partly. However, the defined similarity between datapoints can only capture the label information of ...
Wed Feb 07 2018
Computer Vision
SCH-GAN: Semi-supervised Cross-modal Hashing by Generative Adversarial Network
Cross-modal hashing aims to map heterogeneous multimedia data into a common Hamming space. We propose a novel Semi-supervised Cross-Modal Hashing approach by Generative Adversarial Network (SCH-GAN).
Fri Feb 07 2020
Computer Vision
Deep Robust Multilevel Semantic Cross-Modal Hashing