Published on Wed Mar 15 2017

Zero-Shot Recognition using Dual Visual-Semantic Mapping Paths

Yanan Li, Donghui Wang, Huanhang Hu, Yuetan Lin, Yueting Zhuang

Abstract

Zero-shot recognition aims to accurately recognize objects of unseen classes by using a shared visual-semantic mapping between the image feature space and the semantic embedding space. This mapping is learned on training data of seen classes and is expected to have transfer ability to unseen classes. In this paper, we tackle this problem by exploiting the intrinsic relationship between the semantic space manifold and the transfer ability of visual-semantic mapping. We formalize their connection and cast zero-shot recognition as a joint optimization problem. Motivated by this, we propose a novel framework for zero-shot recognition, which contains dual visual-semantic mapping paths. Our analysis shows this framework can not only apply prior semantic knowledge to infer underlying semantic manifold in the image feature space, but also generate optimized semantic embedding space, which can enhance the transfer ability of the visual-semantic mapping to unseen classes. The proposed method is evaluated for zero-shot recognition on four benchmark datasets, achieving outstanding results.
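The abstract's core premise is a visual-semantic mapping learned on seen classes and reused for unseen ones. As a rough illustration of that idea (not the paper's dual-path method), the sketch below learns a single linear mapping from image features to a semantic space via ridge regression on synthetic seen-class data, then labels unseen-class samples by nearest semantic prototype. All data, dimensions, and names here are invented for the example.

```python
import numpy as np

# Hypothetical sketch of a baseline visual-semantic mapping for
# zero-shot recognition; all data below is synthetic.
rng = np.random.default_rng(0)

d_vis, d_sem = 64, 16                        # visual / semantic dims
seen_protos = rng.normal(size=(5, d_sem))    # semantic vectors, 5 seen classes
unseen_protos = rng.normal(size=(3, d_sem))  # semantic vectors, 3 unseen classes

# Synthetic seen-class training set: each image feature is a noisy
# linear image of its class's semantic prototype.
A = rng.normal(size=(d_sem, d_vis))
y_seen = rng.integers(0, 5, size=200)
X_seen = seen_protos[y_seen] @ A + 0.1 * rng.normal(size=(200, d_vis))
S_seen = seen_protos[y_seen]

# Ridge regression: W maps visual features into the semantic space.
lam = 1.0
W = np.linalg.solve(X_seen.T @ X_seen + lam * np.eye(d_vis),
                    X_seen.T @ S_seen)

# Zero-shot inference: project unseen-class samples with the same W,
# then assign the nearest unseen prototype by cosine similarity.
y_unseen = rng.integers(0, 3, size=50)
X_unseen = unseen_protos[y_unseen] @ A + 0.1 * rng.normal(size=(50, d_vis))
proj = X_unseen @ W
sim = (proj / np.linalg.norm(proj, axis=1, keepdims=True)) @ \
      (unseen_protos / np.linalg.norm(unseen_protos, axis=1, keepdims=True)).T
pred = sim.argmax(axis=1)
print("zero-shot accuracy on synthetic data:", (pred == y_unseen).mean())
```

The paper argues that this kind of single shared mapping transfers poorly when the semantic manifold of unseen classes differs from that of seen classes; its dual-path framework instead refines the semantic embedding space jointly with the mapping.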

Thu Mar 08 2018
Computer Vision
Preserving Semantic Relations for Zero-Shot Learning
Zero-shot learning has gained popularity due to its potential to scale recognition models without requiring additional training data. We believe that the potential offered by this paradigm is not yet fully exploited. We propose to utilize the structure of the space spanned by the attributes using a set of relations.
Tue Jul 24 2018
Computer Vision
Learning Class Prototypes via Structure Alignment for Zero-Shot Recognition
Zero-shot learning (ZSL) aims to recognize objects of novel classes without training samples of specific classes. We propose a coupled dictionary learning approach to align the visual-semantic structures using the class prototypes.
Fri Mar 01 2019
Computer Vision
Semantic-Guided Multi-Attention Localization for Zero-Shot Learning
Zero-shot learning extends the conventional object classification to the unseen class recognition by introducing semantic representations of classes. We propose a semantic-guided multi-attention localization model, which automatically discovers the most discriminative parts of objects.
Tue Jun 16 2020
Computer Vision
Learning the Redundancy-free Features for Generalized Zero-Shot Object Recognition
Zero-shot object recognition or zero-shot learning aims to transfer the recognition ability among semantically related categories. However, the images of different fine-grained objects tend to merely exhibit subtle differences in appearance. To reduce the superfluous information in the fine-grained objects, we propose
Sat Mar 30 2019
Machine Learning
Adaptive Adjustment with Semantic Feature Space for Zero-Shot Recognition
Zero-shot recognition (ZSR) has gained increasing attention in machine learning and image processing fields. Conventional ZSR easily suffers from domain shift and hubness problems. We propose a novel ZSR learning framework that can handle these two issues well.
Fri Jun 22 2018
Computer Vision
Global Semantic Consistency for Zero-Shot Learning
Global Semantic Consistency Network (GSC-Net) makes complete use of the semantic information of both seen and unseen classes. We also adopt a soft label embedding loss to further exploit the semantic relationships among classes.