Published on Tue May 25 2021

Improving Few-shot Learning with Weakly-supervised Object Localization

Inyong Koo, Minki Jeong, Changick Kim

Few-shot learning often involves metric learning-based classifiers. However, applying global pooling in the feature extractor may not produce an embedding that correctly focuses on the class object. We propose a novel framework that generates class representations by extracting features from class-relevant regions of the images.

Abstract

Few-shot learning often involves metric learning-based classifiers, which predict the image label by comparing the distance between the extracted feature vector and class representations. However, applying global pooling in the backend of the feature extractor may not produce an embedding that correctly focuses on the class object. In this work, we propose a novel framework that generates class representations by extracting features from class-relevant regions of the images. Given only a few exemplary images with image-level labels, our framework first localizes the class objects by spatially decomposing the similarity between the images and their class prototypes. Then, enhanced class representations are obtained from the localization results. We also propose a loss function to enhance distinctions of the refined features. Our method outperforms the baseline few-shot model on the miniImageNet and tieredImageNet benchmarks.
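
As a rough illustration of this pipeline, the sketch below (PyTorch, not the authors' released code) builds class prototypes by average-pooling support features, spatially decomposes the similarity between a feature map and a prototype to localize class-relevant regions, and then re-pools the feature map weighted by that localization map to obtain a refined class representation. The cosine similarity, ReLU weighting, and tensor shapes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def prototypes_from_support(feat_maps, labels, n_classes):
    """feat_maps: (N, C, H, W) support feature maps; labels: (N,) class ids."""
    pooled = feat_maps.mean(dim=(2, 3))                    # global average pooling -> (N, C)
    protos = torch.stack([pooled[labels == k].mean(0)      # per-class mean embedding
                          for k in range(n_classes)])      # -> (n_classes, C)
    return protos

def similarity_map(feat_map, proto):
    """Spatially decompose the similarity between one image and one prototype."""
    # feat_map: (C, H, W), proto: (C,)
    fm = F.normalize(feat_map.flatten(1), dim=0)           # (C, H*W), unit-norm per location
    p = F.normalize(proto, dim=0)                          # (C,)
    return (p @ fm).view(feat_map.shape[1:])               # cosine similarity per location -> (H, W)

def refined_representation(feat_map, sim):
    """Weighted pooling over class-relevant regions instead of global pooling."""
    w = torch.relu(sim)                                    # keep positively correlated locations
    w = w / (w.sum() + 1e-8)                               # normalize spatial weights
    return (feat_map * w.unsqueeze(0)).sum(dim=(1, 2))     # (C,) refined class embedding

# Toy usage: a 5-way, 1-shot episode with random feature maps.
feats = torch.randn(5, 64, 10, 10)
labels = torch.arange(5)
protos = prototypes_from_support(feats, labels, n_classes=5)
sim = similarity_map(feats[0], protos[0])
refined = refined_representation(feats[0], sim)
```

In this sketch the refined embedding simply replaces the globally pooled one; the paper's additional loss on the refined features is omitted.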

Wed Dec 18 2019
Computer Vision
Semantic Regularization: Improve Few-shot Image Classification by Reducing Meta Shift
Few-shot image classification requires the classifier to robustly cope with unseen classes even if there are only a few samples for each class. The key is to train a class encoder and decoder structure that can encode the sample embedding features with a trained semantic basis.
Tue Nov 12 2019
Computer Vision
SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning
Few-shot learners aim to recognize new object classes based on a small number of labeled training examples. To prevent overfitting, state-of-the-art few-shot learners use meta-learning on convolutional-network features.
Wed Jun 05 2019
Machine Learning
Discriminative Few-Shot Learning Based on Directional Statistics
Metric-based few-shot learning methods try to overcome the difficulty caused by the lack of training examples by learning an embedding that makes comparison easy. We propose a novel algorithm to generate class representatives for few-shot classification tasks.
Fri Mar 01 2019
Computer Vision
Semantic-Guided Multi-Attention Localization for Zero-Shot Learning
Zero-shot learning extends the conventional object classification to the unseen class recognition by introducing semantic representations of classes. We propose a semantic-guided multi-attention localization model, which automatically discovers the most discriminative parts of objects.
Mon Feb 12 2018
Machine Learning
Few-Shot Learning with Metric-Agnostic Conditional Embeddings
Learning high-quality class representations from few examples is a key problem in metric-learning approaches to few-shot learning. To accomplish this, we introduce a novel architecture in which class representations are conditioned for each few-shot trial based on a target image.
Sat Dec 26 2020
Machine Learning
Few Shot Learning With No Labels
Mon Jun 13 2016
Machine Learning
Matching Networks for One Shot Learning
The standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. We employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories.
Thu Sep 04 2014
Computer Vision
Very Deep Convolutional Networks for Large-Scale Image Recognition
Convolutional networks of increasing depth can achieve state-of-the-art results. The research was the basis of the team's ImageNet Challenge 2014 submission.
Wed May 23 2018
Artificial Intelligence
TADAM: Task dependent adaptive metric for improved few-shot learning
Few-shot learning has become essential for producing models that generalize from few examples. We identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms.
Wed Mar 25 2020
Machine Learning
Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?
Few-shot learning is widely used as one of the standard benchmarks in meta-learning. We show that a good learned embedding model can be more effective than sophisticated algorithms.
Wed Mar 15 2017
Machine Learning
Prototypical Networks for Few-shot Learning
We propose prototypical networks for the problem of few-shot classification. We further extend prototypical networks to zero-shot learning and achieve state-of-the-art results on the CU-Birds dataset.
Fri Mar 02 2018
Machine Learning
Meta-Learning for Semi-Supervised Few-Shot Classification
In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. We propose novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples. These models are trained in an end-to-end way on episodes to learn to leverage the unlabeled examples.