Published on Sun Nov 22 2020

RNNP: A Robust Few-Shot Learning Approach

Pratik Mazumder, Pravendra Singh, Vinay P. Namboodiri


Abstract

Learning from a few examples is an important practical aspect of training classifiers. Various works have examined this aspect quite well. However, all existing approaches assume that the few examples provided are always correctly labeled. This is a strong assumption, especially if one considers the current techniques for labeling using crowd-based labeling services. We address this issue by proposing a novel robust few-shot learning approach. Our method relies on generating robust prototypes from a set of few examples. Specifically, our method refines the class prototypes by producing hybrid features from the support examples of each class. The refined prototypes help to classify the query images better. Our method can replace the evaluation phase of any few-shot learning method that uses a nearest neighbor prototype-based evaluation procedure to make it robust. We evaluate our method on the standard mini-ImageNet and tiered-ImageNet datasets. We perform experiments with various label corruption rates in the support examples of the few-shot classes. We obtain significant improvements over widely used few-shot learning methods, which suffer severe performance degradation in the presence of label noise. Finally, we provide extensive ablation experiments to validate our method.
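The abstract does not spell out how the hybrid features are constructed. As a minimal sketch, assuming hybrid features are convex combinations of pairs of support embeddings within a class (a hypothetical instantiation, not the authors' exact construction), prototype refinement and nearest-prototype classification could look like this:

```python
import numpy as np

def hybrid_features(support, lam=0.5):
    """Augment support embeddings with pairwise convex combinations.

    `support` is an (n, d) array of embedded support examples for one class.
    The mixing weight `lam` is an illustrative choice, not from the paper.
    """
    n = len(support)
    mixes = [lam * support[i] + (1 - lam) * support[j]
             for i in range(n) for j in range(i + 1, n)]
    if mixes:
        return np.vstack([support, np.array(mixes)])
    return support

def refined_prototype(support, lam=0.5):
    # Refined class prototype: mean over the support embeddings and
    # their hybrid (mixed) features.
    return hybrid_features(support, lam).mean(axis=0)

def classify(query, prototypes):
    # Nearest-prototype evaluation: assign the query embedding to the
    # class whose prototype is closest in Euclidean distance.
    dists = np.linalg.norm(prototypes - query, axis=1)
    return int(np.argmin(dists))
```

Averaging over mixed features pulls the prototype toward the bulk of the support set, which is the intuition behind robustness to a mislabeled support example: a single corrupted embedding contributes to fewer of the averaged terms than in a plain mean.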

Wed Jun 17 2020
Machine Learning
Enhancing Few-Shot Image Classification with Unlabelled Examples
We develop a transductive meta-learning method that uses unlabelled instances to improve few-shot image classification performance. Our approach combines a regularized Mahalanobis-distance-based soft k-means clustering procedure with a state-of-the-art neural adaptive feature extractor.
Sun Aug 23 2020
Artificial Intelligence
Few-Shot Image Classification via Contrastive Self-Supervised Learning
Most previous few-shot learning algorithms are based on meta-training with fake few-shot tasks as training samples. The trained model is also limited by the type of tasks. In this paper we propose a new paradigm of unsupervised few-shot learning to repair these deficiencies.
Tue Feb 26 2019
Machine Learning
Assume, Augment and Learn: Unsupervised Few-Shot Meta-Learning via Random Labels and Data Augmentation
The field of few-shot learning has been laboriously explored in the supervised setting. On the other hand, the unsupervised few-shot learning setting has seen little investigation. We propose a method, named Assume, Augment and Learn or AAL, for generating few-
Thu Mar 26 2020
Machine Learning
Instance Credibility Inference for Few-Shot Learning
Few-shot learning (FSL) aims to recognize new objects with extremely limited training data for each category. This paper presents a simple statistical approach, dubbed Instance Credibility Inference (ICI), to exploit the distribution support of unlabeled instances.
Wed Nov 29 2017
Machine Learning
Semi-Supervised and Active Few-Shot Learning with Prototypical Networks
We consider the problem of semi-supervised few-shot classification. A classifier needs to adapt to new tasks using a few labeled examples and (potentially many) unlabeled examples. The features extracted with Prototypical Networks are clustered using K-means.
Mon May 28 2018
Artificial Intelligence
Object-Level Representation Learning for Few-Shot Image Classification
Few-shot learning that trains image classifiers over few labeled examples per category is a challenging task. We use the object-level relation learned from the additional dataset to infer the similarity of images in our target dataset with unseen categories.