Published on Fri Sep 04 2020

Don't miss the Mismatch: Investigating the Objective Function Mismatch for Unsupervised Representation Learning

Bonifaz Stuhr, Jürgen Brauer

Finding general evaluation metrics for unsupervised representation learning techniques is a challenging open research question. Most approaches currently suffer from the objective function mismatch: performance on a desired target task can decrease when the unsupervised pretext task is trained for too long - especially when both tasks are ill-posed.

Abstract

Finding general evaluation metrics for unsupervised representation learning techniques is a challenging open research question, which has recently become increasingly pressing due to the growing interest in unsupervised methods. Even though these methods promise beneficial representation characteristics, most approaches currently suffer from the objective function mismatch. This mismatch means that the performance on a desired target task can decrease when the unsupervised pretext task is trained for too long - especially when both tasks are ill-posed. In this work, we build upon the widely used linear evaluation protocol and define new general evaluation metrics to quantitatively capture the objective function mismatch and the more generic metrics mismatch. We discuss the usability and stability of our protocols on a variety of pretext and target tasks and study mismatches in a wide range of experiments. Thereby we disclose dependencies of the objective function mismatch across several pretext and target tasks with respect to the pretext model's representation size, target model complexity, pretext and target augmentations, as well as pretext and target task types.
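The page does not reproduce the paper's metric definitions, but the core idea behind the evaluation can be sketched: extract frozen features from several pretext-training checkpoints, fit a linear probe on each, and check whether target-task performance degrades late in pretext training. The checkpoint schedule, the synthetic features, and the simple "drop from the best checkpoint" summary below are illustrative assumptions, not the paper's exact protocol or metrics.

```python
# Sketch of a linear-evaluation loop over pretext checkpoints (illustrative only).
# A drop in probe accuracy at later checkpoints is the symptom referred to as the
# objective function mismatch; synthetic features stand in for a real encoder.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def linear_probe_accuracy(features, labels):
    """Train a linear classifier on frozen features, return held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=0, stratify=labels
    )
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

# Synthetic stand-in: class-dependent signal that peaks mid-training, then degrades.
n_samples, dim, n_classes = 1000, 64, 10
labels = rng.integers(0, n_classes, size=n_samples)
class_means = rng.normal(size=(n_classes, dim))
signal_per_epoch = {10: 0.2, 20: 0.6, 30: 1.0, 40: 0.7, 50: 0.4}

accuracy_per_epoch = {}
for epoch, signal in signal_per_epoch.items():
    features = signal * class_means[labels] + rng.normal(size=(n_samples, dim))
    accuracy_per_epoch[epoch] = linear_probe_accuracy(features, labels)

# A naive mismatch-style summary: gap between the best and the final checkpoint.
best_acc = max(accuracy_per_epoch.values())
final_acc = accuracy_per_epoch[max(accuracy_per_epoch)]
print(accuracy_per_epoch)
print("target-task drop from best to final checkpoint:", best_acc - final_acc)
```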

Mon Aug 31 2020
Machine Learning
A Framework For Contrastive Self-Supervised Learning And Designing A New Approach
Contrastive self-supervised learning (CSL) is an approach to learning useful representations. We present a conceptual framework that characterizes CSL approaches and show its utility by designing Yet Another DIM (YADIM).
Tue Sep 15 2020
Artificial Intelligence
Evaluating representations by the complexity of learning low-loss predictors
Wed May 05 2021
Machine Learning
How Fine-Tuning Allows for Effective Meta-Learning
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm. We then provide risk bounds on the best predictor found by fine-tuning via gradient descent. The upper bound applies to general function classes.
Thu Feb 27 2020
Machine Learning
LEEP: A New Measure to Evaluate Transferability of Learned Representations
The Log Expected Empirical Prediction (LEEP) measure is simple and easy to compute (a minimal computation sketch appears after this list). LEEP can achieve up to 30% improvement when transferring from ImageNet to CIFAR100. It can predict the performance and convergence speed of both transfer and meta-transfer learning methods.
Thu Mar 25 2021
Computer Vision
Contrasting Contrastive Self-Supervised Representation Learning Models
In the past few years, we have witnessed remarkable breakthroughs in self-supervised representation learning. Despite the success and adoption of representations learned through this paradigm, much is yet to be understood about how different training methods and datasets influence performance on downstream tasks.
Tue Oct 01 2019
Computer Vision
A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark
Representation learning promises to unlock deep learning for the long tail of vision tasks without expensive labelled datasets. Popular protocols are often too constrained (linear classification) or limited in diversity (ImageNet, CIFAR, Pascal-VOC).
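As referenced in the LEEP entry above, the measure is computed from a source model's class probabilities on the target dataset together with the target labels. Below is a minimal sketch of the published LEEP formula; the random "source model" outputs and the class counts are placeholders, not results from any of the listed papers.

```python
# Minimal LEEP sketch: given a source model's class probabilities on target data
# and the target labels, LEEP is the average log-likelihood of the target labels
# under the empirical conditional distribution P(target label | source label).
import numpy as np

def leep(source_probs, target_labels, n_target_classes):
    n, n_source = source_probs.shape
    # Empirical joint distribution P(y, z) over (target label y, source label z).
    joint = np.zeros((n_target_classes, n_source))
    for y, p in zip(target_labels, source_probs):
        joint[y] += p
    joint /= n
    # Conditional P(y | z) = P(y, z) / P(z).
    p_z = joint.sum(axis=0, keepdims=True)
    cond = joint / np.clip(p_z, 1e-12, None)
    # LEEP = mean over examples of log sum_z P(y_i | z) * theta(x_i)_z.
    per_example = (cond[target_labels] * source_probs).sum(axis=1)
    return float(np.log(np.clip(per_example, 1e-12, None)).mean())

# Toy usage with random "source model" outputs (rows sum to 1 by construction).
rng = np.random.default_rng(0)
n, n_source, n_target = 500, 8, 5
source_probs = rng.dirichlet(np.ones(n_source), size=n)
target_labels = rng.integers(0, n_target, size=n)
print("LEEP score:", leep(source_probs, target_labels, n_target))
```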