Published on Tue Mar 08 2016

Discriminative models for robust image classification

Umamahesh Srinivas

This dissertation explores the development of discriminative models for robust image classification that exploit underlying signal structure. Probabilistic graphical models are widely used in many applications to approximate high-dimensional data in a reduced complexity set-up. We propose a discriminative tree-based scheme for feature fusion.

Abstract

A variety of real-world tasks involve the classification of images into pre-determined categories. Designing image classification algorithms that exhibit robustness to acquisition noise and image distortions, particularly when the available training data are insufficient to learn accurate models, is a significant challenge. This dissertation explores the development of discriminative models for robust image classification that exploit underlying signal structure, via probabilistic graphical models and sparse signal representations. Probabilistic graphical models are widely used in many applications to approximate high-dimensional data in a reduced complexity set-up. Learning graphical structures to approximate probability distributions is an area of active research. Recent work has focused on learning graphs in a discriminative manner with the goal of minimizing classification error. In the first part of the dissertation, we develop a discriminative learning framework that exploits the complementary yet correlated information offered by multiple representations (or projections) of a given signal/image. Specifically, we propose a discriminative tree-based scheme for feature fusion by explicitly learning the conditional correlations among such multiple projections in an iterative manner. Experiments reveal the robustness of the resulting graphical model classifier to training insufficiency.
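To make the fusion idea above concrete, here is a minimal sketch of tree-structured graphical-model classification over fused feature projections. It is a simplified, generative stand-in (per-class Chow-Liu trees under a jointly Gaussian assumption), not the dissertation's discriminative, iteratively learned trees; all names and parameters are illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.stats import norm, multivariate_normal

def chow_liu_edges(X):
    """Edges of the maximum-mutual-information spanning tree, using the
    pairwise Gaussian MI  -0.5 * log(1 - rho^2)."""
    rho = np.corrcoef(X, rowvar=False)
    mi = -0.5 * np.log(np.clip(1.0 - rho ** 2, 1e-12, 1.0))
    np.fill_diagonal(mi, 0.0)
    # SciPy builds a *minimum* spanning tree and treats zeros as missing edges,
    # so pass positive weights that reverse the MI ordering.
    w = mi.max() - mi + 1e-6
    np.fill_diagonal(w, 0.0)
    mst = minimum_spanning_tree(w).tocoo()
    return list(zip(mst.row, mst.col))

class GaussianTreeClassifier:
    """One tree-structured Gaussian per class; predict by highest log-likelihood."""
    def fit(self, X, y):
        self.models_ = {c: (X[y == c].mean(0),
                            np.cov(X[y == c], rowvar=False),
                            chow_liu_edges(X[y == c]))
                        for c in np.unique(y)}
        return self

    def _loglik(self, x, mean, cov, edges):
        # Tree factorization: prod_i p(x_i) * prod_(i,j) p(x_i,x_j)/(p(x_i)p(x_j))
        ll = sum(norm(mean[i], np.sqrt(cov[i, i])).logpdf(x[i]) for i in range(x.size))
        for i, j in edges:
            pair = multivariate_normal(mean[[i, j]], cov[np.ix_([i, j], [i, j])])
            ll += (pair.logpdf(x[[i, j]])
                   - norm(mean[i], np.sqrt(cov[i, i])).logpdf(x[i])
                   - norm(mean[j], np.sqrt(cov[j, j])).logpdf(x[j]))
        return ll

    def predict(self, X):
        classes = list(self.models_)
        scores = [[self._loglik(x, *self.models_[c]) for c in classes] for x in X]
        return np.array(classes)[np.array(scores).argmax(1)]

# Feature fusion in this simplified setting: concatenate the multiple projections
# of each image (e.g., different transform-domain features) into one vector
# before calling fit().
```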

Fri Dec 21 2012
Machine Learning
Optimal classification in sparse Gaussian graphic model
We find that when useful features are rare and weak, the limiting behavior of HCT is essentially just as good as the ideal threshold. We propose a two-stage classification method where we first select features by the method of IT, and then use the retained features and Fisher's LDA for classification.
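The two-stage recipe (threshold-based feature selection, then Fisher's LDA) can be sketched as follows for a binary problem. The simple two-sample t-statistic threshold and the t_min parameter below are stand-ins for the paper's selection rules, not a reproduction of HCT or IT.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def two_stage_predict(X_train, y_train, X_test, t_min=2.0):
    """Binary labels assumed (0/1); t_min is an illustrative tuning parameter."""
    # Stage 1: keep features whose two-sample t-statistic clears a threshold
    t_stat, _ = ttest_ind(X_train[y_train == 0], X_train[y_train == 1], axis=0)
    keep = np.abs(t_stat) > t_min
    # Stage 2: Fisher's LDA restricted to the retained features
    lda = LinearDiscriminantAnalysis().fit(X_train[:, keep], y_train)
    return lda.predict(X_test[:, keep])
```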
Tue Aug 26 2014
Computer Vision
Sparse Graph-based Transduction for Image Classification
We propose a Sparse Graph-based Classifier (SGC) for image classification. SGC inherits the merits of both graph transduction (GT) and sparse representation (SR). Compared to SR, SGC improves the robustness and the discriminating power of GT.
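A rough sketch of the general idea, assuming an SGC-style pipeline means: build graph weights by sparsely coding each sample over the remaining samples, then propagate labels over that graph. The alpha parameter and the clamped propagation loop are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_graph(X, alpha=0.05):
    """Edge weights from sparsely coding each sample over all the others."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
        coder.fit(X[others].T, X[i])          # x_i ~ sum_j w_ij x_j, w sparse
        W[i, others] = coder.coef_
    return np.maximum(W, W.T)                 # symmetrize

def propagate(W, y, labeled, n_classes, n_iter=50):
    """Clamped label propagation; y is ignored where labeled is False."""
    P = W / np.maximum(W.sum(1, keepdims=True), 1e-12)
    F = np.zeros((len(y), n_classes))
    F[labeled, y[labeled]] = 1.0
    for _ in range(n_iter):
        F = P @ F
        F[labeled] = 0.0
        F[labeled, y[labeled]] = 1.0          # re-clamp the known labels
    return F.argmax(1)
```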
Tue Jun 04 2019
Machine Learning
Sparse Representation Classification via Screening for Graphs
The sparse representation classifier (SRC) is shown to work well for image recognition problems that satisfy a subspace assumption. The proposed screening-based algorithm achieves comparable classification performance while being significantly faster.
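For context, the baseline SRC decision rule (without the paper's screening step, which prunes the dictionary before coding) fits in a few lines: code the test sample over the training samples and pick the class whose coefficients reconstruct it best. The alpha value is an arbitrary illustrative setting.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(X_train, y_train, x_test, alpha=0.01):
    D = X_train.T                                  # columns = training samples
    coef = Lasso(alpha=alpha, max_iter=5000).fit(D, x_test).coef_
    classes = np.unique(y_train)
    # Residual of reconstructing x_test from each class's coefficients alone
    residuals = [np.linalg.norm(x_test - D @ np.where(y_train == c, coef, 0.0))
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```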
Tue Jan 15 2013
Machine Learning
Efficient Learning of Domain-invariant Image Representations
We present an algorithm that learns representations which explicitly compensate for domain mismatch. We form a linear transformation that maps features from the test domain to the training domain. We optimize both the transformation and classifier parameters jointly.
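A toy alternating sketch of that idea, assuming both domains share the same feature dimension: map target (test-domain) features into the source (training) domain with a linear transform W and refit a classifier on the pooled data. The paper optimizes the transform and classifier jointly; the logistic-regression classifier and the class-mean regression target below are simplifications for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

def adapt(Xs, ys, Xt, yt, n_iter=5):
    """Xs, ys: labeled source data; Xt, yt: a few labeled target samples."""
    W = np.eye(Xt.shape[1])                        # start from the identity map
    for _ in range(n_iter):
        # (a) refit the classifier on source data plus mapped target data
        clf = LogisticRegression(max_iter=1000).fit(
            np.vstack([Xs, Xt @ W]), np.concatenate([ys, yt]))
        # (b) refit W so each mapped target sample lands near its source class mean
        targets = np.stack([Xs[ys == c].mean(0) for c in yt])
        W = Ridge(alpha=1.0, fit_intercept=False).fit(Xt, targets).coef_.T
    return W, clf
```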
Thu Oct 20 2011
Machine Learning
Learning Hierarchical and Topographic Dictionaries with Structured Sparsity
Dictionary learning has proven effective for various signal processing tasks. We consider structured sparsity regularization to learn dictionaries whose atoms are organized in a pre-specified structure, such as a hierarchy or a topographic map.
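For reference, plain (unstructured) dictionary learning looks as follows with scikit-learn; the structured-sparsity approach described above replaces the l1 penalty in the sparse-coding step with a hierarchical or topographic group penalty (solvers for such penalties are available in, e.g., the SPAMS toolbox). The data and hyperparameters below are placeholders.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

X = np.random.randn(500, 64)                 # stand-in for 8x8 patches, flattened
dl = MiniBatchDictionaryLearning(n_components=100, alpha=1.0, batch_size=32,
                                 transform_algorithm='lasso_lars')
codes = dl.fit(X).transform(X)               # sparse codes under a plain l1 penalty
D = dl.components_                           # learned dictionary (atoms as rows)
```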
Wed Nov 12 2014
Computer Vision
Sparse Modeling for Image and Vision Processing
In statistics and machine learning, the sparsity principle is used to perform model selection. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. The corresponding tools have been widely adopted by several scientific communities.
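The sparse-coding step itself, given a dictionary D, amounts to approximating a signal with a handful of atoms. The short example below uses orthogonal matching pursuit with at most 5 nonzero coefficients; the random dictionary and signal are placeholders for illustration.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

D = np.random.randn(100, 64)                           # 100 atoms of dimension 64
D /= np.linalg.norm(D, axis=1, keepdims=True)          # unit-norm atoms for OMP
x = np.random.randn(1, 64)                             # one signal to encode
coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                    transform_n_nonzero_coefs=5)
alpha = coder.transform(x)                             # at most 5 nonzero coefficients
x_hat = alpha @ D                                      # reconstruction from few atoms
```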