Published on Sun Sep 27 2015

Discriminative Learning of the Prototype Set for Nearest Neighbor Classification

Shin Ando

Abstract

The nearest neighbor rule is a classic yet essential classification model, particularly in problems where the supervising information is given by pairwise dissimilarities and embedding functions are not easily obtained. Prototype selection provides a means of generalizing the nearest neighbor model and improving its efficiency, but many existing methods assume and rely on analyses of the input vector space. In this paper, we explore a dissimilarity-based, parametrized model of the nearest neighbor rule. In the proposed model, the selection of the nearest prototypes is influenced by the parameters of the respective prototypes. This provides a formulation for minimizing the violation of the extended nearest neighbor rule over the training set in a tractable form that can exploit numerical techniques. We show that the minimization problem reduces to learning under a large-margin principle and demonstrate its advantage through empirical comparisons with other prototype selection methods.
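To make the setting concrete, here is a minimal sketch of the plain dissimilarity-based nearest-prototype rule that the paper builds on: given only a pairwise dissimilarity matrix (no feature vectors), each query is assigned the label of its nearest selected prototype. The function name, the toy matrix, and the fixed prototype indices are illustrative assumptions; the paper's contribution is learning which prototypes to select, which this baseline does not do.

```python
import numpy as np

def nearest_prototype_predict(dissim, prototype_idx, prototype_labels):
    """Classify each query by the label of its nearest prototype.

    dissim           : (n_queries, n_train) pairwise dissimilarity matrix
    prototype_idx    : indices of the selected prototypes among the training points
    prototype_labels : label of each selected prototype
    """
    d = dissim[:, prototype_idx]       # dissimilarities to the prototypes only
    nearest = np.argmin(d, axis=1)     # closest prototype for each query
    return np.asarray(prototype_labels)[nearest]

# Toy example: 3 queries, 4 training points; prototypes are training points 0 and 3.
D = np.array([[0.2, 0.9, 0.8, 0.7],
              [0.9, 0.1, 0.3, 0.2],
              [0.6, 0.5, 0.4, 0.1]])
pred = nearest_prototype_predict(D, [0, 3], ["a", "b"])  # -> ['a', 'b', 'b']
```

Note that the rule needs only the dissimilarity matrix, which is why it applies even when no embedding into a vector space is available.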

Sun Jun 07 2020
Machine Learning
Distributionally Robust Weighted k-Nearest Neighbors
Learning a robust classifier from a few samples remains a key challenge in machine learning. We develop an algorithm, Dr.k-NN, that efficiently solves this functional optimization problem.
Mon Aug 17 2009
Machine Learning
Classification by Set Cover: The Prototype Vector Machine
The PVM selects a relatively small number of representative points which can then be used for classification. The method is compatible with any dissimilarity measure, making it amenable to situations in which the data are not embedded in an underlying feature space.
Thu Sep 13 2012
Machine Learning
Parametric Local Metric Learning for Nearest Neighbor Classification
We study the problem of learning local metrics for nearest neighbor classification. We learn local metrics as linear combinations of basis metrics defined on anchor points over different regions of the instance space. Our metric learning method has excellent performance both in terms of predictive power and scalability.
Thu Apr 24 2014
Machine Learning
Scalable Similarity Learning using Large Margin Neighborhood Embedding
Non-parametric approaches such as nearest neighbor classifiers have shown promising results. However, most existing algorithms are impractical for large-scale data sets. The effectiveness of our proposed model is validated on several data sets with scales varying from tens of thousands to one million images.
Thu Aug 16 2012
Machine Learning
Distance Metric Learning for Kernel Machines
Recent work in metric learning has significantly improved the state-of-the-art in k-nearest neighbor classification. We show that none of these algorithms generate metrics that lead to particularly satisfying improvements for SVM-RBF classification.
Tue Sep 05 2017
Machine Learning
Discriminative Similarity for Clustering and Semi-Supervised Learning
The proposed framework learns a classifier from each hypothetical labeling. It searches for the optimal labeling by minimizing the generalization error of the learned classifiers. Based on the discriminative similarity induced by the kernel classifier, we propose new clustering and semi-supervised learning methods.