Published on Wed May 01 2013

Perceptron Mistake Bounds

Mehryar Mohri, Afshin Rostamizadeh

We present a brief survey of existing mistake bounds and introduce novel bounds for the Perceptron. Our novel bounds generalize beyond standard margin-loss type bounds, allow for any convex and Lipschitz loss function, and admit a very simple proof.

Abstract

We present a brief survey of existing mistake bounds and introduce novel bounds for the Perceptron or the kernel Perceptron algorithm. Our novel bounds generalize beyond standard margin-loss type bounds, allow for any convex and Lipschitz loss function, and admit a very simple proof.
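
For context, a minimal sketch of the kernel Perceptron update that such mistake bounds analyze; the Gaussian kernel choice and all names below are illustrative assumptions, not taken from the paper.

    import numpy as np

    def gaussian_kernel(x, z, gamma=1.0):
        # Any positive-definite kernel works; the Gaussian kernel is one common choice.
        return np.exp(-gamma * np.sum((x - z) ** 2))

    def kernel_perceptron(X, y, kernel=gaussian_kernel, epochs=1):
        # Online kernel Perceptron: the hypothesis is a kernel expansion over
        # the points on which mistakes were made (dual coefficients alpha).
        n = len(y)
        alpha = np.zeros(n)
        mistakes = 0
        for _ in range(epochs):
            for t in range(n):
                score = sum(alpha[i] * y[i] * kernel(X[i], X[t])
                            for i in range(n) if alpha[i] > 0)
                if y[t] * score <= 0:  # mistake: add this point to the expansion
                    alpha[t] += 1.0
                    mistakes += 1
        return alpha, mistakes

Margin-based mistake bounds control the total number of such updates; the bounds introduced here extend that control to any convex, Lipschitz loss.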

Sun May 18 2008
Machine Learning
The Margitron: A Generalised Perceptron with Margin
We identify the classical Perceptron algorithm with margin as a member of a broader family of large margin classifiers. The Margitron, despite sharing the same update rule with the Perceptron, is shown in an incremental setting to converge in a finite number of updates.
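
A hypothetical sketch of this shared update rule, triggered by a margin threshold rather than only by outright mistakes (the fixed threshold and learning rate are illustrative; the Margitron's exact parameterization may differ):

    import numpy as np

    def margin_perceptron(X, y, threshold=1.0, lr=1.0, epochs=100):
        # Same additive update rule as the Perceptron, but applied whenever
        # the functional margin falls below a threshold, not only on mistakes.
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            updates = 0
            for x_t, y_t in zip(X, y):
                if y_t * np.dot(w, x_t) <= threshold:
                    w += lr * y_t * x_t
                    updates += 1
            if updates == 0:  # every point now clears the margin threshold
                return w
        return w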
Wed Oct 09 2019
Machine Learning
Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin
For linear classifiers, the relationship between output margin and generalization is captured in a clear and simple bound. For deep models, however, this relationship is less clear. In this work, we propose to instead analyze a new notion of margin, which we call the "all-layer margin"
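
For reference, one standard form of the clear and simple bound alluded to for linear classifiers, stated here as our own illustration rather than a quote from the paper: with probability at least 1 - δ over n i.i.d. samples with ‖x‖ ≤ R,

    \Pr\big[\, y \langle w, x \rangle \le 0 \,\big]
      \;\le\; \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{\, y_i \langle w, x_i \rangle < \gamma \,\}
      \;+\; O\!\left( \frac{R \,\|w\|}{\gamma \sqrt{n}} + \sqrt{\frac{\log(1/\delta)}{n}} \right)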
Mon Jul 06 2015
Machine Learning
A Simple Algorithm for Maximum Margin Classification, Revisited
In this note, we revisit the algorithm of Har-Peled et al. for computing a linear maximum margin classifier. Our presentation is self-contained, and the algorithm itself is slightly simpler than the original algorithm.
Thu Jun 11 2020
Machine Learning
Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization
We study the generalization performance of standard classifiers in the high-dimensional regime where the ratio α = n/d is kept finite in the limit of a high dimension d and number of samples n. We prove a formula for the generalization error achieved
Tue Apr 03 2012
Machine Learning
The Kernelized Stochastic Batch Perceptron
We present a novel approach for training kernel Support Vector Machines. We establish learning runtime guarantees for our method that are better than those of any other known kernelized SVM optimization approach. We show that our method works well in practice compared to existing alternatives.
Sat Dec 30 2017
Machine Learning
PAC-Bayesian Margin Bounds for Convolutional Neural Networks
The generalization error of deep neural networks has been analyzed through the PAC-Bayesian framework, for the case of fully connected layers. We adapt this approach to the convolutional setting.