Published on Sat Aug 28 2021

Generalized Huber Loss for Robust Learning and its Efficient Minimization for a Robust Statistics

Kaan Gokcesu, Hakan Gokcesu

Abstract

We propose a generalized formulation of the Huber loss. We show that with a suitable function of choice, specifically the log-exp transform, we can achieve a loss function which combines the desirable properties of both the absolute and the quadratic loss. We provide an algorithm to find the minimizer of such loss functions and show that finding a centralizing metric is not much harder than computing the traditional mean and median.
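To make the idea concrete, below is a minimal sketch of a log-exp-type smoothing of the absolute loss, here the log-cosh form, which behaves roughly quadratically near zero and roughly like the absolute loss for large residuals. This is an illustrative assumption, not the authors' exact generalized Huber formulation; the names log_exp_loss and centralizer, the scale parameter delta, and the plain gradient-descent minimization are all choices made for this sketch.

import numpy as np

# Illustrative sketch: a log-exp ("log-cosh"-style) smoothing of the absolute
# loss. The paper's exact generalized Huber loss and parameterization may differ.
def log_exp_loss(r, delta=1.0):
    """Smooth loss: ~quadratic for |r| << delta, ~absolute for |r| >> delta."""
    x = np.asarray(r, dtype=float) / delta
    # log((exp(x) + exp(-x)) / 2) = logcosh(x), computed stably via logaddexp
    return delta * (np.logaddexp(x, -x) - np.log(2.0))

def centralizer(samples, delta=1.0, n_iter=200, lr=0.5):
    """Point m minimizing sum_i log_exp_loss(samples[i] - m) by gradient descent.

    The objective is convex in m; the gradient of delta*logcosh((s - m)/delta)
    with respect to m is -tanh((s - m)/delta).
    """
    samples = np.asarray(samples, dtype=float)
    m = np.median(samples)  # robust starting point
    for _ in range(n_iter):
        grad = -np.sum(np.tanh((samples - m) / delta))
        m -= lr * grad / len(samples)
    return m

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 95 inliers around 0 plus 5 outliers around 50
    data = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(50.0, 1.0, 5)])
    m = centralizer(data)
    print("mean               :", data.mean())      # dragged toward the outliers
    print("median             :", np.median(data))
    print("log-exp centralizer:", m)
    print("objective at m     :", log_exp_loss(data - m).sum())

On data with a few gross outliers, the mean is pulled toward the outliers while the log-exp centralizer stays close to the bulk of the samples, which is the robustness property the abstract refers to.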

Mon Jul 18 2016
Machine Learning
Geometric Mean Metric Learning
We revisit the task of learning a Euclidean metric from data. We approach this problem from first principles and formulate it as a surprisingly simple optimization problem. Our closed-form solution is orders of magnitude faster than widely used iterative metric learning methods and consistently attains higher classification accuracy.
Tue Dec 02 2014
Machine Learning
Easy Hyperparameter Search Using Optunity
Optunity is a free software package dedicated to hyperparameter optimization. It contains various types of solvers, ranging from undirected methods to direct search, particle swarm and evolutionary optimization.
Wed Aug 28 2019
Machine Learning
Lecture Notes: Selected topics on robust statistical learning theory
These notes gather recent results on robust statistical learning theory. The goal is to stress the main principles underlying the construction and theoretical analysis of these estimators. The notes are the basis of lectures at the conference StatMathAppli 2019.
Tue Mar 26 2013
Machine Learning
A Note on k-support Norm Regularized Risk Minimization
The k-support norm has been recently introduced to perform correlated sparsity regularization. Although Argyriou et al. only reported experiments using squared loss, here we apply it to several other commonly used settings. This results in novel machine learning algorithms with interesting and familiar cases.
Mon Jun 15 2020
Machine Learning
Shape Matters: Understanding the Implicit Bias of the Noise Covariance
The noise in stochastic gradient descent (SGD) provides a crucial implicit regularization effect for training overparameterized models. The paper theoretically characterizes this phenomenon on a quadratically-parameterized model.
Tue Mar 05 2013
Artificial Intelligence
GURLS: a Least Squares Library for Supervised Learning
GURLS is a least squares, modular, easy-to-extend software library. It offers state-of-the-art training strategies for medium and large-scale learning. The library is particularly well suited for multi-output problems (multi-category/multi-label).