Published on Mon Oct 24 2016

Nonlinear Adaptive Algorithms on Rank-One Tensor Models

Felipe C. Pinheiro, Cassio G. Lopes

Abstract

This work proposes a low-complexity nonlinearity model and develops adaptive algorithms over it. The model is based on decomposable (rank-one, in tensor language) Volterra kernels. It may also be described as a product of FIR filters, which explains its low complexity. The rank-one model is also interesting because it arises from a well-posed problem in approximation theory. The paper uses this model in an estimation-theory context to develop an exact gradient-type algorithm, from which adaptive algorithms such as the least mean squares (LMS) filter and its data-reuse version, the TRUE-LMS, are derived. Stability and convergence issues are addressed. The algorithms are then tested in simulations, which show their good performance compared to other nonlinear processing algorithms in the literature.
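
Since the abstract only sketches the model, the following minimal sketch (in Python/NumPy) illustrates the idea of a rank-one Volterra model trained with an LMS-type update, assuming the output y(n) = prod_p w_p^T x(n) is a product of P FIR filter outputs. The function name, initialisation, and step size are assumptions for illustration; this is not the paper's exact TRUE-LMS recursion, which additionally reuses data across iterations.

import numpy as np

def rank_one_volterra_lms(x, d, order=2, taps=8, mu=0.01, seed=0):
    # LMS-style sketch for a rank-one (product-of-FIR-filters) model:
    # y(n) = prod_p (w_p^T x_n), a P-th order Volterra model whose kernel
    # is the outer product of P FIR filters. Names and parameter choices
    # here are illustrative assumptions, not the paper's exact algorithm.
    rng = np.random.default_rng(seed)
    # Small random start avoids the all-zero stationary point of the product model.
    w = 0.1 * rng.standard_normal((order, taps))
    y_hat = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        x_n = x[n - taps + 1:n + 1][::-1]  # tap-delay regressor [x(n), ..., x(n-taps+1)]
        branch = w @ x_n                   # output of each FIR branch
        y_hat[n] = np.prod(branch)         # product of the FIR filter outputs
        e = d[n] - y_hat[n]                # instantaneous error
        for p in range(order):
            # dy/dw_p = (product of the other branches) * x_n
            others = np.prod(np.delete(branch, p))
            w[p] += mu * e * others * x_n  # stochastic-gradient (LMS-type) step
    return w, y_hat

As a quick sanity check, one can draw x from rng.standard_normal, set d to the output of a fixed product of two short FIR filters plus small noise, and observe the error decay for a sufficiently small mu.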

Mon Sep 07 2015
Machine Learning
Matrix Factorisation with Linear Filters
This text investigates relations between two well-known families of algorithms. Using a probabilistic model, we derive a matrix factorisation algorithm as a recursive linear filter. We also show that it is possible to interpret our algorithm as a nontrivial stochastic gradient algorithm.
Tue Mar 01 2016
Machine Learning
A Nonlinear Adaptive Filter Based on the Model of Simple Multilinear Functionals
Nonlinear adaptive filtering allows for the modeling of additional aspects of a general system. It usually relies on highly complex algorithms, such as those based on the Volterra series. The proposed algorithm is tested in a system identification setup.
Tue Nov 14 2017
Machine Learning
Statistically Optimal and Computationally Efficient Low Rank Tensor Completion from Noisy Entries
We develop methods for estimating a low rank tensor from noisy observations on a subset of its entries. Our method is fairly easy to implement and numerical experiments are presented to demonstrate the practical merits of our estimator.
Tue Jul 21 2020
Machine Learning
Sparse Nonnegative Tensor Factorization and Completion with Noisy Observations
Because of sparsity and nonnegativity, the underlying tensor is decomposed into one sparse nonnegative tensor and one nonnegative tensor. We show that error bounds for the estimator of the proposed model can be established under general noise observations.
Wed Mar 18 2015
Artificial Intelligence
Interpolating Convex and Non-Convex Tensor Decompositions via the Subspace Norm
We consider the problem of recovering a low-rank tensor from its noisy observation. Previous work has shown a recovery guarantee in terms of the signal-to-noise ratio.
Thu Aug 03 2017
Computer Vision
Beyond Low Rank: A Data-Adaptive Tensor Completion Method
Low rank tensor representation underpins much of recent progress in tensor completion. In real applications, this approach is confronted with two challenging problems. We develop a data-adaptive tensor completion model which explicitly represents both the low-rank and non-low-rank structures.