Published on Tue Apr 30 2013

Uniqueness of Tensor Decompositions with Applications to Polynomial Identifiability

Aditya Bhaskara, Moses Charikar, Aravindan Vijayaraghavan

We prove that given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error.

Abstract

We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: we prove that given a tensor whose decomposition satisfies a robust form of Kruskal's rank condition, it is possible to approximately recover the decomposition if the tensor is known up to a sufficiently small (inverse polynomial) error. Kruskal's theorem has found many applications in proving the identifiability of parameters for various latent variable models and mixture models such as Hidden Markov models, topic models etc. Our robust version immediately implies identifiability using only polynomially many samples in many of these settings. This polynomial identifiability is an essential first step towards efficient learning algorithms for these models. Recently, algorithms based on tensor decompositions have been used to estimate the parameters of various hidden variable models efficiently in special cases as long as they satisfy certain "non-degeneracy" properties. Our methods give a way to go beyond this non-degeneracy barrier, and establish polynomial identifiability of the parameters under much milder conditions. Given the importance of Kruskal's theorem in the tensor literature, we expect that this robust version will have several applications beyond the settings we explore in this work.
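For context, the exact (noiseless) condition that the paper makes robust is Kruskal's classical uniqueness theorem. The statement below uses standard notation (factor matrices A, B, C and their Kruskal ranks k_A, k_B, k_C) that is assumed for illustration rather than taken from the abstract:

```latex
% Kruskal's theorem (1977), exact setting.
% T = \sum_{i=1}^{R} a_i \otimes b_i \otimes c_i, with factor matrices
% A = [a_1 \dots a_R], B, C, and k_A the Kruskal rank of A: the largest
% k such that every set of k columns of A is linearly independent.
\[
  k_A + k_B + k_C \;\ge\; 2R + 2
  \quad\Longrightarrow\quad
  \text{the rank-}R\text{ decomposition of } T \text{ is unique up to permutation and scaling.}
\]
```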

Mon Oct 29 2012
Machine Learning
Tensor decompositions for learning latent variable models
This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models. It exploits a certain tensor structure in their low-order observable moments. The decomposition of these specially structured tensors can be efficiently obtained.
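As a rough illustration of the moment structure this line of work exploits (a synthetic sketch, not code from the paper; the weights w and means mu below are made-up placeholders), the third-order moment of many simple mixture models is a low-rank tensor:

```python
import numpy as np

# Sketch: for many latent variable models, the third-order moment tensor
# has the structured form M3 = sum_i w_i * mu_i (x) mu_i (x) mu_i,
# where w_i are mixing weights and mu_i are component means.
rng = np.random.default_rng(0)
d, k = 10, 3                      # ambient dimension, number of components
w = rng.dirichlet(np.ones(k))     # mixing weights (sum to 1)
mu = rng.normal(size=(k, d))      # component mean vectors (rows)

# Build the structured rank-k moment tensor.
M3 = np.einsum("i,ia,ib,ic->abc", w, mu, mu, mu)

print(M3.shape)   # (10, 10, 10): a low-rank tensor recoverable by decomposition
```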
Thu Sep 12 2013
Machine Learning
Efficient Orthogonal Tensor Decomposition, with an Application to Latent Variable Model Learning
Decomposing tensors into orthogonal factors is a well-known task in statistics, machine learning, and signal processing. We show that it is a non-trivial assumption for a tensor to have such an orthogonal decomposition.
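For concreteness, when a tensor does admit an orthogonal decomposition, its components can be recovered by tensor power iteration with deflation. The sketch below uses a synthetic tensor (the weights lam and orthonormal factors V are assumptions for illustration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 8, 4
# Synthetic orthogonally decomposable tensor T = sum_i lam_i * v_i^(x3).
V, _ = np.linalg.qr(rng.normal(size=(d, k)))   # orthonormal columns v_1..v_k
lam = rng.uniform(1.0, 2.0, size=k)
T = np.einsum("i,ai,bi,ci->abc", lam, V, V, V)

def power_iteration(T, iters=100):
    """Recover one (eigenvalue, eigenvector) pair via the tensor power map."""
    u = rng.normal(size=T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        u = np.einsum("abc,b,c->a", T, u, u)   # u <- T(I, u, u)
        u /= np.linalg.norm(u)
    return np.einsum("abc,a,b,c->", T, u, u, u), u

# Deflation: extract components one at a time.
for _ in range(k):
    val, vec = power_iteration(T)
    T = T - val * np.einsum("a,b,c->abc", vec, vec, vec)
    print(round(val, 3))   # should match one of the lam_i (up to ordering)
```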
Fri Jul 17 2020
Machine Learning
Perturbation Bounds for Orthogonally Decomposable Tensors and Their Applications in High Dimensional Data Analysis
We develop deterministic perturbation bounds for singular values and vectors of orthogonally decomposable tensors. Most notably, they indicate that for higher-order tensors, perturbation affects each singular value/vector in isolation.
Mon Dec 12 2016
Machine Learning
Tensor Decompositions via Two-Mode Higher-Order SVD (HOSVD)
Tensor decompositions have rich applications in statistics and machine learning. This tensor decomposition method provably handles a greater level of noise than previous methods and achieves high estimation accuracy.
Tue Dec 09 2014
Machine Learning
Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity
We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms. We use the technique to give a polynomial-time algorithm for standard ICA with sample complexity nearly linear in the dimension.
Fri May 24 2019
Machine Learning
A general method for regularizing tensor decomposition methods via pseudo-data
Tensor decomposition methods allow us to learn the parameters of latent variable models through decomposition of low-order moments of data. A significant limitation of these algorithms is that there exists no general method to regularize them. We present a general method of regularizing tensor decomposition.