Published on Thu Apr 16 2020

Spectral Learning on Matrices and Tensors

Majid Janzamin, Rong Ge, Jean Kossaifi, Anima Anandkumar

Abstract

Spectral methods have been a mainstay in several domains such as machine learning and scientific computing. They involve finding a certain kind of spectral decomposition to obtain basis functions that can capture important structures for the problem at hand. The most common spectral method is principal component analysis (PCA). It utilizes the top eigenvectors of the data covariance matrix, e.g., to carry out dimensionality reduction. This data pre-processing step is often effective in separating signal from noise. PCA and other spectral techniques applied to matrices have several limitations. By restricting to pairwise moments, they effectively make a Gaussian approximation of the underlying data, and they fail on data with hidden variables, which lead to non-Gaussianity. However, in most data sets, there are latent effects that cannot be directly observed, e.g., topics in a document corpus, or underlying causes of a disease. By extending spectral decomposition methods to higher-order moments, we demonstrate the ability to learn a wide range of latent variable models efficiently. Higher-order moments can be represented by tensors, and intuitively, they can encode more information than pairwise moment matrices alone. More crucially, tensor decomposition can pick up latent effects that are missed by matrix methods, e.g., uniquely identifying non-orthogonal components. Exploiting these aspects turns out to be fruitful for provable unsupervised learning of a wide range of latent variable models. We also outline the computational techniques for designing efficient tensor decomposition methods. We introduce TensorLy, which has a simple Python interface for expressing tensor operations. It has a flexible backend system supporting NumPy, PyTorch, TensorFlow, and MXNet, among others, allowing multi-GPU and CPU operations and seamless integration with deep learning functionalities.
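To make the contrast concrete, here is a minimal sketch of both regimes: PCA via the top eigenvectors of the data covariance matrix, followed by a CP decomposition of a third-order tensor through TensorLy's parafac. The data, shapes, and rank below are illustrative assumptions, not values from the monograph.

```python
# Minimal sketch: matrix spectral method (PCA) vs. tensor spectral
# method (CP decomposition). Data, shapes, and rank are illustrative.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# --- PCA: top eigenvectors of the data covariance matrix ---
X = np.random.randn(1000, 20)           # 1000 samples, 20 features (assumed)
X -= X.mean(axis=0)                     # center the data
cov = X.T @ X / X.shape[0]              # pairwise (second-order) moments
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
top_k = eigvecs[:, -5:]                 # top-5 principal directions
X_reduced = X @ top_k                   # dimensionality reduction

# --- Tensor method: CP decomposition of a third-order tensor ---
tl.set_backend('numpy')                 # 'pytorch', 'tensorflow', ... also work
T = tl.tensor(np.random.randn(20, 20, 20))
weights, factors = parafac(T, rank=5)   # rank-5 CP (CANDECOMP/PARAFAC)
# factors holds three 20 x 5 factor matrices; unlike PCA directions,
# CP components need not be orthogonal to be identifiable.
```

Because TensorLy dispatches tensor operations through its backend system, the same parafac call runs unchanged after switching the backend, e.g. tl.set_backend('pytorch').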

Thu Jan 29 2015
Machine Learning
Tensor Factorization via Matrix Factorization
Tensor factorization arises in many machine learning applications. We prove that a small number of random projections essentially preserves the spectral information in the tensor. Experimentally, our method outperforms existing tensor factorization methods on both simulated and real data.
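As a rough sketch of the projection idea (not the paper's exact algorithm), contracting a third-order tensor with a random vector yields an ordinary matrix whose eigendecomposition exposes the tensor's components; the synthetic tensor, its rank, and all shapes below are assumptions for illustration.

```python
# Sketch: reduce tensor factorization to matrix factorization by
# projecting along a random vector. All sizes here are illustrative.
import numpy as np

def project_tensor(T, w):
    """Contract the last mode of a 3-way tensor with w:
    M[i, j] = sum_k T[i, j, k] * w[k]."""
    return np.einsum('ijk,k->ij', T, w)

d, r = 10, 3
A, _ = np.linalg.qr(np.random.randn(d, r))   # orthonormal components a_1..a_r
T = np.einsum('ir,jr,kr->ijk', A, A, A)      # T = sum_r a_r (x) a_r (x) a_r

w = np.random.randn(d)                       # random projection direction
M = project_tensor(T, w)                     # M = sum_r <a_r, w> a_r a_r^T
eigvals, eigvecs = np.linalg.eigh(M)         # plain matrix eigendecomposition
# Eigenvectors with eigenvalues far from zero recover the a_r up to sign;
# with a random w, the projections <a_r, w> are distinct with probability one.
```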
Fri Jun 10 2016
Machine Learning
Discovery of Latent Factors in High-dimensional Data Using Tensor Methods
Unsupervised learning aims at the discovery of hidden structure that drives observations in the real world. Latent variable models are versatile and have applications in almost every domain. Training latent variable models is challenging due to the non-convexity of the likelihood objective. An alternative method is based on the spectral decomposition of low-order moment tensors.
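For intuition, the low-order moments in question can be estimated directly from samples; below is a minimal sketch with assumed synthetic data.

```python
# Illustrative empirical second- and third-order moments of centered
# data; the data matrix and its sizes are assumptions for the sketch.
import numpy as np

n, d = 500, 8
X = np.random.randn(n, d)
X -= X.mean(axis=0)

M2 = np.einsum('ni,nj->ij', X, X) / n         # covariance (pairwise moments)
M3 = np.einsum('ni,nj,nk->ijk', X, X, X) / n  # third-order moment tensor
# Under a latent variable model, decomposing M2 and M3 (after suitable
# whitening) recovers the model parameters via the method of moments.
```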
Wed Jul 06 2016
Machine Learning
Tensor Decomposition for Signal Processing and Machine Learning
Tensors have a rich history, stretching over almost a century and touching upon numerous disciplines, but they have only recently become ubiquitous in signal and data analytics. This article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors.
Thu Sep 12 2013
Machine Learning
Efficient Orthogonal Tensor Decomposition, with an Application to Latent Variable Model Learning
Decomposing tensors into orthogonal factors is a well-known task in statistics, machine learning, and signal processing. We show that having such an orthogonal decomposition is a non-trivial assumption on a tensor.
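A standard routine in this setting is the tensor power iteration; below is a minimal sketch (not the paper's specific algorithm) for extracting one component of a symmetric, orthogonally decomposable 3-way tensor. The tensor construction and iteration count are illustrative assumptions.

```python
# Sketch of tensor power iteration for an orthogonally decomposable
# symmetric 3-way tensor. Sizes and iteration count are illustrative.
import numpy as np

def tensor_apply(T, v):
    """Map v to T(I, v, v): u[i] = sum_{j,k} T[i, j, k] v[j] v[k]."""
    return np.einsum('ijk,j,k->i', T, v, v)

def power_iteration(T, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = tensor_apply(T, v)       # tensor analogue of the matrix power step
        v /= np.linalg.norm(v)
    lam = tensor_apply(T, v) @ v     # eigenvalue via T(v, v, v)
    return lam, v

# Example on a synthetic orthogonal tensor T = sum_r a_r (x) a_r (x) a_r.
A, _ = np.linalg.qr(np.random.randn(10, 3))
T = np.einsum('ir,jr,kr->ijk', A, A, A)
lam, v = power_iteration(T)          # v aligns with one a_r up to sign
```

Remaining components are then typically obtained by deflation: subtract lam times the rank-1 tensor v (x) v (x) v and iterate again.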
Wed Aug 30 2017
Machine Learning
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions.
Fri May 24 2019
Machine Learning
A general method for regularizing tensor decomposition methods via pseudo-data
Tensor decomposition methods allow us to learn the parameters of latent variable models through the decomposition of low-order moments of the data. A significant limitation of these algorithms is that there exists no general method to regularize them. We present a general method for regularizing tensor decompositions via pseudo-data.