Published on Thu May 16 2019

Random Sampling for Distributed Coded Matrix Multiplication

Wei-Ting Chang, Ravi Tandon

Abstract

Matrix multiplication is a fundamental building block for large-scale computations arising in various applications, including machine learning. There has been significant recent interest in using coding to speed up distributed matrix multiplication through schemes that are robust to stragglers (i.e., machines that perform their computations more slowly). In many scenarios, exact computation is not necessary: approximate matrix multiplication, i.e., allowing for a tolerable error, is sufficient. Such approximate schemes make use of randomization techniques to speed up the computation process. In this paper, we initiate the study of approximate coded matrix multiplication and investigate the joint synergies offered by randomization and coding. Specifically, we propose two coded randomized sampling schemes that use (a) codes to achieve a desired recovery threshold and (b) random sampling to obtain an approximation of the matrix product. Tradeoffs between the recovery threshold and the approximation error obtained through random sampling are investigated for a class of coded matrix multiplication schemes.
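To illustrate the random sampling component described in the abstract, the sketch below shows the standard norm-proportional column/row sampling estimator for approximate matrix multiplication in NumPy. The function name and parameters are illustrative, and the coding layer used for straggler tolerance is omitted, so this is not the paper's actual scheme.

import numpy as np

def sampled_matmul(A, B, c, rng=np.random.default_rng(0)):
    # Approximate A @ B from c sampled column/row pairs.
    # Column k of A and row k of B are drawn with probability proportional
    # to ||A[:, k]|| * ||B[k, :]||, then rescaled so the estimator is unbiased.
    n = A.shape[1]
    weights = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = weights / weights.sum()
    idx = rng.choice(n, size=c, replace=True, p=p)
    # Each rescaled rank-one term has expectation (A @ B) / c.
    return sum(np.outer(A[:, k], B[k, :]) / (c * p[k]) for k in idx)

A = np.random.randn(100, 500)
B = np.random.randn(500, 80)
approx = sampled_matmul(A, B, c=200)
rel_err = np.linalg.norm(approx - A @ B) / np.linalg.norm(A @ B)

Increasing the number of sampled pairs c lowers the approximation error; this is the quantity that is traded off against the recovery threshold in the coded schemes the abstract describes.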

Tue Dec 08 2015
Machine Learning
Speeding Up Distributed Machine Learning Using Codes
Codes are widely used in many engineering applications to offer robustness against noise. In large-scale systems there are several types of noise that can adversely affect the performance of distributed machine learning algorithms. There has been little interaction cutting across codes, machine learning, and distributed systems.
Fri Nov 23 2012
Machine Learning
Analysis of a randomized approximation scheme for matrix multiplication
This note gives a simple analysis of a randomized approximation scheme for matrix multiplication proposed by Sarlos (2006) based on a random rotation. The result follows from a matrix version of Bernstein's inequality and a tail inequality for quadratic forms in Subgaussian random vectors.
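A minimal sketch of the rotate-then-sample idea attributed to Sarlos above, assuming a dense random orthogonal rotation for clarity (a fast Hadamard-based transform would be used in practice); the function and parameter names are illustrative.

import numpy as np

def rotate_and_sample_matmul(A, B, c, rng=np.random.default_rng(0)):
    # Estimate A.T @ B by uniform row sampling after a shared random rotation.
    # The rotation spreads out the row norms, so uniform sampling behaves
    # like importance sampling.
    n = A.shape[0]
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random rotation
    RA, RB = Q @ A, Q @ B
    idx = rng.choice(n, size=c, replace=True)
    # Rescaling by n/c keeps the estimator unbiased.
    return (n / c) * RA[idx].T @ RB[idx]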
Tue Feb 10 2015
Machine Learning
Implementing Randomized Matrix Algorithms in Parallel and Distributed Environments
Randomized algorithms for matrix problems have received a great deal of attention in recent years. Theoretical results demonstrate that, in near input-sparsity time and with only a few passes through the data, one can obtain very strong relative-error approximate solutions.
Sun Aug 06 2017
Machine Learning
A Bootstrap Method for Error Estimation in Randomized Matrix Multiplication
In recent years, randomized methods for numerical linear algebra have received growing interest as a general approach to large-scale problems. However, the exact numerical relationship between cost and accuracy is typically unknown. The proposed bootstrap method for error estimation does not substantially increase the cost of standard sketching methods.
Tue Oct 14 2014
Machine Learning
Tighter Low-rank Approximation via Sampling the Leveraged Element
In this work, we propose a new randomized algorithm for computing a low-rank approximation to a given matrix. The method first involves a specific biased sampling, with an element being chosen based on the leverage scores of its row and column. It then involves weighted alternating minimization over the factored form of the intended low-rank matrix.
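A rough sketch of the biased element sampling step, assuming exact leverage scores computed from an SVD (a practical implementation would approximate them); the weighted alternating minimization step is not shown, and the function name and parameters are illustrative.

import numpy as np

def leveraged_element_sample(M, m, rank, rng=np.random.default_rng(0)):
    # Draw m entries of M, with entry (i, j) chosen with probability
    # proportional to row_leverage(i) + column_leverage(j).
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    row_lev = np.sum(U[:, :rank] ** 2, axis=1)    # row leverage scores
    col_lev = np.sum(Vt[:rank, :] ** 2, axis=0)   # column leverage scores
    probs = row_lev[:, None] + col_lev[None, :]
    probs /= probs.sum()
    flat = rng.choice(M.size, size=m, replace=False, p=probs.ravel())
    rows, cols = np.unravel_index(flat, M.shape)
    return rows, cols, M[rows, cols]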
Tue Nov 27 2018
Machine Learning
A Unified Coded Deep Neural Network Training Strategy Based on Generalized PolyDot Codes for Matrix Multiplication
Generalized PolyDot codes bridge between Polynomial codes and MatDot codes. The technique uses "garbage alignment," i.e., aligning computations in coded computing that are not a part of the desired output. The problem of DNN training under soft-errors motivates an interesting probabilistic error model.
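For context, the sketch below shows the basic polynomial-coding idea that PolyDot-type constructions generalize: a toy Polynomial-code example where block counts, evaluation points, and the decoding step are chosen for illustration. This is not the Generalized PolyDot scheme itself.

import numpy as np

def polynomial_coded_matmul(A, B, m=2, n=2, workers=6):
    # Encode row blocks of A as A(x) = sum_i A_i x^i and column blocks of B
    # as B(x) = sum_j B_j x^(j*m); worker k computes A(x_k) @ B(x_k).
    A_blocks = np.split(A, m, axis=0)
    B_blocks = np.split(B, n, axis=1)
    xs = np.arange(1, workers + 1, dtype=float)
    results = [sum(Ai * x**i for i, Ai in enumerate(A_blocks))
               @ sum(Bj * x**(j * m) for j, Bj in enumerate(B_blocks))
               for x in xs]
    # Any m*n responses determine the degree-(m*n - 1) matrix polynomial whose
    # coefficients are the blocks A_i @ B_j; decode from the first m*n here.
    k = m * n
    V = np.vander(xs[:k], k, increasing=True)
    stacked = np.stack(results[:k]).reshape(k, -1)
    coeffs = np.linalg.solve(V, stacked).reshape(k, *results[0].shape)
    return coeffs  # coeffs[i + j*m] equals A_blocks[i] @ B_blocks[j]

With m = n = 2 and six workers, any four returned evaluations suffice to decode, so up to two stragglers can be tolerated.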