Published on Mon Jun 11 2018

Adaptive Denoising of Signals with Local Shift-Invariant Structure

Zaid Harchaoui, Anatoli Juditsky, Arkadi Nemirovski, Dmitrii Ostrovskii

We discuss the problem of adaptive discrete-time signal denoising in the situation where the signal to be recovered admits a "linear oracle" -- an unknown linear estimate that takes the form of convolution of observations with a time-invariant filter. We show that such estimators possess better statistical properties than those based on ℓ∞-fit.

Abstract

We discuss the problem of adaptive discrete-time signal denoising in the situation where the signal to be recovered admits a "linear oracle" -- an unknown linear estimate that takes the form of convolution of observations with a time-invariant filter. It was shown by Juditsky and Nemirovski (2009) that when the ℓ2-norm of the oracle filter is small enough, such oracle can be "mimicked" by an efficiently computable adaptive estimate of the same structure with an observation-driven filter. The filter in question was obtained as a solution to the optimization problem in which the ℓ∞-norm of the Discrete Fourier Transform (DFT) of the estimation residual is minimized under a constraint on the ℓ1-norm of the filter DFT. In this paper, we discuss a new family of adaptive estimates which rely upon minimizing the ℓ2-norm of the estimation residual. We show that such estimators possess better statistical properties than those based on ℓ∞-fit; in particular, we prove oracle inequalities for their ℓ2-loss and improved bounds for ℓ2- and pointwise losses. The oracle inequalities rely on the "approximate shift-invariance" assumption stating that the signal to be recovered is close to an (unknown) shift-invariant subspace. We also study the relationship of the approximate shift-invariance assumption with the "signal simplicity" assumption introduced in Juditsky and Nemirovski (2009) and discuss the application of the proposed approach to harmonic oscillations denoising.
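As a rough illustration only (not the authors' construction), consider a heavily simplified surrogate of the ℓ2-fit idea: with circular convolution and an ℓ1 *penalty* (rather than constraint) on the filter DFT, the problem min over ψ of ||(1 - ψ) ∘ Fy||² + λ||ψ||₁ separates per frequency and admits a closed-form shrinkage of the filter's DFT coefficients. The function names and the penalized formulation are assumptions made for this sketch.

```python
import numpy as np

def l2_fit_filter_dft(y, lam):
    """Per-frequency minimizer of |a_k|^2 |1 - psi_k|^2 + lam |psi_k|,
    where a = DFT(y). This is a simplified, penalized circular-convolution
    surrogate of the l2-fit estimator, not the paper's exact construction."""
    a = np.fft.fft(y)
    mag2 = np.abs(a) ** 2
    # Closed form: psi_k = max(0, 1 - lam / (2 |a_k|^2)), real-valued shrinkage.
    return np.maximum(0.0, 1.0 - lam / (2.0 * np.maximum(mag2, 1e-12)))

def denoise(y, lam):
    """Apply the data-driven filter in the Fourier domain."""
    psi = l2_fit_filter_dft(y, lam)
    return np.real(np.fft.ifft(psi * np.fft.fft(y)))
```

On a noisy harmonic oscillation, frequency bins dominated by noise have small DFT energy and are shrunk to zero, while the signal bins are nearly preserved, so the reconstruction error drops well below that of the raw observations.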

Fri Oct 19 2012
Machine Learning
Bayesian Estimation for Continuous-Time Sparse Stochastic Processes
We consider continuous-time sparse stochastic processes from which we have only a finite number of noisy/noiseless samples. By relying on tools from the theory of splines, we derive the joint a priori distribution of the samples.
Sun Mar 21 2010
Computer Vision
On MMSE and MAP Denoising Under Sparse Representation Modeling Over a Unitary Dictionary
Bayesian denoising algorithms lead to shrinkage on the transformed coefficients. Upper bounds on the MAP and MMSE estimation errors are derived. We tie these to the error obtained by a so-called oracle estimator.
Wed Apr 24 2019
Machine Learning
Prediction bounds for higher order total variation regularized least squares
Our approach is based on combining a general oracle inequality for the least squares estimator with "interpolating vectors" to bound the "effective sparsity". This allows one to show that the penalty on higher-order differences leads to an estimator that can adapt to the number of jumps.
Tue Oct 02 2012
Machine Learning
Local stability and robustness of sparse dictionary learning in the presence of noise
Sparse coding, or sparse dictionary learning, relies on a non-convex procedure whose local minima have not yet been fully analyzed. The study considers over-complete dictionaries and noisy signals, thus extending previous work.
Sun Nov 17 2019
Machine Learning
Adaptive Rates for Total Variation Image Denoising
We study the theoretical properties of image denoising via total variation. We define the total variation in terms of the two-dimensional total discrete derivative of the image. We show that the denoised image enjoys oracle properties.
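One common discretization of the two-dimensional total variation (an assumption for illustration; the paper may use a different variant) sums the absolute first differences of the image along both axes:

```python
import numpy as np

def discrete_tv(img):
    """Anisotropic discrete total variation of a 2D array:
    sum of absolute horizontal and vertical first differences.
    One plausible discretization, used here only for illustration."""
    dh = np.abs(np.diff(img, axis=1)).sum()  # horizontal jumps
    dv = np.abs(np.diff(img, axis=0)).sum()  # vertical jumps
    return dh + dv
```

A constant image has zero total variation, while a piecewise-constant image pays once per unit-height jump along each row or column it crosses.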
Tue Jul 22 2008
Machine Learning
Universal Denoising of Discrete-time Continuous-Amplitude Signals
We consider the problem of reconstructing a discrete-time signal (sequence) corrupted by a known memoryless channel. We develop a sequence of denoisers that, although independent of the distribution of the underlying `clean' sequence, is universally optimal in the limit of large sequence length.