The sparsity of signals in a transform domain or dictionary has been
exploited in applications such as compression, denoising, and inverse problems.
More recently, data-driven adaptation of synthesis dictionaries has shown
promise compared to analytical dictionary models. However, dictionary learning
problems are typically non-convex and NP-hard, and the usual alternating
minimization approaches for these problems are often computationally expensive,
with the computations dominated by the NP-hard synthesis sparse coding step.
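For reference, a standard form of the synthesis sparse coding step (the notation here is ours rather than necessarily the paper's) is
$$\min_{x} \| y - D x \|_2^2 \quad \text{s.t.} \quad \| x \|_0 \le s,$$
where $y$ is a signal, $D$ a synthesis dictionary, and $\|x\|_0$ counts the nonzero entries of the coefficient vector $x$; the combinatorial $\ell_0$ constraint is what makes this step NP-hard in general.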
This paper exploits the ideas that drive algorithms such as K-SVD, and
investigates in detail efficient methods for aggregate sparsity penalized
dictionary learning by first approximating the data with a sum of sparse
rank-one matrices (outer products) and then using a block coordinate descent
approach to estimate the unknowns. The resulting algorithms involve efficient
closed-form solutions for each block update.
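As a concrete sketch, with illustrative notation (the penalty weight $\lambda$, the number of atoms $J$, and any magnitude constraints on the coefficients may differ from the paper's exact formulation), such an aggregate sparsity penalized problem can be written as
$$\min_{\{d_j, c_j\}} \Big\| Y - \sum_{j=1}^{J} d_j c_j^T \Big\|_F^2 + \lambda^2 \sum_{j=1}^{J} \| c_j \|_0 \quad \text{s.t.} \quad \| d_j \|_2 = 1 \ \forall\, j,$$
where $Y$ is the training data matrix, each unit-norm atom $d_j$ and sparse coefficient vector $c_j$ form one rank-one outer product, and $\|\cdot\|_0$ counts nonzero entries. Block coordinate descent then cycles over the blocks $(c_j, d_j)$, updating each in closed form while the others are held fixed.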
Furthermore, we consider the problem of dictionary-blind image reconstruction,
and propose novel and efficient algorithms for adaptive image reconstruction
using the block coordinate descent and sum-of-outer-products methodologies.
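One plausible form of such a dictionary-blind reconstruction problem, again with illustrative notation ($A$, $\nu$, and the patch-extraction operator $P(\cdot)$ are our labels), couples the measurement model with the learning term above:
$$\min_{x, \{d_j, c_j\}} \nu \| A x - y \|_2^2 + \Big\| P(x) - \sum_{j=1}^{J} d_j c_j^T \Big\|_F^2 + \lambda^2 \sum_{j=1}^{J} \| c_j \|_0 \quad \text{s.t.} \quad \| d_j \|_2 = 1 \ \forall\, j,$$
where $y$ is the measured data, $A$ the sensing operator (e.g., an undersampled measurement operator in compressed sensing), $P(x)$ a matrix of (possibly overlapping) patches extracted from the image $x$, and $\nu > 0$ a data-fidelity weight; the algorithm alternates between the dictionary and coefficient updates and an image update.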
We provide a convergence study of the algorithms for dictionary learning and
dictionary-blind image reconstruction. Our numerical experiments show the promising performance and
speed-ups provided by the proposed methods over previous schemes in sparse data
representation and compressed sensing-based image reconstruction.
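To make the closed-form block updates concrete, the following is a minimal numerical sketch for the $\ell_0$-penalized formulation sketched above; the function name, the hard-thresholding form of the coefficient update, and the reset convention for an all-zero coefficient vector are our assumptions, not necessarily the paper's exact procedure.

import numpy as np

def soup_block_update(Y, D, C, j, lam):
    # One closed-form block update for the j-th atom/coefficient pair under
    # an l0 aggregate penalty with unit-norm atoms (illustrative sketch).
    # Y: n x N data, D: n x J atoms (columns), C: N x J coefficients (columns).
    # Residual with the j-th outer product removed: E = Y - sum_{k != j} d_k c_k^T.
    E = Y - D @ C.T + np.outer(D[:, j], C[:, j])
    # Coefficient update: hard-threshold E^T d_j at level lam (from the l0 term).
    b = E.T @ D[:, j]
    c = np.where(np.abs(b) >= lam, b, 0.0)
    # Atom update: normalized E c; fall back to a canonical unit vector if E c = 0.
    h = E @ c
    nrm = np.linalg.norm(h)
    d = h / nrm if nrm > 0 else np.eye(Y.shape[0])[:, 0]
    D[:, j], C[:, j] = d, c
    return D, C

Sweeping j = 1, ..., J over all atoms gives one outer iteration of such a learning algorithm; in the dictionary-blind setting, these sweeps would alternate with an image update.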