Sparse Recovery and Dictionary Learning based on Proximal Methods in Optimization

Sadeghi, Mostafa | 2018

  1. Type of Document: Ph.D. Dissertation
  2. Language: Farsi
  3. Document No: 51763 (05)
  4. University: Sharif University of Technology
  5. Department: Electrical Engineering
  6. Advisor(s): Babaie Zadeh, Massoud
  7. Abstract:
  8. Sparse representation has attracted much attention over the past decade. The main idea is that natural signals have information contents much lower than their ambient dimensions, and as such, they can be represented using only a few basis signals (also called atoms). In other words, a natural signal of length n, which in general needs n atoms to be represented, can be written as a linear combination of s atoms, where s ≪ n. To achieve a sparser representation, i.e., a smaller s, the number of atoms is chosen much larger than n. In this way, there are more choices to represent a signal, and we can choose the sparsest possible combination. The set of atoms is called a dictionary. Here, two questions arise. One is how to choose the collection of atoms, and the other is how to select the best set of atoms to represent a given signal. The best answer to the first question is to learn a set of atoms from some training signals. This is called dictionary learning, which has gained a lot of interest. The second question is answered by a large number of existing sparse recovery (or sparse coding) algorithms. In this thesis, we study sparse representation, and especially the dictionary learning problem, in more detail. Moreover, we also focus on high-dimensional data. To this end, new algorithms for both dictionary learning and sparse coding are proposed. The main features of the proposed algorithms are their simple structures and high-quality performance. These algorithms are based on proximal algorithms, which are first-order tools for solving a broad range of optimization problems in signal processing and machine learning with low computational complexity and very good performance. This is why the proposed algorithms suit high-dimensional data. Specifically, inspired by the smoothed ℓ0 norm (SL0) algorithm, new algorithms with improved performance and more robustness to noise are proposed.
Convergence analysis is provided for these algorithms, and our extensive simulations confirm their superior performance over existing methods. In addition, new algorithms are proposed for learning low-mutual-coherence dictionaries, which, in contrast to previous methods, directly use the mutual coherence function. Besides, a new algorithm is proposed to learn a dictionary from high-dimensional data. This algorithm is based on reducing the dimension of the data and distributing them over multiple processors. Simulation results on synthetic as well as real data demonstrate the promising performance of the proposed algorithms.
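To make the abstract's setting concrete, the sketch below shows a generic proximal-gradient sparse coding step (ISTA with soft-thresholding for the ℓ1-regularized problem). This is a minimal illustration of the class of proximal algorithms the thesis builds on, not the thesis's own SL0-inspired methods; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||x||_1 (elementwise shrinkage).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(D, y, lam=0.1, n_iter=500):
    # Proximal gradient (ISTA) for: min_x 0.5 * ||y - D x||^2 + lam * ||x||_1
    # D: dictionary (n x m, columns are atoms), y: signal of length n.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the smooth data-fit term
        x = soft_threshold(x - grad / L, lam / L)  # proximal step
    return x
```

Each iteration costs only matrix-vector products, which is what makes first-order proximal methods attractive for the high-dimensional regime the abstract describes; a dictionary learning scheme would alternate such a sparse coding step with an update of the atoms in D.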
  9. Keywords:
  10. Sparse Representation ; Dictionary Learning ; Proximal Approximation ; High Dimension Data ; Sparse Data Recovery
