Dictionary Learning Algorithms and Applications

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Combinatorial Optimization, Graph, and Network Algorithms".

Deadline for manuscript submissions: closed (15 May 2019) | Viewed by 15769

Special Issue Editors


Guest Editor: Prof. Bogdan Dumitrescu
Faculty of Automatic Control and Computers, University Politehnica of Bucharest, Splaiul Independenței 313, 060042 București, Romania
Interests: numerical methods; signal processing; optimization; sparse representations

Guest Editor: Dr. Cristian Rusu
Institute for Digital Communications, University of Edinburgh, Edinburgh EH8 9YL, UK
Interests: digital communications; dictionary learning; machine learning

Special Issue Information

Dear Colleagues,

Sparse representations have found numerous applications in signal and image processing, coding, compression, classification, modeling, and other fields. Their success relies on the parsimony principle: a few members of an overcomplete basis can offer a large variety of models. The overcomplete basis, or dictionary, can be fixed or adapted to the application.

Dictionary learning is the technique of designing dictionaries from samples of the process to be modeled. In many applications, learned dictionaries offer better performance than fixed ones. There are already well-established algorithms for the standard problem, but the topic is still open for variations of the learning problem and especially for applications.
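
For orientation, here is a minimal sketch of the alternation that most DL algorithms follow: sparse-code the training signals against the current dictionary, then update the dictionary to fit those codes. It uses a basic greedy (OMP) coder and a MOD-style least-squares update; all function names and parameters are illustrative, not a reference to any particular published algorithm.

```python
import numpy as np

def omp(D, y, s):
    """Greedy orthogonal matching pursuit: select s atoms for signal y."""
    residual, support = y.astype(float), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def learn_dictionary(Y, n_atoms, s, n_iter=20, seed=0):
    """Alternate sparse coding (OMP) with a MOD-style dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
    for _ in range(n_iter):
        X = np.column_stack([omp(D, y, s) for y in Y.T])  # coding step
        D = Y @ np.linalg.pinv(X)                         # dictionary step
        D /= np.linalg.norm(D, axis=0) + 1e-12            # renormalize
    return D, X

# Toy usage: a 32-atom dictionary for 200 signals of dimension 16.
Y = np.random.default_rng(1).standard_normal((16, 200))
D, X = learn_dictionary(Y, n_atoms=32, s=3)
```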

We invite you to submit high-quality papers to this Special Issue on “Dictionary learning algorithms and applications”, with subjects covering the whole range from theory to applications. The topics include, but are not limited to:

- Dictionary learning (DL) algorithms and toolboxes

- Theoretical properties of DL algorithms

- New formulations and solutions of the DL problem

- Structured dictionary learning

- Manifold dictionary learning

- Kernel dictionary learning

- Incoherent frames

- Classification using sparse representations and DL

- DL applications in signal processing, machine learning, and generally in all engineering fields

Prof. Bogdan Dumitrescu
Dr. Cristian Rusu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Dictionary learning
  • Sparse representations
  • Frames
  • Classification
  • Signal and image processing

Published Papers (4 papers)


Research

13 pages, 279 KiB  
Article
Adaptive-Size Dictionary Learning Using Information Theoretic Criteria
by Bogdan Dumitrescu and Ciprian Doru Giurcăneanu
Algorithms 2019, 12(9), 178; https://0-doi-org.brum.beds.ac.uk/10.3390/a12090178 - 25 Aug 2019
Cited by 5 | Viewed by 3459
Abstract
Finding the size of the dictionary is an open issue in dictionary learning (DL). We propose an algorithm that adapts the size during the learning process by using Information Theoretic Criteria (ITC) specialized to the DL problem. The algorithm is built on top of Approximate K-SVD (AK-SVD) and periodically removes the less-used atoms or adds new random atoms, based on ITC evaluations for a small number of candidate sub-dictionaries. Numerical experiments on synthetic data show that our algorithm not only finds the true size with very good accuracy, but is also able to improve the representation error compared with AK-SVD run with knowledge of the true size.
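
The size-adaptation idea can be pictured with the following hedged sketch: after coding, rarely used atoms become candidates for removal and fresh random atoms candidates for addition, with an information-theoretic score choosing among the candidate sub-dictionaries. The simple coder and BIC-flavoured score below are stand-ins for the paper's AK-SVD machinery and specialized ITC, not the authors' algorithm.

```python
import numpy as np

def sparse_code(D, Y, s):
    """Crude sparse coder: keep the s largest correlations per signal and
    least-squares refit on that support (a stand-in for AK-SVD's coder)."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j, y in enumerate(Y.T):
        support = np.argsort(-np.abs(D.T @ y))[:s]
        X[support, j], *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    return X

def itc(Y, D, X, c=2.0):
    """BIC-flavoured score: log representation error plus a penalty that
    grows with dictionary size. Illustrative only; the paper derives ITC
    specialized to the DL problem."""
    n = Y.size
    err = np.linalg.norm(Y - D @ X) ** 2 / n + 1e-12
    return n * np.log(err) + c * D.shape[1] * np.log(n)

def adapt_size(D, Y, s, min_usage=2, n_new=2, rng=None):
    """One adaptation step: build shrunk and grown candidate dictionaries,
    score each with the ITC, and keep the best."""
    rng = rng or np.random.default_rng(0)
    usage = np.count_nonzero(sparse_code(D, Y, s), axis=1)  # atom popularity
    fresh = rng.standard_normal((D.shape[0], n_new))
    fresh /= np.linalg.norm(fresh, axis=0)                  # unit-norm atoms
    candidates = [D, D[:, usage >= min_usage], np.hstack([D, fresh])]
    scores = [itc(Y, C, sparse_code(C, Y, s)) for C in candidates]
    return candidates[int(np.argmin(scores))]
```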

18 pages, 751 KiB  
Article
Aiding Dictionary Learning Through Multi-Parametric Sparse Representation
by Florin Stoican and Paul Irofti
Algorithms 2019, 12(7), 131; https://0-doi-org.brum.beds.ac.uk/10.3390/a12070131 - 28 Jun 2019
Cited by 4 | Viewed by 3610
Abstract
The L1 relaxations of the sparse and cosparse representation problems which appear in the dictionary learning procedure are usually solved repeatedly (varying only the parameter vector), thus making them well-suited to a multi-parametric interpretation. The associated constrained optimization problems differ only through an affine term from one iteration to the next (i.e., the problem’s structure remains the same, while only the current vector, which is to be (co)sparsely represented, changes). We exploit this fact by providing an explicit representation of the solution that is piecewise affine with polyhedral support. Consequently, at runtime, the optimal solution (the (co)sparse representation) is obtained through a simple enumeration of the non-overlapping regions of the polyhedral partition and the application of an affine law. We show that, for a suitably large number of parameter instances, the explicit approach outperforms the classical implementation.
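
The runtime side of the explicit approach can be illustrated as follows. The offline phase (parametric programming, not shown) yields a polyhedral partition with one affine law per region; at runtime, one enumerates the regions and applies the law of the region containing the current parameter vector. The class and toy example below are our illustration, with scalar soft thresholding (the explicit solution of the 1-D L1 problem) serving as a three-region partition.

```python
import numpy as np

class Region:
    """One piece of the explicit solution: a polyhedron {y : H y <= h}
    paired with the affine law x*(y) = F y + g that is valid on it."""
    def __init__(self, H, h, F, g):
        self.H, self.h, self.F, self.g = H, h, F, g

    def contains(self, y, tol=1e-9):
        return np.all(self.H @ y <= self.h + tol)

def explicit_solve(regions, y):
    """Runtime step: enumerate the non-overlapping regions and apply the
    affine law of the one containing the parameter vector y."""
    for r in regions:
        if r.contains(y):
            return r.F @ y + r.g
    raise ValueError("y falls outside the precomputed partition")

# Toy usage: soft thresholding with lam = 1 as a 3-region piecewise
# affine map: y + lam for y <= -lam, 0 for |y| <= lam, y - lam otherwise.
lam = 1.0
regions = [
    Region(np.array([[1.0]]), np.array([-lam]),
           np.array([[1.0]]), np.array([lam])),
    Region(np.array([[1.0], [-1.0]]), np.array([lam, lam]),
           np.zeros((1, 1)), np.zeros(1)),
    Region(np.array([[-1.0]]), np.array([-lam]),
           np.array([[1.0]]), np.array([-lam])),
]
print(explicit_solve(regions, np.array([2.5])))   # -> [1.5]
```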

14 pages, 3383 KiB  
Article
Salt and Pepper Noise Removal with Multi-Class Dictionary Learning and L0 Norm Regularizations
by Di Guo, Zhangren Tu, Jiechao Wang, Min Xiao, Xiaofeng Du and Xiaobo Qu
Algorithms 2019, 12(1), 7; https://0-doi-org.brum.beds.ac.uk/10.3390/a12010007 - 25 Dec 2018
Cited by 5 | Viewed by 4672
Abstract
Images may be corrupted by salt-and-pepper impulse noise during image acquisition or transmission. Although promising denoising performance has recently been obtained with sparse representations, how to restore high-quality images remains challenging and open. In this work, image sparsity is enhanced with fast multi-class dictionary learning, and then both the sparsity regularization and robust data fidelity are formulated as minimizations of L0-L0 norms for salt-and-pepper impulse noise removal. Additionally, a numerical algorithm of modified alternating direction minimization is derived to solve the proposed denoising model. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods in preserving image details and achieves higher scores on objective evaluation criteria.
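
A simplified sketch of the two structural ingredients follows: impulse pixels are detected by their extreme values, and each patch is then sparse-coded using only its reliable pixels before the full patch is rebuilt. This stands in for, but does not reproduce, the paper's L0-L0 model and modified alternating direction minimization; the greedy coder, patch handling, and all names are our simplifications.

```python
import numpy as np

def impulse_mask(img):
    """Flag salt-and-pepper pixels by their extreme values (a common
    simplification: clean pixels that happen to be 0 or 255 are also hit)."""
    return (img == 0) | (img == 255)

def masked_patch_denoise(img, D, s=4, p=8):
    """Sparse-code each p x p patch against a patch dictionary D (assumed
    shape (p*p, n_atoms)) using only its reliable pixels, then rebuild the
    whole patch. Non-overlapping tiling, for brevity."""
    out = np.zeros(img.shape)
    bad = impulse_mask(img)
    for i in range(0, img.shape[0] - p + 1, p):
        for j in range(0, img.shape[1] - p + 1, p):
            keep = ~bad[i:i+p, j:j+p].ravel()
            y = img[i:i+p, j:j+p].astype(float).ravel()[keep]
            Dk = D[keep]                  # dictionary rows for clean pixels
            support, residual = [], y.copy()
            for _ in range(s):            # greedy coding on clean pixels only
                support.append(int(np.argmax(np.abs(Dk.T @ residual))))
                coef, *_ = np.linalg.lstsq(Dk[:, support], y, rcond=None)
                residual = y - Dk[:, support] @ coef
            out[i:i+p, j:j+p] = (D[:, support] @ coef).reshape(p, p)
    return out
```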

19 pages, 3478 KiB  
Article
Weak Fault Detection of Tapered Rolling Bearing Based on Penalty Regularization Approach
by Qing Li and Steven Y. Liang
Algorithms 2018, 11(11), 184; https://0-doi-org.brum.beds.ac.uk/10.3390/a11110184 - 8 Nov 2018
Cited by 2 | Viewed by 2944
Abstract
Aimed at the issue of estimating the fault component from a noisy observation, a novel detection approach based on augmented Huber non-convex penalty regularization (AHNPR) is proposed. The core objectives of the proposed method are that (1) it estimates the non-zero singular values (i.e., the fault component) accurately and (2) it maintains the convexity of the proposed objective cost function (OCF) by restricting the parameters of the non-convex regularization. Specifically, the AHNPR model is expressed as the L1-norm minus a generalized Huber function, which avoids the underestimation weakness of L1-norm regularization. Furthermore, the convexity of the proposed OCF is proved via the non-diagonal characteristic of the matrix BᵀB, while the non-zero singular values of the OCF are computed by the forward–backward splitting (FBS) algorithm. Finally, the proposed method is validated on a simulated signal and on vibration signals from a tapered rolling bearing. The results demonstrate that the proposed approach can identify weak fault information in the raw vibration signal under severe background noise, that the non-convex penalty regularization induces sparsity of the singular values more effectively than a typical convex penalty (e.g., the L1-norm fused lasso optimization (LFLO) method), and that the issue of underestimating sparse coefficients is alleviated.
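
Forward-backward splitting alternates a gradient step on the smooth data-fidelity term with a proximal step on the penalty. The sketch below shows that template with the convex L1 prox (soft thresholding) as a stand-in for the paper's L1-minus-Huber penalty on singular values, so it illustrates FBS itself rather than the AHNPR method.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (the convex stand-in used here)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fbs(A, y, lam, n_iter=200):
    """Forward-backward splitting for min 0.5*||Ax - y||^2 + lam*||x||_1:
    a gradient (forward) step on the smooth term, then the prox (backward)
    step on the penalty. The paper applies the same template with a
    non-convex L1-minus-Huber penalty acting on singular values."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size <= 1/L
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                 # forward (gradient) step
        x = soft_threshold(x - t * grad, t * lam)  # backward (prox) step
    return x

# Toy usage: recover a sparse vector from a random linear observation.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[5, 17, 60]] = [2.0, -1.5, 1.0]
x_hat = fbs(A, A @ x_true, lam=0.1)
```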
