
Information Theory in Signal Processing and Image Processing

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (20 October 2022) | Viewed by 21889

Special Issue Editors


Guest Editor
Department of Electrical and Computer Engineering, Mississippi State University, Starkville, MS 39762, USA
Interests: information theory; wireless networking; machine learning; optimization theory

Guest Editor
Department of Electrical and Computer Engineering, National Chiao Tung University, Hsinchu 30010, Taiwan
Interests: information theory; signal processing; compressive sensing; control theory

Guest Editor
Institute of Communications Engineering, Department of Electrical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan
Interests: information theory; signal processing; wireless communication; machine learning

Special Issue Information

Dear Colleagues,

Longstanding interplays exist between information theory, signal processing, and image processing. Recently, these interplays have intensified owing to major advances in learning and optimization methods, such as deep learning, reinforcement learning, convex and non-convex optimization, and distributed optimization and learning. Accordingly, information-theoretic learning and optimization have driven significant advances in signal and image processing, fueled by the large amounts of data acquired by advanced sensing devices and social media platforms. Although many efficient learning-based algorithms have been developed for complex problems in signal and image processing, theoretical foundations for exploiting and evaluating the fundamental performance limits of these algorithms are still lacking. This Special Issue seeks to encourage new information-theoretic topics in the areas of signal processing, image processing and recognition, inference, and machine learning. More importantly, it will promote fundamental synergies across these areas of research.

Prospective authors are invited to submit original manuscripts on topics including, but not limited to the following:

  • Optimization for signal processing and image processing
  • Learning for signal processing and image processing
  • Information-theoretic methods for learning
  • Compressive sensing
  • High-dimensional statistics
  • Estimation and inference

Prof. Dr. Chun-Hung Liu
Prof. Dr. Jwo-Yuh Wu
Prof. Dr. Peter Y. Hong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information theory
  • signal processing
  • image processing
  • compressive sensing
  • high-dimensional statistics
  • statistical learning
  • machine learning
  • optimization

Published Papers (12 papers)


Research

18 pages, 3033 KiB  
Article
CSI-Former: Pay More Attention to Pose Estimation with WiFi
by Yue Zhou, Caojie Xu, Lu Zhao, Aichun Zhu, Fangqiang Hu and Yifeng Li
Entropy 2023, 25(1), 20; https://0-doi-org.brum.beds.ac.uk/10.3390/e25010020 - 22 Dec 2022
Cited by 2 | Viewed by 2165
Abstract
Cross-modal human pose estimation has a wide range of applications. Traditional image-based pose estimation does not work well in poor light or darkness, so sensors such as LiDAR or Radio Frequency (RF) signals are now used to estimate human pose. However, the applicability of these methods is limited because they require expensive professional equipment. To address these challenges, we propose a new WiFi-based pose estimation method. Based on the Channel State Information (CSI) of WiFi, a novel architecture, CSI-former, is proposed to integrate multi-head attention into a WiFi-based pose estimation network. To evaluate the performance of CSI-former, we establish a brand-new dataset, Wi-Pose. This dataset consists of 5 GHz WiFi CSI, the corresponding images, and skeleton point annotations. The experimental results on Wi-Pose demonstrate that CSI-former significantly improves wireless pose estimation and outperforms traditional image-based pose estimation. To benefit future research on WiFi-based pose estimation, Wi-Pose has been made publicly available. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
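
The abstract centers on bringing multi-head attention into a CSI-based pose network. As an illustration only (not the authors' CSI-former architecture), a scaled dot-product multi-head attention pass over a CSI feature sequence can be sketched in NumPy, with random matrices standing in for learned projection weights:

```python
import numpy as np

def multi_head_attention(x, num_heads, rng):
    """One multi-head self-attention pass over x: (seq_len, d_model) CSI features."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Random projections stand in for learned Wq/Wk/Wv parameters.
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                  for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    out = np.empty_like(x)
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = q[:, s] @ k[:, s].T / np.sqrt(d_head)   # scaled dot products
        scores -= scores.max(axis=1, keepdims=True)      # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)          # softmax over keys
        out[:, s] = attn @ v[:, s]                       # weighted sum of values
    return out
```

All shapes and names here are illustrative assumptions; the paper's network additionally includes learned output projections and task-specific heads.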

22 pages, 17525 KiB  
Article
Nighttime Image Stitching Method Based on Guided Filtering Enhancement
by Mengying Yan, Danyang Qin, Gengxin Zhang, Ping Zheng, Jianan Bai and Lin Ma
Entropy 2022, 24(9), 1267; https://0-doi-org.brum.beds.ac.uk/10.3390/e24091267 - 09 Sep 2022
Cited by 3 | Viewed by 1467
Abstract
Image stitching matches feature points between two or more images with overlapping areas to generate a panoramic image, and plays an important role in geological survey, military reconnaissance, and other fields. Existing image stitching techniques mostly assume good lighting conditions, but the lack of feature points in weakly lit scenes, such as morning or night, degrades the stitching result, making it difficult to meet the needs of practical applications. When a nighttime image contains concentrated bright areas such as lights alongside large dark areas, image details are lost and feature point matching fails; the resulting perspective transformation matrix then cannot reflect the mapping relationship of the entire image, leading to poor stitching. Therefore, an adaptive image enhancement algorithm based on guided filtering is proposed to preprocess nighttime images, and the enhanced images are used for feature registration. The experimental results show that images preprocessed with the proposed enhancement algorithm exhibit better detail and color restoration, greatly improving image quality. Feature registration on the enhanced images yields more matched feature pairs, enabling high-accuracy image stitching. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
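
For readers unfamiliar with the guided filter the abstract builds on, here is a minimal NumPy/SciPy sketch of the classic guided filter (local linear model between a guide image and the filter input). This is a generic textbook version, not the authors' adaptive enhancement algorithm; `radius` and `eps` are illustrative defaults:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by `guide` (both float arrays in [0, 1])."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)           # local means
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)
    var_I = corr_II - mean_I ** 2                  # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p             # local covariance guide/input
    a = cov_Ip / (var_I + eps)                     # local linear coefficients: q = a*I + b
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```

Larger `eps` smooths more aggressively; small `eps` preserves edges, which is why guided filtering is a common base for detail-preserving enhancement.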

15 pages, 2899 KiB  
Article
Robust Multiple Importance Sampling with Tsallis φ-Divergences
by Mateu Sbert and László Szirmay-Kalos
Entropy 2022, 24(9), 1240; https://0-doi-org.brum.beds.ac.uk/10.3390/e24091240 - 03 Sep 2022
Cited by 2 | Viewed by 1419
Abstract
Multiple Importance Sampling (MIS) combines the probability density functions (pdf) of several sampling techniques. The combination weights depend on the proportion of samples used for the particular techniques. Weights can be found by optimization of the variance, but this approach is costly and numerically unstable. We show in this paper that MIS can be represented as a divergence problem between the integrand and the pdf, which leads to simpler computations and more robust solutions. The proposed idea is validated with 1D numerical examples and with the illumination problem of computer graphics. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
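
For context, the standard MIS combination the paper's divergence view generalizes is the balance heuristic, where each sample is weighted by the mixture of all technique pdfs. A minimal sketch (illustrative baseline only, not the authors' Tsallis φ-divergence method):

```python
import numpy as np

def mis_balance(f, pdfs, samplers, counts, rng):
    """Balance-heuristic MIS estimate of the integral of f.

    Each sample x from technique i contributes f(x) / sum_j n_j * p_j(x),
    which is unbiased for the integral of f over the combined support.
    """
    n = np.array(counts, float)
    total = 0.0
    for sample, ni in zip(samplers, counts):
        for _ in range(ni):
            x = sample(rng)
            denom = sum(nj * p(x) for nj, p in zip(n, pdfs))  # mixture density * N
            total += f(x) / denom
    return total
```

The weights implicitly depend on the sample proportions `counts`, which is exactly the quantity the paper optimizes via divergences instead of variance.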

13 pages, 479 KiB  
Article
Stochastic Model of Block Segmentation Based on Improper Quadtree and Optimal Code under the Bayes Criterion
by Yuta Nakahara and Toshiyasu Matsushima
Entropy 2022, 24(8), 1152; https://0-doi-org.brum.beds.ac.uk/10.3390/e24081152 - 19 Aug 2022
Viewed by 1199
Abstract
Most previous studies on lossless image compression have focused on improving preprocessing functions to reduce the redundancy of pixel values in real images. In contrast, we assume stochastic generative models directly on pixel values and focus on achieving the theoretical limit of the assumed models. In this study, we propose a stochastic model based on an improper quadtree. We theoretically derive the optimal code for the proposed model under the Bayes criterion. In general, Bayes-optimal codes require computation exponential in the data length. However, we propose an algorithm whose computation is polynomial, without losing optimality, by assuming a novel prior distribution. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
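
To give a concrete picture of quadtree block segmentation, here is a toy sketch of an ordinary (proper) quadtree that recursively splits a square image block when its pixel variance exceeds a threshold. The paper's "improper" quadtree and Bayes-optimal coding are more general; this illustrates only the segmentation idea, and the variance criterion is an assumption for illustration:

```python
import numpy as np

def quadtree_blocks(img, thresh, min_size=2):
    """Leaf blocks (y, x, size) of a variance-driven quadtree over a square 2^k image."""
    blocks = []
    def split(y, x, size):
        patch = img[y:y + size, x:x + size]
        if size <= min_size or patch.var() <= thresh:
            blocks.append((y, x, size))      # homogeneous enough: keep as one block
            return
        h = size // 2
        for dy in (0, h):                    # otherwise recurse into four quadrants
            for dx in (0, h):
                split(y + dy, x + dx, h)
    split(0, 0, img.shape[0])
    return blocks
```

Smooth regions end up as large blocks and detailed regions as small ones, which is what makes block segmentation useful as a compression model.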

19 pages, 1957 KiB  
Article
Waveform Design for Multi-Target Detection Based on Two-Stage Information Criterion
by Yu Xiao and Xiaoxiang Hu
Entropy 2022, 24(8), 1075; https://0-doi-org.brum.beds.ac.uk/10.3390/e24081075 - 03 Aug 2022
Viewed by 1075
Abstract
Parameter estimation accuracy and average sample number (ASN) reduction are important to improving target detection performance in sequential hypothesis tests. Multiple-input multiple-output (MIMO) radar can balance between parameter estimation accuracy and ASN reduction through waveform diversity. In this study, we propose a waveform design method based on a two-stage information criterion to improve multi-target detection performance. In the first stage, the waveform is designed to estimate the target parameters based on the criterion of single-hypothesis mutual information (MI) maximization under the constraint of the signal-to-noise ratio (SNR). In the second stage, the objective function is designed based on the criterion of MI minimization and Kullback–Leibler divergence (KLD) maximization between multi-hypothesis posterior probabilities, and the waveform is chosen from the waveform library of the first-stage parameter estimation. Furthermore, an adaptive waveform design algorithm framework for multi-target detection is proposed. The simulation results reveal that the waveform design based on the two-stage information criterion can rapidly detect the target direction. In addition, the waveform design based on the criterion of dual-hypothesis MI minimization can improve the parameter estimation performance, whereas the design based on the criterion of dual-hypothesis KLD maximization can improve the target detection performance. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
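
The KLD between hypothesis distributions used in the second-stage criterion has a closed form when the hypotheses are multivariate Gaussians, which is common in radar detection models. A minimal sketch of that closed form (a generic formula, not the paper's full waveform optimization):

```python
import numpy as np

def kld_gauss(mu0, cov0, mu1, cov1):
    """KL divergence KL(N(mu0, cov0) || N(mu1, cov1)) for multivariate Gaussians."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0)          # covariance mismatch term
                  + diff @ inv1 @ diff           # mean shift term
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))
```

Maximizing this quantity between the two hypotheses' received-signal distributions separates them more, which is the intuition behind KLD-based waveform design.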

17 pages, 6921 KiB  
Article
Structural Smoothing Low-Rank Matrix Restoration Based on Sparse Coding and Dual-Weighted Model
by Jiawei Wu and Hengyou Wang
Entropy 2022, 24(7), 946; https://0-doi-org.brum.beds.ac.uk/10.3390/e24070946 - 07 Jul 2022
Viewed by 1287
Abstract
Group sparse coding (GSC) uses the non-local similarity of images as a constraint, which fully exploits the structure and group-sparse features of images. However, it imposes sparsity only on the group coefficients, which limits its effectiveness in reconstructing real images. Low-rank regularized group sparse coding (LR-GSC) reduces this gap by imposing low-rankness on the group sparse coefficients. However, due to the use of non-local similarity, the edges and details of the images are over-smoothed, resulting in blocking artifacts. In this paper, we propose a low-rank matrix restoration model based on sparse coding and dual weighting. In addition, total variation (TV) regularization is integrated into the proposed model to maintain local structural smoothness and edge features. Finally, an optimization method based on the alternating direction method is developed to solve the proposed model. Extensive experimental results show that the proposed SDWLR-GSC algorithm outperforms state-of-the-art image restoration algorithms when images are corrupted by large, sparse noise such as salt-and-pepper noise. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)

22 pages, 11602 KiB  
Article
Low Light Image Enhancement Algorithm Based on Detail Prediction and Attention Mechanism
by Yanming Hui, Jue Wang, Ying Shi and Bo Li
Entropy 2022, 24(6), 815; https://0-doi-org.brum.beds.ac.uk/10.3390/e24060815 - 11 Jun 2022
Cited by 6 | Viewed by 2223
Abstract
Most low-light image enhancement (LLIE) algorithms focus solely on increasing image brightness and neglect the extraction of image details, losing much of the information that reflects the image's semantics, including edges, textures, and shape features, and thereby distorting the image. In this paper, the DELLIE algorithm is proposed, a deep-learning framework that focuses on the extraction and fusion of image detail features. Unlike existing methods, it first performs basic enhancement preprocessing and then obtains detail enhancement components with the proposed detail component prediction model. Next, the V channel is decomposed into a reflectance map and an illumination map by the proposed decomposition network, and the enhancement component is used to enhance the reflectance map. The S and H channels are then nonlinearly constrained using an improved adaptive loss function, and an attention mechanism is introduced into the algorithm. Finally, the three channels are fused to obtain the final result. The experimental results show that, compared with current mainstream LLIE algorithms, the proposed DELLIE algorithm extracts and recovers image detail information well while improving luminance, with PSNR, SSIM, and NIQE improved by 1.85%, 4.00%, and 2.43% on average on recognized datasets. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
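
The abstract operates on the HSV value (V) channel. As a baseline illustration only, far simpler than DELLIE's learned decomposition, the common trick of brightening the V channel with gamma correction while preserving hue and saturation ratios can be sketched as:

```python
import numpy as np

def enhance_v(rgb, gamma=0.6):
    """Brighten an RGB image in [0, 1] by gamma-correcting its HSV value channel.

    V = max(R, G, B); scaling all channels by V_new / V preserves hue and saturation.
    """
    v = rgb.max(axis=-1, keepdims=True)             # value channel
    v_new = np.clip(v, 1e-6, 1.0) ** gamma          # gamma < 1 lifts dark pixels
    return np.clip(rgb * (v_new / np.maximum(v, 1e-6)), 0.0, 1.0)
```

Such global curves brighten but also amplify noise and flatten detail, which is precisely the shortcoming detail-prediction approaches like the paper's aim to fix.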

21 pages, 115089 KiB  
Article
Region Adaptive Single Image Dehazing
by Changwon Kim
Entropy 2021, 23(11), 1438; https://0-doi-org.brum.beds.ac.uk/10.3390/e23111438 - 30 Oct 2021
Cited by 2 | Viewed by 1975
Abstract
Image haze removal is essential preprocessing for computer vision applications because outdoor images taken in adverse weather conditions such as fog or snow have poor visibility. This problem has been extensively studied in the literature, and the most popular technique is the dark channel prior (DCP). However, the dark channel prior tends to underestimate the transmission of bright areas or objects, which may cause color distortion during dehazing. This paper proposes a new single-image dehazing method that combines the dark channel prior with the bright channel prior (BCP) in order to overcome the limitations of the former. A patch-based robust atmospheric light estimation is introduced to divide the image into regions to which the DCP assumption and the BCP assumption are applied. Moreover, region-adaptive haze control parameters are introduced to suppress distortion in flat, bright regions and to increase visibility in textured regions. The flat and textured regions are expressed as probabilities by using local image entropy. The performance of the proposed method is evaluated on synthetic and real datasets. Experimental results show that the proposed method outperforms state-of-the-art image dehazing methods both visually and numerically. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
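
For reference, the DCP baseline that the paper improves on can be sketched in a few lines: the dark channel is the per-pixel minimum over color channels and a local patch, and transmission is estimated from it. This is the standard formulation (He et al.), not the paper's region-adaptive method; `omega` and `patch` are conventional illustrative values:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(rgb, patch=15):
    """Minimum over color channels, then over a patch x patch neighborhood."""
    return minimum_filter(rgb.min(axis=-1), size=patch)

def estimate_transmission(rgb, atmosphere, omega=0.95, patch=15):
    """DCP transmission estimate: t = 1 - omega * dark_channel(image / atmosphere)."""
    return 1.0 - omega * dark_channel(rgb / atmosphere, patch)
```

Bright objects make the dark channel large even without haze, driving this estimate of t too low; that is the failure mode the paper's BCP region handles.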

23 pages, 1115 KiB  
Article
A Penalized Matrix Normal Mixture Model for Clustering Matrix Data
by Jinwon Heo and Jangsun Baek
Entropy 2021, 23(10), 1249; https://0-doi-org.brum.beds.ac.uk/10.3390/e23101249 - 26 Sep 2021
Viewed by 1374
Abstract
Along with advances in technology, matrix data, such as medical/industrial images, have emerged in many practical fields. These data usually have high dimensions and are not easy to cluster due to the intrinsic correlation structure among rows and columns. Most approaches convert matrix data to multidimensional vectors and apply conventional clustering methods, and thus suffer from an extreme high-dimensionality problem as well as a lack of interpretability of the correlation structure among row/column variables. Recently, a regularized model was proposed for clustering matrix-valued data by imposing a sparsity structure on the mean signal of each cluster. We extend this approach by also regularizing the covariance to cope better with the curse of dimensionality for large images. A penalized matrix normal mixture model with lasso-type penalty terms in both the mean and covariance matrices is proposed, and an expectation-maximization algorithm is developed to estimate the parameters. The proposed method combines parsimonious modeling with a proper conditional correlation structure. The estimators are consistent, and their limiting distributions are derived. We applied the proposed method to simulated data as well as real datasets and measured its clustering performance with the clustering accuracy (ACC) and the adjusted Rand index (ARI). The experimental results show that the proposed method achieved higher ACC and ARI than conventional methods. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
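
The building block of such a mixture is the matrix normal density, which factors the covariance of an n x p matrix into a row covariance U and a column covariance V. A minimal sketch of its log-density (the standard distribution, not the paper's penalized estimator):

```python
import numpy as np

def matrix_normal_logpdf(X, M, U, V):
    """Log-density of MN(M, U, V): X, M are n x p; U is n x n row cov; V is p x p column cov."""
    n, p = X.shape
    D = X - M
    Uinv, Vinv = np.linalg.inv(U), np.linalg.inv(V)
    quad = np.trace(Vinv @ D.T @ Uinv @ D)          # Mahalanobis term with Kronecker structure
    _, logdet_U = np.linalg.slogdet(U)
    _, logdet_V = np.linalg.slogdet(V)
    return -0.5 * (n * p * np.log(2 * np.pi) + p * logdet_U + n * logdet_V + quad)
```

This is equivalent to a multivariate normal on vec(X) with covariance kron(V, U), but needs only n x n and p x p matrices instead of an np x np one, which is the parsimony the abstract refers to.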

19 pages, 21461 KiB  
Article
Significance Support Vector Regression for Image Denoising
by Bing Sun and Xiaofeng Liu
Entropy 2021, 23(9), 1233; https://0-doi-org.brum.beds.ac.uk/10.3390/e23091233 - 20 Sep 2021
Cited by 4 | Viewed by 1695
Abstract
As an extension of the support vector machine, support vector regression (SVR) plays a significant role in image denoising. However, because it ignores the spatial distribution of noisy pixels, the conventional SVR denoising model is prone to overfitting under severe noise interference, which degrades the denoising effect. To address this problem, this paper proposes a significance measurement framework that evaluates sample significance from sample spatial density information. Based on an analysis of the penalty factor in SVR, significance SVR (SSVR) is presented by assigning a significance factor to each sample. The refined penalty factor makes SSVR less susceptible to outliers during the solution process. This overcomes the drawback of SVR imposing the same penalty factor on all samples, which makes the objective function pay too much attention to outliers and yields poorer regression results. As an example of the proposed framework applied to image denoising, a cutoff-distance-based significance factor is instantiated to estimate sample importance in SSVR. Experiments conducted on three image datasets showed that SSVR performs excellently compared with best-in-class image denoising techniques in terms of commonly used denoising evaluation indices and visual quality. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
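
To illustrate the cutoff-distance idea, a sample's significance can be taken as a normalized count of neighbors within a cutoff radius, so that isolated (outlier-like) samples receive small factors. This sketch is an illustrative guess at such an instantiation, not the paper's exact definition:

```python
import numpy as np

def significance_factors(samples, cutoff):
    """Normalized cutoff-distance density: denser samples get factors near 1, outliers near 0."""
    # Pairwise Euclidean distances between all samples (n x n).
    d = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    density = (d < cutoff).sum(axis=1) - 1       # neighbors within the cutoff, excluding self
    return (density + 1) / (density.max() + 1)   # scale into (0, 1]
```

Multiplying each sample's SVR penalty by such a factor down-weights outliers in the objective, which is the mechanism the abstract describes.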

21 pages, 16926 KiB  
Article
Adaptive Block-Based Compressed Video Sensing Based on Saliency Detection and Side Information
by Wei Wang, Jianming Wang and Jianhua Chen
Entropy 2021, 23(9), 1184; https://0-doi-org.brum.beds.ac.uk/10.3390/e23091184 - 08 Sep 2021
Cited by 7 | Viewed by 1645
Abstract
The setting of the measurement number for each block is very important for a block-based compressed sensing system. In practical applications, however, only the initial measurement results of the original signal are available on the sampling side, not the original signal itself; we therefore cannot directly allocate an appropriate measurement number to each block without knowing the sparsity of the original signal. To solve this problem, we propose an adaptive block-based compressed video sensing scheme based on saliency detection and side information. According to the Johnson–Lindenstrauss lemma, the initial measurement results can be used to perform saliency detection and obtain a saliency value for each block. Meanwhile, a side information frame, an estimate of the current frame, is generated on the reconstruction side by the proposed probability fusion model, and the significant coefficient proportion of each block is estimated from the side information frame. Both the saliency value and the significant coefficient proportion reflect the sparsity of the block. Finally, these two estimates of block sparsity are fused, so that intra-frame and inter-frame correlation are used simultaneously for block sparsity estimation, and the measurement number of each block is allocated according to the fused sparsity. In addition, we propose a weighting-based global recovery model that reduces the block effect of reconstructed frames. The experimental results show that, compared with existing schemes, the proposed scheme achieves a significant improvement in peak signal-to-noise ratio (PSNR) at the same sampling rate. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
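
The final allocation step, assigning each block a measurement count in proportion to its fused sparsity estimate, can be sketched simply. The proportional rule below is an illustrative assumption (the paper's exact allocation formula is not given in the abstract):

```python
import numpy as np

def allocate_measurements(sparsity, total):
    """Split `total` measurements across blocks in proportion to sparsity estimates."""
    w = np.asarray(sparsity, float)
    w = w / w.sum()                              # normalized per-block weights
    m = np.floor(w * total).astype(int)          # integer base allocation
    rem = total - m.sum()
    # Hand the leftover measurements to the blocks with the largest fractional parts.
    order = np.argsort(-(w * total - m))
    m[order[:rem]] += 1
    return m
```

Blocks estimated to be less sparse (more significant coefficients) receive more measurements, concentrating the sampling budget where reconstruction is hardest.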

20 pages, 6411 KiB  
Article
Portrait Segmentation Using Ensemble of Heterogeneous Deep-Learning Models
by Yong-Woon Kim, Yung-Cheol Byun and Addapalli V. N. Krishna
Entropy 2021, 23(2), 197; https://0-doi-org.brum.beds.ac.uk/10.3390/e23020197 - 05 Feb 2021
Cited by 14 | Viewed by 2773
Abstract
Image segmentation plays a central role in a broad range of applications, such as medical image analysis, autonomous vehicles, video surveillance and augmented reality. Portrait segmentation, a subset of semantic image segmentation, is widely used as a preprocessing step in applications such as security systems, entertainment applications, and video conferencing. A substantial number of deep learning-based portrait segmentation approaches have been developed, since the performance and accuracy of semantic image segmentation have improved significantly with the introduction of deep learning. However, these approaches are limited to a single portrait segmentation model. In this paper, we propose a novel ensemble approach that combines multiple heterogeneous deep-learning-based portrait segmentation models to improve segmentation performance. Two-model and three-model ensembles, using a simple soft voting method and a weighted soft voting method, were evaluated. The Intersection over Union (IoU) metric, IoU standard deviation and false prediction rate were used to evaluate performance, and cost efficiency was calculated to analyze segmentation efficiency. The experimental results show that the proposed ensemble approach achieves higher accuracy and lower errors than single deep-learning-based portrait segmentation models. They also show that although an ensemble of deep-learning models typically increases memory and computing requirements, a well-chosen ensemble can be more efficient than a single model, achieving higher accuracy with less memory and computing power. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
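
The soft voting combiners the abstract describes are simple to state: average the per-model foreground probability maps (optionally with weights) and threshold. A minimal sketch under that reading (model outputs and weights are illustrative assumptions):

```python
import numpy as np

def weighted_soft_vote(prob_maps, weights=None, thresh=0.5):
    """Fuse per-model foreground probability maps into one binary mask.

    prob_maps: list of H x W arrays in [0, 1]; equal weights give simple soft voting.
    """
    probs = np.stack(prob_maps)                  # (num_models, H, W)
    if weights is None:
        weights = np.ones(len(prob_maps))
    w = np.asarray(weights, float)
    w = w / w.sum()                              # normalize model weights
    fused = np.tensordot(w, probs, axes=1)       # weighted average probability map
    return fused >= thresh
```

With heterogeneous models, weighting lets a stronger model dominate where the weaker ones disagree, which is the lever behind the weighted variant in the paper.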
