Article

A Novel Adaptive Group Sparse Representation Model Based on Infrared Image Denoising for Remote Sensing Application

1
Innovation Academy for Microsatellites of Chinese Academy of Sciences, Shanghai 200120, China
2
University of Chinese Academy of Sciences, Beijing 100049, China
*
Authors to whom correspondence should be addressed.
Submission received: 22 March 2023 / Revised: 22 April 2023 / Accepted: 22 April 2023 / Published: 6 May 2023

Abstract

Infrared (IR) image preprocessing aims at denoising and enhancement to support small target detection. According to sparse representation theory, the original IR image is low rank and its coefficient matrix is sparse, so a low-rank and sparse model can distinguish the original image from the noise. IR images lack texture and detail, and the small target in them is hard to recognize. Traditional denoising methods based on nuclear norm minimization (NNM) shrink all eigenvalues equally, which blurs fine details, so they cannot achieve good denoising performance. Deep learning methods require a large number of training images, which are difficult to obtain for IR image denoising, and they struggle under high noise. Without a proper denoising method, tracking and detection would not be possible. This article fuses weighted nuclear norm minimization (WNNM) with adaptive similar-patch searching based on group sparse representation for infrared images. We adaptively select similar structural blocks according to computational criteria and use a K-nearest neighbor (KNN) cluster to constitute more coherent groups, which helps recover complex backgrounds under high Gaussian noise. We then shrink all eigenvalues with different weights in the WNNM model to solve the optimization problem. Our method recovers more detailed information in the images. The algorithm obtains good results not only on common image denoising but also on infrared image denoising, so the target in IR images attains a high signal-to-clutter ratio in IR detection systems for remote sensing. On common data sets and real infrared images, it achieves good noise suppression with a high peak signal-to-noise ratio (PSNR) and structural similarity index measurement (SSIM), even under higher noise and much more complex backgrounds.

1. Introduction

IR images are formed by exploiting the temperature difference between the target and the background. Uncooled IR focal plane array imaging technology has advantages such as low weight and low power consumption and is widely used in IR detectors [1]. However, this technique produces IR images with low contrast, unclear edges, and complex noise under real imaging environments. IR images have a small signal-to-noise ratio (SNR) and lack clear texture and detail [2]. To achieve a better result for IR target detection, noise reduction must be completed first. The noise in IR images mainly includes uniform noise and Gaussian noise, caused by atmospheric radiation, the imaging environment, and the detector.
Traditional denoising algorithms include inter-frame and single-frame noise reduction. Single-frame denoising covers spatial and transform-domain filtering, while inter-frame denoising mainly adopts time-domain filtering [3]. Spatial filtering covers Gaussian filtering, average filtering, and median filtering. These filters cannot exploit the differences between pixel characteristics, so some details become ambiguous, and the denoising performance degrades under complex backgrounds [4]. Frequency-domain methods include FFT filtering, Butterworth filtering, bilateral filtering, and wavelet filtering. With these algorithms, the noise spectrum and the useful signal spectrum can be partially separated; however, because the noise spectrum spreads over all frequencies, the signal remains mixed with noise and cannot be completely segmented [5]. Nonlocal mean filtering was proposed, using nonlocal similarity to compose Gaussian weights [6]; it improves the resolution of details and edges. Time-domain filtering contains frame average filtering and weighted time-domain filtering. Several authors have used frame average filtering to protect the edges of the image [7], but this results in image trailing and blurring when the scene moves. Other authors have considered the motion properties and completed the best match according to the moving trajectory [8], which reduces the trailing phenomenon; however, the method requires frame matching with high computational complexity. Overall, traditional denoising algorithms are limited on concrete real IR images.
More recently, researchers have turned their attention to sparse representation [9]. The background is represented by an over-complete dictionary with sparse coefficients, so the eigenvalues of the useful signal can be extracted to restore the edges and texture details of IR images [10]. Sparse 3D transform-domain collaborative filtering (BM3D) was proposed for image denoising [10]; it is well suited to images with white Gaussian noise (WGN) but has a high time cost. Other researchers proposed replacing $\ell_0$ norm minimization with $\ell_1$ norm minimization [11], which decreases the difficulty of the problem at the cost of a limited result [12]. It was shown that sparse representation can attain the eigenvectors needed to reconstruct the original image; this method can recognize details and edges, but with a high level of complexity. The authors of another study incorporated non-local correlation into sparse representation and designed a proper sub-dictionary and sub-sparse vector to improve recognition ability and achieve a high peak signal-to-noise ratio (PSNR) [13]. A great deal of work must be conducted before denoising [14]. K-means singular value decomposition (KSVD) was used to solve the principal component analysis (PCA) problem; the method is not a convex optimization and cannot obtain a globally optimal solution [15]. An over-complete learned dictionary was also adopted; through a redundant dictionary, a better sparse effect can be achieved, and the method is more robust in complex environments [16]. The nuclear norm model has been used to represent sparsity: it is a slack convex approximation with a good convergence effect in denoising, but it ignores and obscures some details. Article [17] considers the meaning of each eigenvalue and uses the WNNM model to strengthen the sparsity and achieve better convergence; the important details and texture are kept, at the cost of higher complexity, and the PSNR is 1∼3 dB better than the nuclear norm model on some real and test images [18]. Nonlocal similarity and overall sparsity were then taken into consideration, and the sparsity definitions of IR images were optimized for the denoising effect, but at higher complexity. Another new denoising method, EMD–ITF, was proposed based on empirical mode decomposition (EMD) and an improved thresholding function (ITF), in which an improved threshold suppresses noise and improves the signal-to-noise ratio (SNR) [19]; the SNR of the denoised signal exceeds that of the original signal by 5∼9 dB. Venish Suthar adopted a reliable method to identify compound faults in bearings when the availability of experimental data was limited [20]; it detects compound faults with 100% ten-fold cross-validation accuracy. These methods are used in some forms of digital signal processing and suit specific signals with noise.
Deep learning is widely used in visible light, hyperspectral, and high-resolution image denoising, but it is rarely used on infrared data sets, since a large number of annotated images is needed. DnCNN [21] integrates local and global features with residual dense blocks in a deeper convolutional neural network (CNN) for image recovery, where more robust characteristics are required. FFDNet [22] uses a non-uniform noise level map as the input and runs on down-sampled sub-images; it achieves a good trade-off between computational cost and denoising performance on synthetic and real noisy images. An attention-guided denoising convolutional neural network (ADNet) contains a sparse block, a feature enhancement block, an attention block, and a reconstruction block [23]; the influence of shallow layers on deep layers is thereby enhanced, but on highly noisy images, it suffers from rapid performance degradation. The article [24] proposes a multi-stage image denoising CNN with the wavelet transform to further remove redundant features; the process refines the obtained features and reconstructs the clean image with improved residual dense architectures. The authors of another study proposed a trust-based security system [25], which was utilized to balance security, transmission performance, and energy efficiency. One article proposed an energy-cost-per-useful-bit (ECPUB) method [26]; ECPUB can evaluate energy efficiency and facilitate the balance of network load. A trust-management-based low-energy adaptive clustering hierarchy protocol was shown to prolong the network lifetime and balance energy consumption [27]. As is known, the number of publicly available annotated infrared data sets is relatively small, and different noise levels mostly require different CNN models. Deep learning methods, therefore, usually have limited denoising performance under high-noise environments.
This article proposes an improved WNNM based on the group sparsity model for single-frame IR images with strong WGN. Thanks to the adaptive clustering of groups, each group has a more similar structure, which strengthens the sparsity of all groups, and the WNNM model achieves clearer details after several iterations. The simulations illustrate that the algorithm effectively outperforms several popular denoising methods in terms of PSNR and SSIM on typical IR images and real IR sequences. As a result, we achieve a higher local signal-to-clutter ratio of the small target in IR images, which is useful for detecting small targets in IR detection systems for remote sensing.

2. Materials and Methods

2.1. Denoising Process

Traditional denoising methods such as average filtering and Gaussian filtering use specific templates to suppress noise [28]. Given the diversity of original images and noise, a single template-based method reduces the useful signal and blurs image texture and details. Deep learning denoising requires large annotated infrared data sets, which are difficult to obtain. Low-rank and sparse representation focuses on restoring the original image based on the sparsity difference, which makes it relatively easy to distinguish noise from the original image. To achieve a higher signal-to-clutter ratio (SCR) and better clarity of the IR image, we utilize the WNNM based on the group sparsity model for IR image denoising. It leads to better denoising performance on IR images with strong noise and reaches the optimization objective. Under high-noise environments, adaptive similar-block searching is significant and yields a good restoration effect. Experimental results show that the algorithm provides good reconstruction quality and high precision, which finally allows the small target in IR images to be recognized quickly. The flowchart of the proposed method is displayed in Figure 1.
The algorithm's flow chart includes four steps. First, we transform the image denoising problem into a mathematical optimization problem based on robust principal component analysis (RPCA). Group sparse representation theory considers the input image to be composed of many groups with nonlocal self-similarity; each group can be decomposed into a matrix with low rank and a matrix with remarkable sparsity. Second, we apply the proposed WNNM algorithm to image denoising by exploiting the image's nonlocal self-similarity; the sparse coefficients are used to recover the original image, and the optimization is updated over multiple iterations. Third, the groups in each iteration are obtained by adaptive patch selection depending on the SSIM: when the current iteration result differs too much in similarity from the previous one, we select the pre-filtered image as the source of groups for the iteration. Finally, we evaluate the denoising performance in terms of PSNR and SSIM against different methods on public data sets and real IR images. All of the work in this article follows this process.

2.2. Sparse Representation Theory

Every image $Y \in \mathbb{R}^n$ can be represented on an atomic basis, and the expansion coefficients form the matrix $X \in \mathbb{R}^K$. If $K < n$, some vectors cannot be represented by the atomic basis, so the vector basis $\{\alpha_i\}$ is not complete. If $K \gg n$, all vectors of the space can be expressed by the basis $\{\alpha_i\}$; the basis $\{\alpha_i\}$ is then an over-complete basis, and the expansion coefficients admit a variety of combinations.
Based on the above theory, researchers have proposed using the over-complete basis to form a learning dictionary [29]. The over-complete basis is highly redundant, so an image can be represented by a variety of coefficient sets; we select the sparsest set of coefficients as the solution. Assume the learning dictionary D is
$D = [d_1, d_2, d_3, \ldots, d_K] \in \mathbb{R}^{n \times K}$
The input image can be described by
Y = D X
where X is the matrix formed by the sparse vectors, described as
$X = [x_1, x_2, x_3, \ldots, x_K]^{T}$
Here, K is much larger than n. The sparser X is, the more concentrated the image energy is. We usually use the $\ell_0$ norm to represent sparsity; it counts the non-zero entries of a vector or matrix. We utilize the sparse representation of non-local correlation to split the whole image into many IR patches. Similar patches form a group, and these groups form image matrices with low rank; as a result, the basis functions in every group are over-redundant. Through the optimization of the sparse representation, we obtain the sparse solution that recovers all groups, and the image noise is reduced in all IR groups. We need to solve the problem
$\min \|X\|_0 \quad \mathrm{s.t.} \quad Y = DX$
This can be transformed into an unconstrained Lagrangian formulation, expressed as
$\arg\min \|Y - DX\|_F^2 + \gamma \|X\|_0$
The regularization parameter is $\gamma$. The main approaches to solving this optimization are convex relaxation and greedy pursuit based on patch matching. When noise exists, the $\ell_0$ norm cannot effectively represent the vector sparsity, and the problem is NP-hard, so an optimal solution cannot be obtained directly. We can instead use the relaxed convex $\ell_1$ norm to complete the convex optimization approximation [30], described as
$\arg\min \left( \|Y - DX\|_F^2 + \gamma \|X\|_1 \right)$
The image uses the atoms as the vector basis, and the sparse vectors form the sparse solution. To separate the noise from the IR image, this optimization must be computed.
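To make the $\ell_1$ relaxation concrete, the following minimal Python/NumPy sketch solves the relaxed problem with the iterative shrinkage-thresholding algorithm (ISTA); the solver choice, the random dictionary, the step size, and all names are illustrative assumptions rather than the exact routine used in this article.

```python
import numpy as np

def soft_threshold(v, lam):
    """Element-wise soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista_sparse_code(Y, D, gamma=0.1, n_iter=200):
    """Solve argmin_X ||Y - D X||_F^2 + gamma * ||X||_1 with ISTA.

    Y : (n, p) observed signals (columns are vectorized patches)
    D : (n, K) over-complete dictionary (columns are atoms)
    """
    X = np.zeros((D.shape[1], Y.shape[1]))
    L = 2.0 * np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ X - Y)           # gradient of ||Y - D X||_F^2
        X = soft_threshold(X - grad / L, gamma / L)
    return X

# Toy usage: a random over-complete dictionary and one noisy observation.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
Y = D[:, :5] @ rng.standard_normal((5, 1)) + 0.01 * rng.standard_normal((64, 1))
X_hat = ista_sparse_code(Y, D)
```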

2.3. Image Denoising Based on Group Sparse Representation

2.3.1. WNNM Model

The traditional sparse representation model requires dictionary learning, which is computationally complex and ignores the relations among sparse coding patches. As a result, ref. [31] made use of the robust principal component analysis (RPCA) model based on group sparsity representation. The nuclear norm is used in the RPCA model, written as
$\tilde{X} = \arg\min \|Y - DX\|_F^2 + \gamma \|X\|_*$
Ref. [24] notes that the nuclear norm is a convex surrogate of the matrix rank. Larger eigenvalues stand for detailed information about the image. In the NNM model, all eigenvalues are processed with the same soft-threshold shrinkage operator, which leads to over-smoothing in the restored image. The weighted nuclear norm model (WNNM) was therefore presented; it improves the denoising result through uneven shrinkage. The original problem can then be rewritten as
$\tilde{X} = \arg\min \|Y - DX\|_F^2 + \gamma \|X\|_{w,*}$
The primary sparse representation based on the WNNM model considers only local sparsity and does not build relationships across the whole image, and its computation is complex. Thus, group sparse representation (GSR) is applied to the image's sparse representation. Through the fusion of local sparsity and the similarity among patches, ref. [32] forms the learning dictionary and improves the performance. In the GSR model, many overlapping image patches are obtained from a single frame Y according to a search step. The patches are described by $Y_i$, $i = 1, 2, \ldots, n$, and the size of each patch is $m \times m$. The vectorized patches constitute the new group $S_i$, and the group matrix $Y_i$ ($Y_i \in \mathbb{R}^{m \times K}$) is given by
$Y_i = \{y_{i,1}, y_{i,2}, \ldots, y_{i,K}\}$
$Y_i$ includes all similar image patches. In a single frame, the set of all similar group matrices can be defined by
$Y = \{Y_1, Y_2, Y_3, \ldots, Y_N\}$
The original objective can then be transformed into
$\tilde{X}_i = \arg\min \sum_{i=1}^{n} \left( \|Y_i - D_i X_i\|_F^2 + \|X_i\|_* \right)$
$X_i$ stands for the coefficient matrix of each group $Y_i$ [33]; $\|\cdot\|_F^2$ denotes the squared Frobenius norm, and $\|\cdot\|_*$ is the nuclear norm.
Based on the above GSR model theory, we consider putting it into the IR image denoising. The IR image with additive noise can be expressed by
Y = X + N
X is the original image and N is the additive noise. The problem is then the restoration of the original image from the noisy observation. The GSR model in image denoising completes the optimization over all similar patches. We can use the WNNM based on GSR to solve the problem, described as
$\tilde{X}_i = \arg\min \sum_{i=1}^{n} \left( \|Y_i - X_i\|_F^2 + \|X_i\|_{w_i,*} \right)$
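To illustrate how a group matrix $Y_i$ is assembled before this optimization, the following Python sketch extracts overlapping patches and stacks the K most similar ones (by Euclidean distance) as columns; the patch size, step, and function names are assumptions made for illustration.

```python
import numpy as np

def extract_patches(img, patch=7, step=4):
    """Collect overlapping patches as columns of a matrix, plus their top-left positions."""
    H, W = img.shape
    cols, pos = [], []
    for r in range(0, H - patch + 1, step):
        for c in range(0, W - patch + 1, step):
            cols.append(img[r:r + patch, c:c + patch].reshape(-1))
            pos.append((r, c))
    return np.stack(cols, axis=1), pos        # shape: (patch*patch, n_patches)

def build_group(patches, ref_idx, K=60):
    """Stack the K patches closest to a reference patch (Euclidean distance).

    The resulting group matrix Y_i is approximately low rank because its
    columns share a similar structure.
    """
    ref = patches[:, ref_idx:ref_idx + 1]
    dist = np.sum((patches - ref) ** 2, axis=0)
    idx = np.argsort(dist)[:K]
    return patches[:, idx], idx
```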

2.3.2. Adaptive Nonlocal Similar Block Searching

The nonlocal self-similarity (NSS) prior refers to the fact that, for each local patch in a natural image, many similar patches can be found across the whole image. These patches compose a low-rank matrix that admits a sparse solution. The noise in an infrared image is relatively complex and has a certain sparsity of its own. Among common denoising algorithms, the main method to find data blocks with a similar structure is the K-nearest neighbor (KNN) algorithm [34], in which the similar blocks of a group are obtained based on the Euclidean distance. Traditional methods look for similar groups during the iterations on the noisy image, using KNN or K-means clustering to form the groups; the resulting matrices are not sparse enough. Because the observed samples contain high-intensity noise, blocks judged similar on the raw observation are not necessarily similar in reality. The adaptive patch searching used in our WNNM model is based on the KNN cluster algorithm: to achieve the optimal solution, we construct sparse groups in each iteration. Compared with the universal cluster methods, adaptive patch searching based on KNN has some advantages; the differences are displayed in Table 1.
The adaptive KNN in WNNM obtains more similar patches for the low-rank matrix, which yields higher PSNR and SSIM for denoised images. Across different noisy images, the PSNR with adaptive KNN in WNNM is about 0.1 dB∼0.3 dB higher than the alternatives, and the SSIM is about 0.01∼0.06 higher.
The original observation data can be pre-filtered following the definition in [10]:
$f(y) = \mathrm{BM3D}(Y)$
$f(y)$ is the image pre-filtered with the BM3D method, which has been a mature denoising technique for a long time; it suppresses noise and restores image details well. The criterion for similar patch selection then follows the rule in [35]:
$\tau = \mathrm{SSIM}(f(y), \hat{X}^{t}) - \mathrm{SSIM}(f(y), \hat{X}^{t-1})$
$\mathrm{SSIM}$ denotes the structural similarity between two images, and f is a small threshold determined through concrete tests. When $\tau < f$, the pre-filtered image is used to obtain the similar blocks of all groups; otherwise, we select the last iteration result as the input for similar-patch searching. The resulting groups are then optimized with the WNNM model. In a high-noise environment, this strategy achieves a better denoising effect. The fusion of the WNNM model with the adaptive patch searching process is described in the following items (a minimal code sketch follows the list):
  • Use the pre-filter to achieve the image with less noise;
  • Perform iterative calculations with the WNNM model;
  • According to the adaptive selection rule of similar patches, obtain all groups of an image from the iteration result or from the pre-filtered image;
  • Finish the optimization based on the KNN cluster.
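A minimal sketch of this adaptive selection rule, assuming scikit-image's structural_similarity as a stand-in SSIM and a pre-filtered image f(y) computed elsewhere (for example, by a BM3D routine); the variable names and default threshold are illustrative.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim  # stand-in SSIM

def select_group_source(prefiltered, x_curr, x_prev, f=2e-4):
    """Adaptive choice of the image used for similar-patch searching.

    prefiltered : f(y), the image pre-filtered elsewhere (e.g., by a BM3D routine)
    x_curr      : restored image of the latest iteration
    x_prev      : restored image of the previous iteration
    f           : small threshold fixed by experiments
    """
    data_range = float(prefiltered.max() - prefiltered.min())
    tau = (ssim(prefiltered, x_curr, data_range=data_range)
           - ssim(prefiltered, x_prev, data_range=data_range))
    # If the iterate has improved too little relative to the pre-filtered reference,
    # search similar blocks on the pre-filtered image; otherwise use the latest iterate.
    return prefiltered if tau < f else x_curr
```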

2.3.3. Adaptive Weight Parameters Searching

Firstly, we look for similar patch vectors to construct the matrix $Y_i$:
$Y_i = [y_{i,1}, y_{i,2}, y_{i,3}, \ldots, y_{i,K}]$
Secondly, we obtain $Y_i = U_i \Delta_i V_i^{T}$ through the SVD of the original image, where $\Delta_i$ is expressed by
$\Delta_i = \mathrm{diag}(\delta_{i,1}, \delta_{i,2}, \delta_{i,3}, \ldots, \delta_{i,n_0})$
$\delta_{i,j}$ is the jth singular value of $Y_i$. Thirdly, we perform the SVD of the restored image $X_i$, written as $X_i = U_i \Sigma_i V_i^{T}$, where $\Sigma_i$ is
$\Sigma_i = \mathrm{diag}(\sigma_{i,1}, \sigma_{i,2}, \sigma_{i,3}, \ldots, \sigma_{i,n_0})$
$\sigma_{i,j}$ is the jth singular value of $X_i$. Finally, the minimization of (13) is treated as the solution of (19), which is given by the soft-threshold operator in (20).
$\min_{\sigma_{i,j} > 0} \; (\delta_{i,j} - \sigma_{i,j})^2 + w_{i,j}\,\sigma_{i,j}$
$\sigma_{i,j} = \max(\delta_{i,j} - w_{i,j},\, 0)$
A larger eigenvalue represents more important information and contains more details. The weighting strategy is therefore to shrink the large eigenvalues less and the small ones more, which keeps more details of the images. To avoid the non-convergence of the SVD-based iteration, we take a cue from [36] and choose a special $w_{i,j}$, computed by
$w_{i,j} = \dfrac{2.82\, c\, \sigma_n^2}{\gamma_i + \epsilon}$
Here, $\sigma_n$ is the standard deviation of the added white Gaussian noise, and $\gamma_i$ is the standard deviation estimate of the corresponding eigenvalue of the restored matrix. The weight of each iteration is updated with this adaptive value.
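The weighted shrinkage of Equations (20) and (21) can be sketched in Python as follows; the estimate of $\gamma_i$ from the noisy singular values is a common WNNM heuristic assumed here, as are the constant c and the function name.

```python
import numpy as np

def wnnm_shrink(Y_i, sigma_n, c=0.55, eps=1e-8):
    """Weighted singular-value shrinkage of one group matrix (Equations (20)-(21)).

    Y_i     : group matrix, one vectorized noisy patch per column
    sigma_n : std of the additive white Gaussian noise
    c       : weighting constant, set experimentally
    """
    U, delta, Vt = np.linalg.svd(Y_i, full_matrices=False)
    # gamma_i: estimated singular values of the clean matrix (an assumed, commonly
    # used WNNM heuristic; K is the number of patches in the group).
    K = Y_i.shape[1]
    gamma = np.sqrt(np.maximum(delta ** 2 - K * sigma_n ** 2, 0.0))
    # Larger singular values receive smaller weights and are therefore shrunk less.
    w = 2.82 * c * sigma_n ** 2 / (gamma + eps)
    sigma = np.maximum(delta - w, 0.0)        # soft threshold of Equation (20)
    return U @ np.diag(sigma) @ Vt
```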

2.3.4. Iteration Parameter Setting

We determined the proper parameter settings through a large number of experiments and, through analysis of the image characteristics, confirmed the stopping parameters over many simulations. Inspired by [37], the stopping parameter $\tau$ is defined by
$\dfrac{\|\tilde{X}_i^{t} - \tilde{X}_i^{t-1}\|_F^2}{\|\tilde{X}_i^{t-1}\|_F^2} < \tau$
t is the iteration index. The improved WNNM algorithm based on the GSR model is described as follows (a consolidated code sketch follows the list):
  • Initialize $\tilde{x}^0 = y^0$.
  • For $t = 1 : \mathrm{iter}$:
  • Iterative regularization: $y^{t} = \tilde{X}^{t-1} + \gamma\,(y - \tilde{X}^{t-1})$.
  • For $i = 1 : N$:
    Use the adaptive similar image block strategy to obtain the sparse group $y_i$.
    $w_{i,j}^{t}$ is the jth weight of the ith group, computed by (21).
    $\Delta_i$ is obtained by the SVD of $y_i$.
    $\Sigma_i$ is calculated by the soft-threshold operator.
    $\tilde{X}_i$ is computed from $\Sigma_i$ together with $U_i$ and $V_i$.
  • Output the denoised image $\tilde{x}$ after aggregating all $\tilde{X}_i$.
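Putting the steps together, a consolidated sketch of the iteration, reusing the helper functions extract_patches, build_group, select_group_source, and wnnm_shrink sketched earlier; the parameter defaults and the plain averaging used to aggregate overlapping patches are assumptions made for illustration.

```python
import numpy as np

def gsr_wnnm_denoise(y, sigma_n, prefiltered=None, n_iter=8, gamma=0.1,
                     patch=7, step=4, K=60, tau=1e-3, f=2e-4):
    """Consolidated GSR-WNNM iteration (sketch). Assumes extract_patches,
    build_group, select_group_source and wnnm_shrink from the earlier
    sketches are in scope."""
    y = y.astype(np.float64)
    x = y.copy()
    x_prev = y.copy()
    for t in range(n_iter):
        # Iterative regularization: feed part of the residual back (step 3 above).
        y_t = x + gamma * (y - x)
        # Adaptive choice of the image used for similar-patch searching (Section 2.3.2).
        if prefiltered is not None:
            search_img = select_group_source(prefiltered, x, x_prev, f)
        else:
            search_img = y_t
        ref_patches, pos = extract_patches(search_img, patch, step)
        noisy_patches, _ = extract_patches(y_t, patch, step)
        acc = np.zeros_like(y)
        wgt = np.zeros_like(y)
        for i in range(ref_patches.shape[1]):
            # Group similar patches on the search image; denoise the noisy data.
            _, idx = build_group(ref_patches, i, K)
            X_i = wnnm_shrink(noisy_patches[:, idx], sigma_n)
            for k, j in enumerate(idx):
                r, c = pos[j]
                acc[r:r + patch, c:c + patch] += X_i[:, k].reshape(patch, patch)
                wgt[r:r + patch, c:c + patch] += 1.0
        x_prev, x = x, acc / np.maximum(wgt, 1e-8)
        # Relative-change stopping rule of Section 2.3.4.
        if (np.linalg.norm(x - x_prev) ** 2
                / (np.linalg.norm(x_prev) ** 2 + 1e-12)) < tau:
            break
    return x
```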

2.4. Evaluation

Specific data sets were used to obtain the IR images in this article. Based on these image sequences, we conducted many simulations with our method and compared the denoising effect with traditional algorithms. There are many evaluation standards for denoising IR image sequences. As usual, the mean square error (MSE) is shown in (23); it represents the difference between the original image and the denoised image. The PSNR is defined by the ratio between the largest gray level and the MSE. A larger PSNR means greater similarity to the original image [38].
$\mathrm{MSE} = \dfrac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} \left( I(i,j) - I_0(i,j) \right)^2$
$\mathrm{PSNR} = 10 \log_{10} \dfrac{255^2}{\mathrm{MSE}}$
Another evaluation metric is the structure similarity index measurement (SSIM). It is defined by the product of the brightness factor, contrast factor, and structure factor. A bigger SSIM means a higher similarity between the images [38].
$\mathrm{SSIM}(X,Y) = l(X,Y)^{\alpha}\, c(X,Y)^{\beta}\, s(X,Y)^{\gamma}$
$l(X,Y) = \dfrac{2 u_x u_y + C_1}{u_x^2 + u_y^2 + C_1}$
$c(X,Y) = \dfrac{2 \delta_x \delta_y + C_2}{\delta_x^2 + \delta_y^2 + C_2}$
$s(X,Y) = \dfrac{\delta_{xy} + C_3}{\delta_x \delta_y + C_3}$
$l(X,Y)$ is the brightness factor in (26), $c(X,Y)$ is the contrast factor in (27), and $s(X,Y)$ is the structure factor in (28). $u_x$ and $u_y$ denote the means of the original image and the denoised image, respectively; $\delta_x$ and $\delta_y$ are the corresponding standard deviations, and $\delta_{xy}$ is the covariance. To simplify the analysis, we set $\alpha$, $\beta$, and $\gamma$ to 1; the SSIM still stands for structural similarity. Certainly, a larger SSIM demonstrates a better denoising result.
Finally, we use the recovered image as an input for target detection. Inspired by [39], we define the local SCRG of the small target as
$\mathrm{SCRG} = \dfrac{(S/C)_d}{(S/C)_n}$
S is the mean difference between the small target and its local neighborhood, and C is the standard deviation of the local neighborhood. $(\cdot)_n$ and $(\cdot)_d$ denote the quantities computed on the noisy input image and on the denoised output image, respectively. A higher local SCRG helps us detect the small IR target for remote sensing.
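For reference, a small Python sketch of the MSE/PSNR computation and of the local SCR gain follows; the way the local neighborhood and the target box are specified is an assumption made for illustration, not the exact protocol of the experiments.

```python
import numpy as np

def psnr(ref, img):
    """PSNR of an 8-bit image pair, Equations (23) and (24)."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def local_scr(img, target_box, border=10):
    """Local signal-to-clutter ratio S/C around a small target.

    target_box : (r0, r1, c0, c1) bounding the target (hypothetical annotation format)
    S is taken as the difference between the target mean and the surrounding
    background mean; C is the std of the local neighborhood.
    """
    r0, r1, c0, c1 = target_box
    local = img[max(r0 - border, 0):r1 + border, max(c0 - border, 0):c1 + border]
    target = img[r0:r1, c0:c1]
    background_mean = (local.sum() - target.sum()) / (local.size - target.size)
    S = abs(target.mean() - background_mean)
    C = local.std()
    return S / (C + 1e-12)

def scrg(noisy, denoised, target_box):
    """SCR gain: local SCR after denoising divided by local SCR before denoising."""
    return local_scr(denoised, target_box) / local_scr(noisy, target_box)
```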

3. Results

The simulation uses MATLAB R2016a software and runs on a personal computer with an Intel Core i7 CPU and 16 GB RAM. We test our method on the Set12 data set and on IR sequences captured with an IR detector.
Typical gray images are widely used for simulation analysis. In Figure 1 through Figure 12, the pixel size is 256 × 256. When adding Gaussian noise with different standard deviations, we adopt corresponding pairs of parameters; since the denoising results differ across experiments, we adjust the parameters to attain better results. Setting the std of the WGN to 20, 50, 75, and 100, respectively, we choose the corresponding patch sizes 6 × 6, 7 × 7, 8 × 8, and 9 × 9, with 60, 70, 80, and 100 similar patches. The reference indices $\tau$ and c are set to (0.0013, 0.65), (0.0012, 0.55), (0.001, 0.75), and (0.0017, 0.55). The search window is 30 and the error is $\epsilon = \exp(-15)$. The adaptive similar-block parameter f is $2\exp(-4)$. Finally, we complete all simulations based on these settings.
The typical images (1), (2), and (3) are simulated and compared with the conventional spatial filtering and sparse representation denoising methods. When the noise std is 20, 50, 75, and 100, the PSNR and SSIM results are compared with Gaussian filtering, mean filtering, BM3D [10], EPLL [40], NSCR [41], KSVD [14], FFDNet [22], ADNet [23], GSR-WNNM [42], and our proposed method.
Table 2 shows the denoising PSNR and SSIM of image ( 1 ) with different noise std. Table 3 shows the denoising PSNR and SSIM of image ( 2 ) with different noise std. Table 4 shows the denoising PSNR and SSIM of image ( 3 ) with different noise std.
Under the same noise std, the proposed algorithm outperforms traditional algorithms in terms of PSNR and SSIM. As the noise std increases, the performance of traditional Gaussian filtering, mean filtering, BM3D, and other algorithms decreases significantly, whereas the algorithm proposed in this paper still attains better PSNR and SSIM. Compared with classic deep learning algorithms, our method performs on par with FFDNet and ADNet in low-noise environments; as the noise increases, the trained deep learning models degrade, while our method still performs well. Under different noise std values, the denoising effects of the Gaussian filter, mean filter, BM3D, EPLL, NSCR, KSVD, FFDNet, ADNet, GSR-WNNM, and the algorithm in this paper on images (1), (2), and (3) are shown in the Supplementary Materials.
With the increase in the Gaussian noise std, the proposed algorithm could attain a better denoising effect and clearer texture details compared with other methods. The simulation results show that the improved algorithm proposed in this paper can adapt to lower SNR infrared images. It improves the PSNR of images, restores image details efficiently, and ensures a higher SSIM of images.
Moreover, we provide some annotations about the flight target in the IR images. These IR sequences were acquired with the cooled mid-wave IR detector CMS6055 through outdoor experiments. It operates in the 3∼5 um mid-wave infrared band and produces image sequences with a resolution of 640 × 512; the single pixel size is 15 um, and the target is no larger than 3 × 3 pixels. Each sequence has about 20 frames. With different noise std values, the denoising effect on image (a) is depicted in Figure 2, Figure 3, Figure 4 and Figure 5; on image (b) in Figure 6, Figure 7, Figure 8 and Figure 9; and on image (c) in Figure 10, Figure 11, Figure 12 and Figure 13. It can be seen that our method has a better denoising effect under backgrounds of different complexity. In high-noise environments, we still achieve a better recovery effect compared with traditional and deep learning methods.
Table 5, Table 6 and Table 7 compare PSNR and SSIM on three image sequences of different complexity. Compared with traditional algorithms, our algorithm achieves better PSNR and SSIM on all image sequences and has good environmental adaptability. Compared with typical deep learning algorithms, our algorithm has slightly lower PSNR and SSIM than FFDNet in low-noise environments and is roughly equivalent to ADNet; in high-noise environments, it achieves higher PSNR and SSIM. The models trained by the deep learning algorithms only cover noise standard deviations of 0∼75, and the available data are limited, making it difficult to train good models for higher noise levels. The algorithm in this paper is unaffected by the amount of data and, as the complexity of the environment increases, still achieves good PSNR and SSIM, demonstrating stronger environmental adaptability.
Finally, we compute the average local SCRG of the target in the IR sequences containing small targets. The results are shown in Table 8. It can be seen that the algorithm in this paper still improves the texture clarity of the image in high Gaussian noise environments. We have obtained a higher local SCRG of infrared small targets, which constructs the foundation for subsequent high-performance target detection. It has further verified the denoising performance of the algorithm.
Through the above results, we have shown that, compared with traditional methods under high-noise environments, our method improves the PSNR, SSIM, and mean local SCRG of small targets on all test images. Additionally, our method adapts to complex backgrounds and high-noise environments, which leads to a better target detection effect for remote sensing.

4. Discussion

We compared the performance of our algorithm on public data sets; it achieved high PSNR and SSIM on various types of images and produced clearer restorations. As the noise increases, our algorithm still maintains good performance compared with deep learning methods. We then removed noise from IR image sequences under complex backgrounds with our method, which effectively improves their PSNR and SSIM, and we achieved the best local SCRG of the small target in IR images. To achieve a better denoising effect, the parameters should be adjusted to match the real environment. Meanwhile, as image complexity increases, our method maintains high performance across all metrics.
  • Compared with traditional template filtering and sparse representation algorithms, our method outperforms them in regard to PSNR and SSIM in real IR images and public data sets under complex backgrounds.
  • Deep learning methods can train an ideal model given large data sets with relatively low noise, where they are slightly better than our method on all metrics. However, they do not obtain a good model under higher noise, whereas our method achieves better average PSNR and SSIM under high noise.
These results have verified that the method with the adaptive GSR model could achieve stable and balanced effects under complex environments.

5. Conclusions

The weighted nuclear norm minimization (WNNM) is a significant extension of the nuclear norm minimization (NNM) model. It utilizes the physical significance of the matrix singular values: each singular value stands for a component of the image information, and a larger eigenvalue corresponds to a principal component and needs to be shrunk less in the optimization process. WNNM treats all eigenvalues unevenly to achieve a better recovery of details. With a proper ordering of the weights, the problem still admits an analytical optimal solution, and we present an iterative algorithm to solve it using a similar group searching method.
1.
The adaptive patch selection fused into WNNM guarantees better sparsity of the original matrix and strengthens its low-rank character, which is helpful in recovering the denoised image.
2.
Considering all the analyses, the improved denoising algorithm with the WNNM model based on adaptive GSR could improve PSNR and SSIM, especially under a high noise background. It has achieved better noise suppression and attained the best adaptability among all the algorithms in regard to IR image denoising.
3.
The algorithm is suitable for infrared image denoising as well as ordinary image denoising. The denoising process constructs a solid foundation for subsequent IR target detection for remote sensing under high-noise environments.
However, the fused computation is complex and cannot yet be realized in a hard real-time process. In the future, we will pursue a faster computation strategy and seek a more universal parameter setting strategy to achieve a better optimal solution to the denoising problem.

Supplementary Materials

The following supporting information can be downloaded at: https://0-www-mdpi-com.brum.beds.ac.uk/article/10.3390/app13095749/s1.

Author Contributions

Conceptualization, Z.Z. (Zhencai Zhu) and J.C.; methodology, J.C.; software, J.C. and L.Q.; validation, J.C., L.Q. and L.D.; formal analysis, Z.Z. (Zhenzhen Zheng); investigation, J.C.; resources, L.Q.; data curation, L.D.; writing—original draft preparation, J.C.; writing—review and editing, J.C.; visualization, J.C.; supervision, Z.Z. (Zhenzhen Zheng); project administration, Z.Z. (Zhencai Zhu); funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

The paper is supported by the project of the Military Commission Science and Technology Commission. The program is “real-time detection technology based on little and dim space group targets”. It is an important project which is devoted to searching the dim and small targets in remote sensing. The project contract is NO.2020-JCJQ-ZD-052-00. It is also supported by the Youth Innovation Promotion Association of the Chinese Academy of Sciences under Grant 2022296.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data sets were obtained from common public image data sets; they are representative.

Acknowledgments

We obtained test data from common data sets and obtained many images with our IR detector.

Conflicts of Interest

We declare that we have no financial or personal relationships with people or organizations that could inappropriately influence our work. There is no professional or other personal interest of any nature or kind in regard to any product, service and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled ‘A Novel Adaptive Group Sparse Representation Model based on Infrared Image Denoising for Remote Sensing Application’.

Abbreviations

The following abbreviations are used in this manuscript:
IR	Infrared
EMD	Empirical mode decomposition
ITF	Improved thresholding function
NNM	Nuclear norm minimization
WNNM	Weighted nuclear norm minimization
CNN	Convolutional neural network
ADNet	Attention-guided denoising convolutional neural network
ECPUB	Energy-cost-per-useful-bit
RPCA	Robust principal component analysis
WGN	White Gaussian noise
SCR	Signal-to-clutter ratio
DBT	Detect before track
TBD	Track before detect
IPI	Infrared patch-image
BM3D	Sparse 3D transform-domain collaborative filtering
PCA	Principal component analysis
PSNR	Peak signal-to-noise ratio
WGS	White Gaussian noise
GSR	Group sparse representation
KNN	K-nearest neighbor
SVD	Singular value decomposition
SSIM	Structural similarity index measurement
SCRG	Signal-to-clutter ratio gain
NSCR	Nonlocally centralized sparse representation
KSVD	K-means singular value decomposition
EPLL	Expected patch log likelihood

References

  1. Mengy, L. Research on Infrared Small Target Detection under Various Complex Background. Master’s Thesis, Jilin University, Changchun, China, 2018. [Google Scholar]
  2. Yu, J.; Wang, Y.; Shen, Y. Noise reduction and edge detection via kernel anisotropic diffusion. Pattern Recogn. Lett. 2008, 29, 1496–1503. [Google Scholar] [CrossRef]
  3. Zou, Q.; Feng, L.; Wang, Y. Analysis and improved preprocessing method of space noise in infrared image. J. Opt. 2007, 28, 426–430. [Google Scholar]
  4. Yang, W.; Zhibin, P. Review of De-noise and enhancement technology for infrared image. Radio Eng. 2016, 46, 1–7. [Google Scholar]
  5. Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627. [Google Scholar] [CrossRef]
  6. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; IEEE: San Diego, CA, USA, 2005; Volume 2, pp. 60–65. [Google Scholar]
  7. Bai, R.R.; Jin, X.L.; Xu, J. Video noise reduction based on motion estimation. Laser Infrared 2014, 4, 443–446. [Google Scholar]
  8. Gao, H.; Xie, Y.C.; Di, H.W. The realtime video denoising algorithm based on spatio temporal joint. Microcomput. Appl. 2011, 30, 36–38. [Google Scholar]
  9. Wang, Z. Study of Sparse Decompression of the Signal Based on MP Algorithm. Master’s Thesis, Lanzhou University, Lanzhou, China, 2010. [Google Scholar]
  10. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  11. Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic decomposition by basis pursuit. SIAM Rev. 2001, 43, 129–159. [Google Scholar] [CrossRef]
  12. Jiang, P. Image Denoising Based Sparse Representation and Dictionary Learning. Master’s Thesis, Xidian University, Xi’an, China, 2011. [Google Scholar]
  13. He, P.L. Study of infrared image denoising algorithm based on sparse representation. Infrared 2018, 39, 27–32. [Google Scholar]
  14. Yang, B.; Li, S. Multifocus image fusion and restoration with sparse representation. IEEE Trans. Instrum. Meas. 2009, 59, 884–892. [Google Scholar] [CrossRef]
  15. Chartrand, R.; Wohlberg, B. A nonconvex ADMM algorithm for group sparsity with sparse groups. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; IEEE: Vancouver, BC, Canada, 2013; pp. 6009–6013. [Google Scholar]
  16. Wu, Z.; Wang, Q.; Jin, J.; Shen, Y. Structure tensor total variation-regularized weighted nuclear norm minimization for hyperspectral image mixed denoising. Signal Process. 2017, 131, 202–219. [Google Scholar] [CrossRef]
  17. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869. [Google Scholar]
  18. Gilboa, G.; Osher, S. Nonlocal linear image regularization and supervised segmentation. Multiscale Model. Simul. 2007, 6, 595–630. [Google Scholar] [CrossRef]
  19. Mohguen, W.; Raïs El’hadi Bekka. New denoising method based on empirical mode decomposition and improved thresholding function. In Proceedings of the 2016 International Conference on Communication, Image and Signal Processing (CCISP 2016), Dubai, United Arab Emirates, 18–20 November 2016; Volume 787, p. 012014. [Google Scholar] [CrossRef]
  20. Suthar, V.; Vakharia, V.; Patel, V.K.; Shah, M. Detection of Compound Faults in Ball Bearings Using Multiscale-SinGAN, Heat Transfer Search Optimization, and Extreme Learning Machine. Machines 2022, 11, 29. [Google Scholar] [CrossRef]
  21. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process 2017, 26, 3142–3155. [Google Scholar] [CrossRef]
  22. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Trans. Image Process 2018, 27, 4608–4622. [Google Scholar] [CrossRef]
  23. Tian, C.; Xu, Y.; Li, Z.; Zuo, W.; Fei, L.; Liu, H. Attention-guided CNN for image denoising. Neural Netw. 2020, 124, 117–129. [Google Scholar] [CrossRef]
  24. Tian, C.; Zheng, M.; Zuo, W.; Zhang, B.; Zhang, Y.; Zhang, D. Multi-stage image denoising with the wavelet transform. Pattern Recogn. 2023, 134, 109050. [Google Scholar] [CrossRef]
  25. Fang, W.; Cui, N.; Chen, W.; Zhang, W.; Chen, Y. A trust-based security system for data collection in smart city. IEEE Trans. Industr. Inform. 2020, 17, 4131–4140. [Google Scholar] [CrossRef]
  26. Fang, W.; Zhu, C.; Yu, F.R.; Wang, K.; Zhang, W. Towards energy-efficient and secure data transmission in ai-enabled software defined industrial networks. IEEE Trans. Industr. Inform. 2021, 18, 4265–4274. [Google Scholar] [CrossRef]
  27. Fang, W.; Zhang, W.; Yang, W.; Li, Z.; Gao, W.; Yang, Y. Trust management-based and energy efficient hierarchical routing protocol in wireless sensor networks. Digit. Commun. Netw. 2021, 7, 470–478. [Google Scholar] [CrossRef]
  28. Zhang, C.; Wang, J.; Wang, X. An efficient de-noising algorithm for infrared image. In Proceedings of the 2005 IEEE International Conference on Information Acquisition, Sydney, NSW, Australia, 4–7 July 2005; IEEE: Sydney, NSW, Australia, 2005; p. 5. [Google Scholar]
  29. Huggins, P.S.; Zucker, S.W. Greedy basis pursuit. IEEE Trans. Signal Process 2007, 55, 3760–3772. [Google Scholar] [CrossRef]
  30. Kreutz-Delgado, K.; Murray, J.F.; Rao, B.D.; Engan, K.; Lee, T.W.; Sejnowski, T.J. Dictionary learning algorithms for sparse representation. Neural Comput. 2003, 15, 349–396. [Google Scholar] [CrossRef] [PubMed]
  31. Elad, M. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing; Springer: Berlin, Germany, 2010; Volume 2. [Google Scholar]
  32. Mallat, S.G.; Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 1993, 41, 3397–3415. [Google Scholar] [CrossRef]
  33. Sun, W.F.; Peng, Y.H. An improved non-local means de-noising approach. Acta Electonica Sin. 2010, 38, 923. [Google Scholar]
  34. Larose, D.T.; Larose, C.D. k-nearest neighbor algorithm. In Discovering Knowledge in Data: An Introduction to Data Mining; Wiley: Hoboken, NJ, USA, 2014; Volume 2, pp. 149–164. [Google Scholar]
  35. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  36. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2012, 22, 700–711. [Google Scholar] [CrossRef]
  37. Mo, Y. Investigation of Image Denoising Based on Structure Prior and Sparse Representation. Master’s Thesis, Xidian University, Xi’an, China, 2019. [Google Scholar]
  38. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; IEEE: Istanbul, Turkey, 2010; pp. 2366–2369. [Google Scholar]
  39. Bing, W. Research on the Detection of Small and Dim Targets in Infrared Images. Ph.D. Thesis, Xi Dian University, Xi’an, China, 2008. [Google Scholar]
  40. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; IEEE: Barcelona, Spain, 2011; pp. 479–486. [Google Scholar]
  41. Dong, W.; Zhang, L.; Shi, G.; Li, X. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 2012, 22, 1620–1630. [Google Scholar] [CrossRef]
  42. Li, Y.; Gui, G.; Cheng, X. From group sparse coding to rank minimization: A novel denoising model for low-level image restoration. Signal Process. 2020, 176, 107655. [Google Scholar] [CrossRef]
Figure 1. The flow chart of adaptive GSR model in IR denoising.
Figure 2. Denoising results with noise std 20 in image (a).
Figure 3. Denoising results with noise std 50 in image (a).
Figure 4. Denoising results with noise std 75 in image (a).
Figure 5. Denoising results with noise std 100 in image (a).
Figure 6. Denoising results with noise std 20 in image (b).
Figure 7. Denoising results with noise std 50 in image (b).
Figure 8. Denoising results with noise std 75 in image (b).
Figure 9. Denoising results with noise std 100 in image (b).
Figure 10. Denoising results with noise std 20 in image (c).
Figure 11. Denoising results with noise std 50 in image (c).
Figure 12. Denoising results with noise std 75 in image (c).
Figure 13. Denoising results with noise std 100 in image (c).
Table 1. Comparison between different patch searching methods in WNNM.

Method | Class | Advantages | Disadvantages
KNN | Unsupervised machine learning method | High precision, insensitive to outliers, no input data assumption | High computational and spatial complexity
Adaptive KNN | Unsupervised machine learning method | High precision, suitable for high-noise environments, more similar structure, high PSNR and SSIM | High computational and spatial complexity
Table 2. Comparison results of PSNR and SSIM with different denoising methods in image (1).

Method | PSNR (std 20) | SSIM (std 20) | PSNR (std 50) | SSIM (std 50) | PSNR (std 75) | SSIM (std 75) | PSNR (std 100) | SSIM (std 100)
Mean | 25.4 | 0.74 | 23.83 | 0.55 | 22.15 | 0.42 | 20.54 | 0.33
Gaussian | 26.22 | 0.76 | 24.19 | 0.56 | 22.33 | 0.43 | 20.57 | 0.34
BM3D | 33.77 | 0.87 | 29.69 | 0.81 | 27.51 | 0.76 | 25.87 | 0.72
EPLL | 33.01 | 0.85 | 28.80 | 0.8 | 26.99 | 0.75 | 24.95 | 0.7
NSCR | 33.86 | 0.87 | 29.56 | 0.82 | 27.27 | 0.77 | 25.31 | 0.74
KSVD | 33.19 | 0.86 | 27.97 | 0.77 | 25.09 | 0.67 | 23.69 | 0.61
FFDNet | 34.03 | 0.87 | 30.31 | 0.83 | 28.31 | 0.79 | N/A | N/A
ADNet | 33.92 | 0.87 | 30.38 | 0.826 | 16.96 | 0.18 | N/A | N/A
GSR-WNNM | 34.07 | 0.87 | 30.25 | 0.82 | 28.26 | 0.79 | 26.86 | 0.76
Proposed | 34.1 | 0.88 | 30.39 | 0.83 | 28.4 | 0.8 | 26.88 | 0.78
Table 3. Comparison results of PSNR and SSIM with different denoising methods in image (2).

Method | PSNR (std 20) | SSIM (std 20) | PSNR (std 50) | SSIM (std 50) | PSNR (std 75) | SSIM (std 75) | PSNR (std 100) | SSIM (std 100)
Mean | 22.59 | 0.76 | 21.69 | 0.63 | 20.66 | 0.53 | 19.49 | 0.45
Gaussian | 23.49 | 0.80 | 22.24 | 0.65 | 20.93 | 0.55 | 19.63 | 0.46
BM3D | 30.35 | 0.92 | 25.82 | 0.82 | 23.91 | 0.75 | 22.52 | 0.69
EPLL | 30.48 | 0.93 | 25.76 | 0.8 | 23.72 | 0.73 | 22.12 | 0.67
NSCR | 30.62 | 0.92 | 25.65 | 0.82 | 23.54 | 0.76 | 22.21 | 0.71
KSVD | 29.98 | 0.91 | 25.31 | 0.79 | 22.90 | 0.72 | 20.88 | 0.64
FFDNet | 31.19 | 0.83 | 27.31 | 0.72 | 24.26 | 0.66 | N/A | N/A
ADNet | 31.29 | 0.833 | 26.89 | 0.721 | 17.27 | 0.205 | N/A | N/A
GSR-WNNM | 30.975 | 0.923 | 26.226 | 0.829 | 24.284 | 0.775 | 22.858 | 0.724
Proposed | 31.078 | 0.926 | 26.241 | 0.830 | 24.350 | 0.778 | 22.897 | 0.731
Table 4. Comparison results of PSNR and SSIM with different denoising methods in image (3).

Method | PSNR (std 20) | SSIM (std 20) | PSNR (std 50) | SSIM (std 50) | PSNR (std 75) | SSIM (std 75) | PSNR (std 100) | SSIM (std 100)
Mean | 24.723 | 0.710 | 23.360 | 0.575 | 21.868 | 0.468 | 20.375 | 0.386
Gaussian | 25.596 | 0.747 | 23.817 | 0.608 | 22.068 | 0.500 | 20.471 | 0.412
BM3D | 31.412 | 0.891 | 26.740 | 0.767 | 24.818 | 0.694 | 23.474 | 0.637
EPLL | 30.861 | 0.863 | 26.323 | 0.745 | 24.427 | 0.690 | 23.229 | 0.649
NSCR | 30.837 | 0.871 | 25.596 | 0.708 | 23.257 | 0.605 | 21.918 | 0.535
KSVD | 31.327 | 0.884 | 26.336 | 0.751 | 24.364 | 0.683 | 23.170 | 0.637
FFDNet | 31.31 | 0.933 | 26.8 | 0.85 | 24.9 | 0.70 | N/A | N/A
ADNet | 31.16 | 0.880 | 26.769 | 0.769 | 16.755 | 0.342 | N/A | N/A
GSR-WNNM | 31.509 | 0.889 | 26.926 | 0.776 | 24.91 | 0.703 | 23.503 | 0.651
Proposed | 31.669 | 0.895 | 26.937 | 0.775 | 24.916 | 0.705 | 23.639 | 0.653
Table 5. Comparison results of PSNR and SSIM with different denoising methods in Image (a).

Method | PSNR (std 20) | SSIM (std 20) | PSNR (std 50) | SSIM (std 50) | PSNR (std 75) | SSIM (std 75) | PSNR (std 100) | SSIM (std 100)
Mean | 32.134 | 0.867 | 27.22 | 0.541 | 24.21 | 0.353 | 21.825 | 0.239
Gaussian | 32.44 | 0.86 | 27.06 | 0.54 | 23.91 | 0.35 | 21.49 | 0.24
BM3D | 42.645 | 0.977 | 38.567 | 0.962 | 36.277 | 0.942 | 34.314 | 0.917
EPLL | 41.563 | 0.966 | 35.11 | 0.958 | 32.301 | 0.899 | 30.653 | 0.819
NSCR | 38.690 | 0.921 | 33.414 | 0.848 | 30.516 | 0.778 | 28.283 | 0.711
KSVD | 42.970 | 0.982 | 39.365 | 0.977 | 37.456 | 0.969 | 36.063 | 0.959
FFDNet | 44.59 | 0.983 | 40.59 | 0.974 | 38.01 | 0.964 | N/A | N/A
ADNet | 42.35 | 0.975 | 39.51 | 0.964 | 16.48 | 0.046 | N/A | N/A
GSR-WNNM | 41.76 | 0.974 | 38.158 | 0.956 | 36.678 | 0.943 | 35.019 | 0.915
Proposed | 42.451 | 0.975 | 38.938 | 0.963 | 38.04 | 0.971 | 37.121 | 0.971
Table 6. Comparison results of PSNR and SSIM with different denoising methods in Image (b).

Method | PSNR (std 20) | SSIM (std 20) | PSNR (std 50) | SSIM (std 50) | PSNR (std 75) | SSIM (std 75) | PSNR (std 100) | SSIM (std 100)
Mean | 32.032 | 0.864 | 27.151 | 0.541 | 24.179 | 0.357 | 21.88 | 0.243
Gaussian | 32.40 | 0.86 | 27.00 | 0.54 | 23.90 | 0.36 | 21.56 | 0.24
BM3D | 42.197 | 0.977 | 37.935 | 0.958 | 35.577 | 0.935 | 33.777 | 0.908
EPLL | 40.707 | 0.96 | 34.501 | 0.915 | 32.014 | 0.89 | 30.417 | 0.84
NSCR | 42.519 | 0.982 | 38.412 | 0.970 | 36.657 | 0.960 | 35.060 | 0.949
KSVD | 37.955 | 0.917 | 33.098 | 0.843 | 30.310 | 0.775 | 28.032 | 0.693
FFDNet | 43.62 | 0.983 | 39.47 | 0.969 | 32.21 | 0.954 | N/A | N/A
ADNet | 41.77 | 0.976 | 38.4 | 0.949 | 16.5 | 0.049 | N/A | N/A
GSR-WNNM | 41.581 | 0.974 | 37.296 | 0.949 | 35.916 | 0.931 | 34.549 | 0.907
Proposed | 41.987 | 0.975 | 38.672 | 0.959 | 37.268 | 0.962 | 36.382 | 0.961
Table 7. Comparison results of PSNR and SSIM with different denoising methods in Image (c).

Method | PSNR (std 20) | SSIM (std 20) | PSNR (std 50) | SSIM (std 50) | PSNR (std 75) | SSIM (std 75) | PSNR (std 100) | SSIM (std 100)
Mean | 31.399 | 0.818 | 27.029 | 0.540 | 24.039 | 0.366 | 21.824 | 0.255
Gaussian | 31.89 | 0.83 | 26.90 | 0.54 | 23.82 | 0.37 | 21.58 | 0.26
BM3D | 37.484 | 0.935 | 33.369 | 0.875 | 31.651 | 0.841 | 30.404 | 0.812
EPLL | 36.648 | 0.9 | 31.999 | 0.855 | 30.177 | 0.82 | 28.680 | 0.7
NSCR | 37.150 | 0.932 | 33.297 | 0.881 | 31.938 | 0.862 | 30.997 | 0.849
KSVD | 34.994 | 0.869 | 31.101 | 0.777 | 29.114 | 0.715 | 27.153 | 0.650
FFDNet | 38.2 | 0.941 | 34.2 | 0.888 | 32.11 | 0.861 | N/A | N/A
ADNet | 37.32 | 0.932 | 33.65 | 0.875 | 16.52 | 0.062 | N/A | N/A
GSR-WNNM | 37.027 | 0.925 | 33.249 | 0.866 | 31.861 | 0.842 | 30.859 | 0.816
Proposed | 37.15 | 0.928 | 33.388 | 0.872 | 32.156 | 0.864 | 31.416 | 0.858
Table 8. Comparison results of mean local SCRG of the small target with different denoising methods.

Method | Image (a) std 20 | std 50 | std 75 | std 100 | Image (b) std 20 | std 50 | std 75 | std 100 | Image (c) std 20 | std 50 | std 75 | std 100
Mean | 1.16 | 1.31 | 1.24 | 1.72 | 1.32 | 1.46 | 1.79 | 2.32 | 2.48 | 2.39 | 2.33 | 2.41
Gaussian | 1.08 | 1.38 | 1.45 | 1.65 | 1.37 | 1.39 | 1.63 | 1.91 | 2.22 | 5.08 | 2.81 | 2.84
BM3D | 2.4 | 2.33 | 2.42 | 2.63 | 2.49 | 2.80 | 2.89 | 2.95 | 1.17 | 4.24 | 4.37 | 4.36
EPLL | 1.52 | 1.74 | 1.73 | 1.91 | 2.19 | 2.34 | 2.39 | 2.47 | 1.11 | 1.04 | 3.52 | 5.43
NSCR | 2.44 | 2.61 | 2.63 | 2.73 | 2.49 | 2.41 | 2.52 | 2.78 | 1.18 | 3.66 | 6.91 | 7.19
KSVD | 1.59 | 1.71 | 2.33 | 2.43 | 1.41 | 2.05 | 2.66 | 2.84 | 1.29 | 1.84 | 4.01 | 5.46
FFDNet | 2.33 | 2.66 | 2.78 | N/A | 2.41 | 2.51 | 2.70 | N/A | 1.11 | 2.83 | 2.64 | N/A
ADNet | 2.42 | 2.64 | 2.88 | N/A | 2.49 | 2.71 | 2.76 | N/A | 1.11 | 1.87 | 2.89 | N/A
GSR-WNNM | 2.63 | 2.74 | 2.86 | 2.88 | 1.65 | 2.49 | 2.585 | 2.59 | 1.05 | 1.16 | 6.81 | 6.99
Ours | 2.44 | 2.81 | 2.87 | 2.89 | 2.57 | 2.71 | 2.76 | 3.69 | 1.22 | 4.15 | 6.93 | 7.41
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
