Article

A Deconvolutional Deblurring Algorithm Based on Dual-Channel Images

1 Aerospace Information Research Institute, Chinese Academy of Sciences, No. 9 Dengzhuang South Road, Haidian District, Beijing 100094, China
2 School of Optoelectronics, University of Chinese Academy of Sciences, No. 19 (A) Yuquan Road, Shijingshan District, Beijing 100039, China
3 Key Laboratory of Computational Optical Imaging Technology, Chinese Academy of Sciences, No. 9 Dengzhuang South Road, Haidian District, Beijing 100094, China
4 School of Instrumentation and Optoelectronic Engineering, Beijing University of Aeronautics and Astronautics, Beijing 100049, China
* Author to whom correspondence should be addressed.
Submission received: 30 March 2022 / Revised: 9 May 2022 / Accepted: 10 May 2022 / Published: 11 May 2022
(This article belongs to the Collection Optical Design and Engineering)

Abstract:
Aiming at the motion blur restoration of large-scale dual-channel space-variant images, this paper proposes a dual-channel image deblurring method based on the idea of block aggregation, built on a study of the imaging principles and existing algorithms. The study first analyzes the model of dual-channel space-variant imaging, reconstructs the kernel estimation process using the side prior information derived from the correlation of the two-channel images, and then uses a clustering algorithm to classify the kernels and restore the image. For the kernel estimation process, two kinds of regularization terms are proposed: one based on the correlation between the two images and the other based on the information provided by the other channel's input. In the image restoration process, a mean-shift clustering algorithm is used to calculate the weights of the block kernels, and the final restored image is reconstructed according to these weights. The experiments show that the restoration effect of this algorithm is better than that of the compared algorithms.

1. Introduction

During the imaging process, atmospheric disturbance, camera aberrations, and relative motion can all blur images and degrade image quality. To eliminate these adverse effects, current mainstream deblurring algorithms mainly use deconvolution to restore a blurred image. Common single-image deblurring algorithms estimate the blur kernel from the information provided by the blurred image itself, and the prior information of such algorithms mostly consists of statistical results on general image features, such as the intensity distribution, transform-domain features, etc. Fergus et al. proposed a blind deblurring algorithm based on the statistics of image gradients in 2006 [1], and Levin et al. proposed a method based on sparse gradients in 2007 [2]. A more recent single-image blind deblurring algorithm, based on the variational Dirichlet distribution and proposed by Zhou et al. in 2015, also achieves a good restoration effect [3]. Most current single-image blind deconvolution methods follow the basic ideas of the two algorithms in [1,2], which use the sparse statistical features of the image's intensity distribution for kernel estimation. These single-image blind deblurring algorithms perform well (for example, in recovering shape and detail) when the features of the image to be recovered are close to the selected prior; for instance, when the image intensity distribution and the statistical prior follow the same heavy-tailed distribution [1]. However, the larger the difference between the distributions, the worse the recovery. Obtaining more targeted image information to improve the deblurring result is therefore the main research direction for blind deconvolution algorithms at present.
With the advancement of technology, current imaging devices such as smartphones and civilian drones are usually equipped with multiple lenses, to obtain images with different optical characteristics. When images of the same target are captured through multiple lenses, the multiple input channels provide more information than a single image does. Researchers have therefore explored using the prior side information provided by additional images to optimize deblurring [4,5,6,7,8,9,10,11,12]. In 2003, Liu and El Gamal studied deblurring using images with different exposure times that share time-series characteristics [4]: the clear area of a short-exposure image replaces the blurred area at the same position in the long-exposure image, yielding a clear image with an improved dynamic range, which is the theoretical origin of the commonly used HDR algorithm [5]. In 2005, Rav-Acha and Peleg proposed using the spatial correlation of multi-frame blurred images to estimate the blur kernel [6]; Yuan et al. used long- and short-exposure images of the same scene to perform blind restoration of multi-frame images in 2007 [7]. The Yuan method had higher computational efficiency and better restoration, but the acquisition process is complicated, since images with different exposure parameters must be obtained for the same target, which imposes stricter requirements on the imaging conditions. In 1993, Katsaggelos et al. proposed a framework for the multi-channel restoration of degraded signals [8]. In 2005, Sroubek et al. proposed a multi-frame image restoration technique [9] that used the inherent correlation of multi-frame images as prior information, which improved the restoration effect; the method was further optimized in 2007 [10] and 2012 [11]. In [11], subspace theory was used to improve computational efficiency, but because the solution model is simple (total variation (TV) is applied as the regularization) and the correlation of the whole image is not fully exploited, it does not perform well on large-format motion blur with spatial variation.
To solve the above problems, this paper proposes a restoration method, named the dual-channel block-cluster deblurring algorithm (DCBCD), for space-variant blurred image restoration, which combines the characteristics of traditional multi-channel restoration algorithms and single-channel blind deconvolution algorithms. The main content of this paper has three parts: (1) an improved model for dual-channel deblurring, named DCBCD, is proposed; (2) the relative side prior information (RSPI) and the weight-balanced non-dimensional Gaussianity measure (WG-NGM), which are used in kernel estimation, are introduced; (3) a clustering-based algorithm is proposed to restore space-variant blurred images.

2. Basic Model of DCBCD

2.1. The Dual-Channel Image Model

During imaging with dual input channels, it is assumed that each channel's image is blurred by its own blur kernel. The blur model of a two-channel image can usually be written as:
$$Y_1^{Total} = X \otimes H_1^{Total} + n_1, \qquad Y_2^{Total} = X \otimes H_2^{Total} + n_2 \tag{1}$$
where "$\otimes$" denotes the discrete convolution operator, $Y_1^{Total}$ and $Y_2^{Total}$ are the two-channel input images, $X$ is the ideal image, $H_1^{Total}$ and $H_2^{Total}$ are the different blur kernels, and $n_1$ and $n_2$ denote the additive noise incurred in the imaging processes of channel-1 and channel-2, respectively; both are assumed to be zero-mean Gaussian noise. Taking the recovery of the channel-1 image as an example, the deconvolution model under the dual-channel framework can be written in the following form:
$$H_1^{Total} = \arg\min_{H_1^{Total} \in \mathcal{H}_1} \left\| H_{Iter}^{Total} \otimes X_1^{Total} - Y_1^{Total} \right\|_2^2 \quad (\text{Estimate Kernel})$$
$$X_1 = \arg\min_{X^{Total} \in \mathcal{X}} \left\| H_1^{Total} \otimes X_{Iter}^{Total} - Y_1^{Total} \right\|_2^2 \quad (\text{Estimate Original Image}) \tag{2}$$
where $\|\cdot\|_2$ is the $\ell_2$-norm operator, and $\mathcal{H}_1$ and $\mathcal{X}$ denote the pools of allowed $H_1^{Total}$ and $X_{Iter}^{Total}$. $X_1$ is the final restored image of channel-1, $H_{Iter}^{Total}$ is the iter-kernel in the channel-1 process, and $X_{Iter}^{Total}$ is the iter-image in the channel-1 process. The same formulation applies to channel-2.
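As an illustrative sketch only (not the authors' code), the degradation model of Equation (1) can be simulated with standard convolution and additive zero-mean Gaussian noise; the linear motion kernels, their sizes and angles, and the noise level below are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_kernel(length, angle_deg, size):
    """Simple linear motion-blur PSF (an assumption; any normalized kernel works)."""
    k = np.zeros((size, size))
    c = size // 2
    t = np.deg2rad(angle_deg)
    for s in np.linspace(-length / 2.0, length / 2.0, 4 * size):
        r = int(round(c + s * np.sin(t)))
        q = int(round(c + s * np.cos(t)))
        if 0 <= r < size and 0 <= q < size:
            k[r, q] += 1.0
    return k / k.sum()

def blur_channel(x, kernel, sigma_n, rng):
    """One channel of Eq. (1): Y = X (conv) H + n, with zero-mean Gaussian noise n."""
    return fftconvolve(x, kernel, mode="same") + rng.normal(0.0, sigma_n, size=x.shape)

rng = np.random.default_rng(0)
x = rng.random((128, 128))               # stand-in for the ideal image X
h1 = motion_kernel(9, 15, 25)            # H1_Total (hypothetical channel-1 kernel)
h2 = motion_kernel(9, -60, 25)           # H2_Total (hypothetical channel-2 kernel)
y1 = blur_channel(x, h1, 0.01, rng)      # channel-1 observation Y1_Total
y2 = blur_channel(x, h2, 0.01, rng)      # channel-2 observation Y2_Total
```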

2.2. The Structure of DCBCD

Usually, Equation (2) is an ill-posed problem that cannot be solved directly. Moreover, if the image is a large-scale space-variant (or even space-invariant) image, restoration by deriving an explicit inverse matrix gives very poor results. Therefore, this paper adopts a split-block strategy to obtain a kernel for each selected block of the image, in order to handle space-variant blurred images. The proposed method then classifies similar kernels using a clustering algorithm and assigns different weights to the clusters of kernels, which determine the weights of the pixels in each recovered image block. Finally, each pixel value of the final deblurred image is the normalized sum of the pixel values at the corresponding position in each image block, weighted accordingly.
As shown in Figure 1, the work of Equation (2) was divided into three parts in this paper: (1) preprocess; (2) kernel estimation; (3) image restoration.
  • Step 1: Preprocessing: De-noising, blocking, and building a pyramid of the input images
De-noising: In this paper, image de-noising was performed as preprocessing using the widely used BM3D algorithm [12]. Balancing image sharpness against the signal-to-noise ratio after de-noising usually depends on empirical choices. Because the blur kernel estimation in this paper is mostly calculated in the gradient domain, where noise damages the sparsity of the gradient image, the de-noising parameters were set to favor a stronger de-noising effect rather than the protection of image texture.
Blocking: In the blocking process, the block size and the forward step must be chosen according to the size of the image. The choice of block size and step length is usually based on the nature of the image itself. If the spatial variation of the blur kernel is gradual (neighboring kernels are similar), a relatively large step size can be chosen to reduce the number of blocks and accelerate the computation. If the spatial variation is complex, smaller blocks can be chosen to improve the estimation accuracy [13].
Building the pyramid: In common image deblurring algorithms [2], an image pyramid is created by down-sampling during kernel estimation. The kernel is estimated layer by layer, and the initial value for the next layer is obtained by up-sampling the result of the previous layer. The kernel estimation algorithm is the same for every layer of every block; therefore, Section 3 presents it for an arbitrary layer of an arbitrary pair of image blocks (a minimal sketch of this coarse-to-fine loop is given at the end of this section).
  • Step 2: Block layer kernel estimation (introduced in Section 3)
Section 3 introduces the kernel estimation method for any block layer of the image pyramid. For space-invariant blurred images, where the whole image shares the same blur pattern, calculating the RSPI block by block is equivalent to calculating the RSPI of the whole image; the specific reasons are described in the RSPI section below. The split-block step can therefore be omitted when dealing with space-invariant images: the blur kernel of the whole image is estimated directly by building an image pyramid from the original image and proceeding layer by layer.
With the dual-channel blurred images as input, the RSPI between the corresponding image blocks is calculated, using the total information of the selected blocks as prior information. The blur kernel is then estimated and the image restored block by block.
This paper introduced the RSPI and WG-NGM to improve the accuracy of the block layer kernel estimation in the processes of block layer iter-kernel and block layer iter-image estimation.
The estimated kernel can then be used in a deconvolution operation to obtain the deblurred image. The image deblurring model in the spatial domain is obtained from Equation (2):
$$X_1 = \arg\min_{X_1 \in \mathcal{X}_{opt}} \left\| H_1^{Total} \otimes X_{iter}^{Total} - Y_1^{Total} \right\|_2^2 + \gamma_X R(X_{iter}^{Total}) \tag{3}$$
where $\gamma_X$ is the (constant) regularization coefficient for the iter-restored image, $\mathcal{X}_{opt}$ is the pool of allowed $X_1$, $X_{iter}^{Total}$ is the iter-image in the process, and $R(X_{iter}^{Total})$ is the regularization term for image restoration.
  • Step 3: Blurred block image restoration (introduced in Section 4)
In Section 4, this paper focuses on a mean-shift clustering method to assemble the deblurred image blocks.
In this paper, we use the clustering method to determine the pixel values of the overlapping regions when stitching the image blocks together. Depending on the weights of the kernels, the weights of the pixels in the corresponding image blocks differ. The final pixel value of each overlapping part of the deblurred image is determined by a weighted average.
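The sketch below (not the authors' implementation) illustrates the coarse-to-fine loop of Steps 1 and 2: the de-noised block is down-sampled into a pyramid, and the kernel estimated on a coarse layer is up-sampled as the initial value for the next finer layer. The down-sampling factor, minimum layer size, and the placeholder `estimate_block_kernel` (standing in for the Section 3 procedure) are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def build_pyramid(img, factor=0.5, min_size=32):
    """Down-sample a (de-noised) block image into a pyramid, coarsest layer first."""
    pyramid = [img]
    while min(pyramid[-1].shape) * factor >= min_size:
        pyramid.append(zoom(pyramid[-1], factor, order=1))
    return pyramid[::-1]

def coarse_to_fine_kernel(y1_block, y2_block, estimate_block_kernel):
    """Estimate a block kernel layer by layer; each layer is initialized by
    up-sampling the previous (coarser) result, as in common deblurring pyramids."""
    p1 = build_pyramid(y1_block)
    p2 = build_pyramid(y2_block)
    k = np.full((3, 3), 1.0 / 9.0)                    # coarse initial kernel (assumption)
    for i, (y1_l, y2_l) in enumerate(zip(p1, p2)):
        k = estimate_block_kernel(y1_l, y2_l, k)      # block-layer estimation (Section 3)
        if i < len(p1) - 1:                           # up-sample as next layer's initial value
            k = zoom(k, 2.0, order=1)
            k = np.clip(k, 0.0, None)
            k /= k.sum()
    return k
```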

3. The Block Layer Kernel Estimation

The main improvements of the proposed method in this section, which deals with the space-variant blurred images, are the modified regularization terms in the kernel estimation process.
Specifically, in the kernel estimation algorithm for each layer of the image block pyramid, this paper makes the following contributions:
  •  In the block layer iter-kernel estimation process, the RSPI of each pair of blocks is used to calculate the correlation coefficient, so that the integrity of the total information of the selected block is preserved to the greatest extent and interference from information belonging to other parts of the global image is avoided.
  •  In the block layer iter-image estimation process, a modified regularization term is proposed that introduces the dual-channel image information described in this paper. With this improvement, the accuracy of the block-layer kernel estimation is increased.
For the block layer kernel estimation problem, the estimation model from Equation (2) can be written in the following iterative form:
$$H = \arg\min_{X_{iter},\, h_{iter}} \left\| h_{iter} \otimes X_{iter} - Y_L \right\|_2^2 + \lambda_x T(X_{iter}) + \lambda_h F(h_{iter}) \tag{4}$$
where $H$ is the result of the kernel estimation in this block layer, $Y_L$ is the blurred image of this block layer, and the problem is solved by alternately minimizing over the block layer iterative image $X_{iter}$ and the block layer iterative kernel $h_{iter}$. $\lambda_X$ and $\lambda_h$ are the regularization coefficients for the iterative image estimation and kernel estimation in each block layer, and $T(X_{iter})$ and $F(h_{iter})$ are the regularization terms of the block layer iterative image estimation and layer-kernel estimation models, respectively. These coefficients are usually chosen empirically, depending on the purpose of the regularization term and the characteristics of the image itself.
The regularization term $F(h_{iter})$ in the block layer iter-kernel estimation process aims to improve the reliability of the block layer iter-kernel. Regularization terms such as the TV operator are usually chosen to emphasize the gradient part of the blurred image. However, such features are derived from the gradient of the blurred image itself and lack reference side information. If the dual-channel blurred input images are weakly coprime, the two images are more strongly correlated, and more reliable information can be preserved by using the RSPI as the regularization term. In Section 3.1, this paper therefore proposes a block layer iter-kernel estimation model with an RSPI regularization term to optimize the block layer iter-kernel estimation process.
The design of the regularization term $T(X_{iter})$ in the block layer iter-image estimation process usually needs to consider the features of the gradient image of the block layer iter-image. This paper introduces side information from the corresponding image of the other input channel to modify the NGM regularization term used in [14]. The modified regularization term, WG-NGM, receives information from both input channels, making it more reliable than the NGM.
Therefore, through the improvements above, and by continuously iterating alternately to convergence, the optimal solution of the block layer kernel H can be obtained.
Finally, with the repetition of the up-sample and iterative solving process, the kernels of the blurred image blocks could be estimated.

3.1. The RSPI of Total Images

Generally, when the imaging results of the two channels can be considered weakly coprime, the morphological difference between the kernels is a constant factor that changes their direction and size without changing their basic topology [9]. Therefore, assuming that the input two-channel images are weakly coprime, the ideal model of the de-noised blurred images can be obtained through the de-noising operation:
$$Denoise(Y_1^{Total}) = D(Y_1^{Total}) = X \otimes a_1 \hat{h}, \qquad Denoise(Y_2^{Total}) = D(Y_2^{Total}) = X \otimes a_2 \hat{h},$$
$$H_1^{Total} = a_1 \hat{h}, \qquad H_2^{Total} = a_2 \hat{h} \tag{5}$$
where $\hat{h}$ is the ideal shape of the kernel, $a_1$ is the deformation coefficient of kernel-1, $a_2$ is the deformation coefficient of kernel-2, and $Denoise(\cdot) = D(\cdot)$ is the de-noising function.
The two equations in (5) can be combined to obtain:
$$\frac{1}{a_1}\, D(Y_1^{Total}) \otimes \hat{h}^{-1} = X = \frac{1}{a_2}\, D(Y_2^{Total}) \otimes \hat{h}^{-1} \tag{6}$$
The final result is:
$$\left( D(Y_1^{Total}) - D(Y_2^{Total}) \right) \otimes \hat{h}^{-1} = 0 \tag{7}$$
In quadratic form:
$$R(\hat{h}) = \left\| \left( D(Y_1^{Total}) - D(Y_2^{Total}) \right) \otimes \hat{h} \right\|_2^2 \tag{8}$$
Even using the quadratic term without de-noising, $\left\| (Y_1^{Total} - Y_2^{Total}) \otimes \hat{h} \right\|_2^2$, as a regularization term gives a better kernel estimation result than the kernel's $\ell_2$-norm term $\|h_{temp}\|_2$ [9]. Moreover, when the correlation of the whole image is calculated under a lower-noise condition, the result is more accurate [11]. Considering the convenience of using the sparsity of the gradient image to calculate the correlation coefficient, the correlation coefficient can finally be written as follows:
$$R = \left[ \nabla D(Y_1^{Total}),\, \nabla D(Y_2^{Total}) \right]^{T} \left[ \nabla D(Y_1^{Total}),\, \nabla D(Y_2^{Total}) \right] \tag{9}$$
where "$\nabla$" signifies the gradient operator, and $R$ is the RSPI of the whole image.
In particular, it should be noted that if the blurred image is a space-invariant blurred image, the split-block step in the pre-processing is not required. Except for Group 4, the experimental part of this paper tests the restoration of space-invariant images.
Furthermore, the problem faced by the algorithm in [11] is that the correlation coefficient contains only part of the information of the whole image, the so-called "effective part", and thus inevitably loses global information [15]. In the kernel estimation process, if the kernel of a space-invariant blurred image is calculated by the block method, there is no need to consider changes in kernel morphology across overlapping regions, and simple equal-sized blocks can be used directly. However, when the changes in overlapping regions are not considered, the block operation introduces edges that do not actually exist; these spurious edges destroy the smoothness of the image and lead to unexpected ringing, which in turn degrades the recovered image. As such, if the blurred image is a space-invariant blurred image, a better restoration is obtained by using the whole image to calculate the RSPI.
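As one plausible discretization of Equation (9) for a block (an assumption, since the paper does not spell out the matrix construction), the RSPI can be realized by stacking the convolution-matrix columns of the two de-noised gradient blocks and forming their Gram matrix, so that the resulting $R_{block}$ has the kernel-sized dimensions required by the regularizer of Section 3.2.

```python
import numpy as np

def grad_mag(img):
    """Finite-difference gradient magnitude of a de-noised block (the paper's exact
    gradient operator is not specified; this is a stand-in)."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def conv_matrix_cols(img, ksize):
    """Columns of the (valid) convolution matrix of `img` for a ksize x ksize kernel:
    the column for kernel offset (i, j) is the shifted image patch grid, vectorized.
    Written for clarity, not memory efficiency."""
    rows = img.shape[0] - ksize + 1
    cols = img.shape[1] - ksize + 1
    out = []
    for i in range(ksize):
        for j in range(ksize):
            out.append(img[i:i + rows, j:j + cols].ravel())
    return np.stack(out, axis=1)              # shape: (rows*cols, ksize*ksize)

def rspi_block(y1_block_denoised, y2_block_denoised, ksize):
    """One reading of R_block: Gram matrix of the stacked gradient convolution
    matrices of the two channels' de-noised blocks."""
    a1 = conv_matrix_cols(grad_mag(y1_block_denoised), ksize)
    a2 = conv_matrix_cols(grad_mag(y2_block_denoised), ksize)
    A = np.vstack([a1, a2])
    return A.T @ A                            # (ksize^2, ksize^2) correlation matrix
```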

3.2. Block Layer Kernel Estimation: Applied RSPI and Weight-Balanced NGM (WG-NGM)

The kernel estimation problem is a non-convex, underdetermined problem, so gradient-type optimization algorithms often cannot obtain good results [16]. At present, the mainstream methods are based on the expectation-maximization (EM) idea [17], which estimates the iterative kernel and the iterative image alternately. Based on this framework, this paper proposes two regularization terms, which use the RSPI and the image correlation to improve the block layer iter-kernel and block layer iter-image estimation in space-variant image deblurring. The new method with the two regularization terms was found to enhance the accuracy of kernel estimation, as shown in Table 1 and Figure 2. The specific kernel estimation processes and methods are described below.
As previously mentioned, the proposed method builds an image pyramid by down-sampling and uses the same method to estimate the blur kernel at each layer.
As Figure 2 shows, the blur kernel estimation model Equation (4) for block layer images is divided into Equations (16) and (11), according to iterative image estimation and iterative kernel estimation, respectively. In addition, the iterative image and iterative kernel are estimated alternately using the EM method.
The block layer kernel estimation model in the gradient domain is obtained from Equation (2) as:
$$H_1^{Total} = \arg\min_{H_1^{Total} \in \mathcal{H}} \left\| h_{iter}^{Total} \otimes \nabla X^{Total} - \nabla Y^{Total} \right\|_2^2 + \lambda_h \phi(h_{iter}^{Total}) \tag{10}$$
where $\lambda_h$ is the regularization coefficient for kernel estimation, $\mathcal{H}$ denotes the pool of allowed block layer kernels $H_1^{Total}$, $\phi(h_{iter}^{Total})$ is the regularization term for kernel estimation (playing the role of $F(h_{iter})$), and $h_{iter}^{Total}$ is the temporary iter-kernel in the process (the same as $H^{Total}$). Owing to its higher computational efficiency [18], the kernel estimation model of Equation (10) is solved in the sparse gradient domain.

3.2.1. Block Layer Iter-Kernel Estimation with RSPI

First, the correlation coefficient of the two-channel images for this block layer is calculated as the RSPI: $R_{block} = block\!\left( \left[ \nabla D(Y_1^{Total}),\, \nabla D(Y_2^{Total}) \right]^T \left[ \nabla D(Y_1^{Total}),\, \nabla D(Y_2^{Total}) \right] \right)$, where the function $block(\cdot)$ restricts the computation to the area of the block within the whole image. The RSPI is then applied to improve the block layer iter-kernel estimation model:
$$h_1^{block\,layer\,iter} = \arg\min_{h^{block\,layer\,iter} \in \mathcal{H}_{iter}} \left( \left\| h_{temp} \otimes \nabla X_1^{block\,layer\,iter} - \nabla Y_{1,block\,ker}^{iter\,temp} \right\|_2^2 + \lambda_h\, h_{temp}^{T} R_{block}\, h_{temp} \right)$$
$$s.t.\ \sum h_{temp}^{pixel} = 1 \ \text{and}\ 1 > h_{temp}^{pixel} \geq 0 \tag{11}$$
where the subscript 1 signifies channel-1, $h_1^{block\,layer\,iter}$ represents the block layer iter-kernel, $\mathcal{H}_{iter}$ denotes the pool of allowed block layer kernels $h_1^{block\,layer\,iter}$, $h_{temp}$ represents the temporary value of $h_1^{block\,layer\,iter}$ during the iterative kernel estimation, $\nabla X_1^{block\,layer\,iter}$ is the block layer gradient of the ideal image, $\nabla Y_{1,block\,ker}^{iter\,temp}$ is the block layer gradient of the temporary iteration blurred image in the process, and $h_{temp}^{pixel}$ is each pixel of $h_{temp}$.
Expanding this expression yields:
$$h_1^{block\,layer\,iter} = \arg\min_{h \in \mathcal{H}} \left( h_{temp}^{T} \left( \nabla X_1^{block\,layer\,iter\,T} \nabla X_1^{block\,layer\,iter} + \lambda_h R_{block} \right) h_{temp} - h_{temp}^{T}\, \nabla X_1^{block\,layer\,iter\,T}\, \nabla Y_{1,block\,ker}^{iter\,temp} \right)$$
$$s.t.\ \sum h_{temp}^{pixel} = 1 \ \text{and}\ h_{temp}^{pixel} \geq 0 \tag{12}$$
It is easy to see that there are many ways to solve this problem, as [16] shows. However, the usual maximum a posteriori (MAP) estimation cannot accommodate both the normalization and the non-negativity constraints of the kernel in a single objective function. Therefore, based on statistical theory, this paper fits the real distribution of the kernel with a probability distribution satisfying a multivariate Dirichlet distribution, found by variational estimation. The variational Dirichlet (VD) method [3] was used to solve Equation (12), and the gradient projection algorithm [3] was used for the resulting subproblem. This method typically terminates the iteration based on a minimum step size; following common practice, the iteration stops after a maximum of 20 iterations or when the relative change of the objective function is less than $10^{-5}$. For fitting the kernel, the Dirichlet function is faster and more accurate than other choices, such as gradient-type methods or other fitting functions (e.g., a multivariate Gaussian mixture) [3]. With the result obtained by the VD method, the block layer iter-kernel is obtained.
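For intuition only, the sketch below replaces the variational Dirichlet solver of [3] with a plain projected-gradient step that enforces the same constraints (kernel pixels sum to one and are non-negative) and the same stopping rule (at most 20 iterations or a relative objective change below $10^{-5}$); the quadratic form `Q` and linear term `b` are assumed to have been assembled from Equation (12).

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (sum = 1, entries >= 0)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1.0) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def kernel_subproblem(Q, b, ksize, max_iter=20, tol=1e-5):
    """Minimize the standard quadratic form 0.5*h^T Q h - b^T h over the simplex,
    with Q = grad(X)^T grad(X) + lambda_h * R_block and b = grad(X)^T grad(Y).
    A stand-in for the VD / gradient-projection solver used in the paper."""
    n = ksize * ksize
    h = np.full(n, 1.0 / n)                       # uniform initial kernel
    step = 1.0 / np.linalg.norm(Q, 2)             # 1 / Lipschitz constant of the gradient
    f_old = np.inf
    for _ in range(max_iter):
        h = project_simplex(h - step * (Q @ h - b))
        f = 0.5 * h @ Q @ h - b @ h
        if abs(f_old - f) <= tol * max(abs(f_old), 1.0):
            break
        f_old = f
    return h.reshape(ksize, ksize)
```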
Usually, the $\ell_2$-norm of the kernel, $\|h_{temp}\|_2$, is used as the regularization term in the layer iter-kernel estimation [14]. As can be seen from the expanded form (Equation (13) below), the $\|h_{temp}\|_2$ term effectively adds a scaled identity matrix to the diagonal of the $\nabla X^T \nabla X$ matrix of the block layer iterated image, whereas the essence of the RSPI operator is to introduce the two-channel image correlation into that matrix. Therefore, using the RSPI instead of $\|h_{temp}\|_2$ as the regularization term can effectively improve the kernel estimation accuracy.
As shown in Table 1, three sets of simulated blurred images were generated using images from the GoPro image library. The kernel sizes of the three sets were (6, 6), (25, 25), and (35, 35) pixels, with blur angle pairs of (15, −60), (0, 90), and (70, −150) degrees, respectively. The SSIM index was chosen to examine the similarity between the kernels estimated with the RSPI and with the $\|h_{temp}\|_2$ term. The RSPI and $\|h_{temp}\|_2$ were used, in turn, as the layer iter-kernel regularization term $\phi(h_{iter}^{Total})$ in Equation (10):
$$\arg\min_{h \in \mathcal{H}} \left( h_{temp}^{T} \left( \nabla X_1^{block\,layer\,iter\,T} \nabla X_1^{block\,layer\,iter} + \lambda_h I \right) h_{temp} - h_{temp}^{T}\, \nabla X_1^{block\,layer\,iter\,T}\, \nabla Y_{1,block\,ker}^{iter\,temp} \right) \tag{13}$$
$$\arg\min_{h \in \mathcal{H}} \left( h_{temp}^{T} \left( \nabla X_1^{block\,layer\,iter\,T} \nabla X_1^{block\,layer\,iter} + \lambda_h R_{block} \right) h_{temp} - h_{temp}^{T}\, \nabla X_1^{block\,layer\,iter\,T}\, \nabla Y_{1,block\,ker}^{iter\,temp} \right) \tag{14}$$
and used together with the same NGM term in the layer iter-image estimation (without blocking, because the images are not space-variant).
It can be seen that the kernels estimated with the RSPI were more similar to the true kernels than those estimated with the $\|h_{temp}\|_2$ term, except in the case where the two blur kernel vectors lay in quadrants 1 and 3, respectively, where the advantage was not obvious.

3.2.2. Block Layer Iter-Image Estimation with WG-NGM

As mentioned above, the ordinary NGM regularization term is improved into the WG-NGM. As shown in Figure 2, the $\lambda_2 \nabla X_2^{block\,layer\,iter,pixel}$ term contributed by channel-2 in the proposed WG-NGM was found to effectively improve the texture restoration of the iterative image and the accuracy of the iterative kernel.
The WG-NGM is calculated as follows:
$$T(\nabla X_{iter}) = \frac{\nabla X_1^{block\,layer\,iter,pixel}}{\lambda_1 \nabla X_1^{block\,layer\,iter,pixel} + \lambda_2 \nabla X_2^{block\,layer\,iter,pixel} + E\!\left[ \nabla Y_{1,block\,img}^{iter\,temp} \right]} \tag{15}$$
where $E[\nabla Y_{1,block\,img}^{iter\,temp}]$ denotes the mathematical expectation of the channel-1 iterative blurred block layer gradient map $\nabla Y_{1,block\,img}^{iter\,temp}$ in the block layer iter-image estimation process, i.e., the mean of the gradients over all pixels of $\nabla Y_{1,block\,img}^{iter\,temp}$. $\nabla X_1^{block\,layer\,iter,pixel}$ and $\nabla X_2^{block\,layer\,iter,pixel}$ are the pixel values at the selected location in the gradient images of the block layer iter-images from channel-1 and channel-2, and $\lambda_1$ and $\lambda_2$ (determined empirically) are weighting factors controlling the ratio of the two pixels. If the gradient value $\nabla X_1^{block\,layer\,iter,pixel}$ at the located pixel is larger than $E[\nabla Y_{1,block\,img}^{iter\,temp}]$, the pixel is retained; otherwise, it is discarded. By repeatedly updating the threshold and the iterative formula, only a small fraction of the pixels is retained in the final result, which improves the accuracy of the blur kernel estimation while ensuring sparsity.
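A minimal sketch of the per-pixel WG-NGM weighting and thresholding described above follows; the use of absolute gradient values and the default $\lambda_1 = \lambda_2 = 0.5$ are assumptions not fixed by the text.

```python
import numpy as np

def wg_ngm_term(gx1, gx2, gy1, lam1=0.5, lam2=0.5):
    """Per-pixel WG-NGM term of Eq. (15).
    gx1, gx2: gradient maps of the channel-1 / channel-2 block-layer iter-images.
    gy1: gradient map of the channel-1 iterative blurred block-layer image."""
    e_y1 = np.abs(gy1).mean()                        # E[grad Y1]: mean gradient value
    keep = np.abs(gx1) > e_y1                        # retain only salient-edge pixels
    denom = lam1 * np.abs(gx1) + lam2 * np.abs(gx2) + e_y1
    return np.where(keep, np.abs(gx1) / denom, 0.0), keep
```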
The WG-NGM could be applied to rewrite the block layer iter-image model as:
$$\nabla X_1^{block\,layer\,iter} = \arg\min \left( \left\| \nabla X_1^{block\,layer\,temp,pixel} \otimes h_1^{block\,layer\,iter} - \nabla Y_{1,block\,img}^{iter\,temp} \right\|_2^2 + \lambda_x \frac{\nabla X_1^{block\,layer\,iter,pixel}}{\lambda_1 \nabla X_1^{block\,layer\,iter,pixel} + \lambda_2 \nabla X_2^{block\,layer\,iter,pixel} + E\!\left[ \nabla Y_{1,block\,img}^{iter\,temp} \right]} \right) \tag{16}$$
To solve this problem, this paper used the iterative shrinkage-thresholding (IST) algorithm [19], which restores non-uniform kernels well; the WG-NGM can be treated as an $\ell_1$-type regularization term. The IST algorithm typically uses a given maximum number of iterations, or the difference between the objective values of two successive iterations, as the stopping criterion. In this paper, based on the experience reported in the references, the maximum number of iterations was set to 10 and the minimum threshold to $10^{-2}$.
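The following IST sketch treats the WG-NGM map as a fixed per-pixel $\ell_1$ weight (an assumption) and uses the stopping rule stated above (10 iterations or an objective change below $10^{-2}$); it is illustrative rather than the authors' implementation, and the boundary handling with `mode="same"` is only approximate.

```python
import numpy as np
from scipy.signal import fftconvolve

def soft_threshold(x, t):
    """Soft-thresholding, the proximal operator of a (weighted) l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist_iter_image(y_grad, kernel, lam_x, wg_ngm_map, max_iter=10, tol=1e-2):
    """IST sketch for the block-layer iter-image subproblem of Eq. (16)."""
    x = y_grad.copy()
    k_flip = kernel[::-1, ::-1]                      # adjoint (correlation) kernel
    L = np.abs(kernel).sum() ** 2                    # crude Lipschitz bound for K^T K
    f_old = np.inf
    for _ in range(max_iter):
        resid = fftconvolve(x, kernel, mode="same") - y_grad
        f = 0.5 * np.sum(resid ** 2) + lam_x * np.sum(wg_ngm_map * np.abs(x))
        if abs(f_old - f) < tol:
            break
        f_old = f
        x = soft_threshold(x - fftconvolve(resid, k_flip, mode="same") / L,
                           lam_x * wg_ngm_map / L)
    return x
```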
In order to verify that the introduced multi-channel information could improve the estimation of iterative images, two images from the Levin image set were selected for testing in this paper.
As shown in Figure 3, the iterative results estimated with the WG-NGM term performed better than those estimated with the NGM term (the step-like behavior in Figure 3 is caused by the different scales of the image pyramid, since the smaller images performed better). The data are displayed for approximately 90 iterations (the pyramid was divided into nine layers, with ten iterative-image iterations per layer, giving 90 iterations in total), which is in line with the results of the Group-1 experiment below.
The same parameters were used to create a pyramid of images of the same size as the baseline, and the performance of the WG-NGM and NGM terms was compared on the iterative images over the 90 iterations, with PSNR and SSIM as the test indicators.
As the figure shows, the WG-NGM method performed better in terms of PSNR; the final three SSIM results were compared as follows:
The SSIM values are small in absolute terms, so the difference in the final stages of Figure 3 appears small. Table 2 lists the SSIM values of the last three iterations for the two methods; it can be seen that WG-NGM performed better.
Therefore, it was demonstrated that the WG-NGM term can effectively improve the accuracy of the existing algorithm.

4. Blocked Image Restoration Based on Clustering Algorithm

The main improvement of the proposed method in this section, which deals with space-variant blurred images, is the block strategy, which is based on the mean-shift method to restore large-scale blurred images.
In space-variant blurred images, the kernel of each block is different [13]. Therefore, the selection of pixel values in the overlapping areas must be fully considered when stitching the restored image blocks. In some parts of the image, the texture edges are weak, making it hard to obtain a good kernel estimate; furthermore, if a block straddles a transition between different blur regions, its estimated kernel has no meaningful reference value. Consequently, directly using the mean of the corresponding pixels of the relevant blocks as the pixel value of the restored image gives inaccurate results [13]. To solve these problems, this paper used a kernel classification technique based on the mean-shift clustering algorithm [13] to distinguish high-weight kernels from low-weight kernels. In the final stitching step, the value of each pixel is determined according to the quality weights of the blur kernels. The restoration of the final image has three main components:
1. Block image restoration
As mentioned above, if the blurred image is divided into N groups horizontally and M groups vertically, for a total of $N \times M$ blocks, the number of corresponding kernels is also $N \times M$. First, the kernel of each block is used to restore the corresponding block of the image separately: for each blur kernel $Kernel_i$ (where i is the index of the kernel and $i \leq N \times M$), the alternating direction method of multipliers (ADMM) [20] is applied to restore the block of the blurred image corresponding to that kernel.
2. Mean-shift cluster weight calculation
In the image restoration process, determining the pixel values of the overlapping parts of the image blocks is an important part of image deblurring. The method used in this paper requires the weights of all image blocks; these weights determine the contribution of each image block to the final recovered image. Generally, the more blur kernels that share a similar blur form, the more pixels of the whole image are blurred according to that pattern. Therefore, this paper used a mean-shift clustering algorithm to classify the blur kernels and obtain a clustering result in which the kernels are grouped by similarity, following the flow shown in Table 3 (a minimal sketch of this clustering and of the weighted stitching is given after this list).
Here, $Weight_i$ denotes the weight of the i-th kernel, $i \leq N \times M$ is the index of the kernel, $n$ and $m$ are the length and width of the kernel (unit: pixels), and $\sum Kernel_{pixel,i}^2$ is the sum of squares of all pixel values in the kernel. $C_s$ is a cluster of kernels (with representative weight $Weight_{SC}$) whose weight values differ from each other by less than a threshold (named $Band$ below, determined empirically), $D_j$ is the weight-difference vector of the j-th kernel's weight within the cluster $C_s$, where j ranges over the element indices of $C_s$, and $Weight_{new}$ is the new cluster center.
3. Each pixel's value from the sum of the normalized weights
For computational convenience, all image blocks are first zero-padded to the full image size. For example, if the original image size is 800 × 800 and the block size is 200 × 200, zeros are filled in around the image block according to its position in the original image until it reaches 800 × 800; these padded, per-block restorations are the layer-recovered images. The value of each pixel in the final restored image is then determined from the values at the same location in all layer-recovered images. The weight of a layer-recovered image is the ratio of the number of kernels in the corresponding kernel's cluster to the total number of kernels. For example, if two kernels with similar forms fall in one cluster and the total number of kernels is 8, then the weight of the pixels in the blocks recovered by those two kernels is 0.25.
The pixel values of the final restored image are calculated as follows:
$$X^{pixel} = \sum_{k \in \Omega_c} \lambda_k \left( X_{layer,k}^{pixel} \right) \tag{17}$$
where $X^{pixel}$ denotes the value of the selected pixel in the final restored image $X$, $\lambda_k$ is the weight of the k-th kernel, $X_{layer,k}^{pixel}$ is the value at the same location in the k-th layer-recovered image, and $\Omega_c$ is the set of all layer-recovered images.
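The sketch below illustrates the clustering-and-stitching flow of this section under stated assumptions: the kernel weight is read as the root-mean-square of the kernel's pixel values, scikit-learn's MeanShift stands in for the mean-shift flow of Table 3 (its bandwidth playing the role of Band), and the per-pixel normalization in `stitch` implements one reading of Equation (17).

```python
import numpy as np
from sklearn.cluster import MeanShift

def kernel_weight(kernel):
    """Scalar descriptor used for clustering: root mean square of the kernel pixels
    (one reading of Weight_i; an assumption)."""
    n, m = kernel.shape
    return np.sqrt(np.sum(kernel ** 2) / (n * m))

def cluster_kernel_weights(kernels, band=0.05):
    """Group the N*M block kernels by mean shift and weight each kernel by the
    relative size of its cluster (lambda_k in Eq. (17))."""
    w = np.array([kernel_weight(k) for k in kernels]).reshape(-1, 1)
    labels = MeanShift(bandwidth=band).fit_predict(w)
    counts = np.bincount(labels)
    return counts[labels] / float(len(kernels))

def stitch(layer_images, masks, lambdas):
    """Eq. (17): each pixel of the final image is the normalized, weighted sum of the
    zero-padded per-block restored layer images; `masks` marks each block's support."""
    acc = np.zeros_like(layer_images[0], dtype=float)
    norm = np.zeros_like(layer_images[0], dtype=float)
    for img, mask, lam in zip(layer_images, masks, lambdas):
        acc += lam * mask * img
        norm += lam * mask
    return acc / np.maximum(norm, 1e-12)
```

In practice, the bandwidth (Band) and the weight descriptor would be tuned to the kernel statistics of the data at hand.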

5. Experimental Results

To verify the restoration effect of the proposed algorithm, four groups of experiments were carried out. The Group-1 experiments were simulation experiments with simulated data; the Group-2 experiments involved real-shot image restoration with space-invariant images; Group-3 comprised real-shot experiments with space-variant images; and Group-4 used space-variant blurred images of a target plate to test the restoration ability of the method. The first three groups verified the practicability of the algorithm by comparing the restoration results on the Levin library, on sample images provided by other literature, and on our own images. The simulated image sets were quantitatively analyzed through a variety of indicators, while the real-shot image sets were generally evaluated by visual observation. Moreover, the results from [11] (code downloaded from http://zoi.utia.cas.cz/download_toolbox (accessed on 28 May 2020) with permission) and [14] (GNU-licensed code) were computed under the copyright permissions granted when the software was downloaded.
All programs were run with the student version of MATLAB 2017 on a licensed Windows 11 platform. The CPU was an Intel Core i7-8750H, with 32 GB of memory and an Intel 750 SSD. The peak signal-to-noise ratio (PSNR, unit: dB) and the structural similarity index (SSIM) were calculated using MATLAB's built-in functions with default parameters.
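For readers reproducing the evaluation outside MATLAB, a hedged Python equivalent of the PSNR/SSIM computation is sketched below (scikit-image's default parameters differ slightly from MATLAB's psnr/ssim, so values may deviate marginally).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(restored, reference):
    """PSNR (dB) and SSIM of a grayscale restoration against its reference image."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, restored, data_range=data_range)
    ssim = structural_similarity(reference, restored, data_range=data_range)
    return psnr, ssim
```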
Group 1: In this group, the non-space-variant blurred images of the Levin image library were used to compare and test the algorithms' ability to restore non-space-variant blurred images. This group mainly verified the kernel estimation ability of the algorithm, so all 32 results from the four original images and eight kernels were compared. The compared methods are those of [14,21,22,23]; their codes were downloaded from the authors' official websites or GitHub repositories under GPL or GNU open-source licenses. PSNR and SSIM were used to measure the results of the algorithms. The full set of images and kernels is shown in Figure 4, and the results are shown in Figure 5.
Furthermore, the results of [21,22,23] were provided by their official GitHub sites; except for running times or where specially declared, these results were not obtained by running the programs on our machine.
Figure 4a shows the original images and Figure 4b the self-designed simulation blur kernel. Figure 5 shows all the results obtained when simulations were performed in two-by-two teams in the order of images and blur kernels. It can be seen that the proposed algorithm performed well overall, in terms of the PSNR and SSIM.
Discussion: As shown in Figure 5, in terms of PSNR, this algorithm not only produced results above 35 dB, but its mean PSNR and the concentration of its PSNR distribution interval were also higher than those of the other algorithms. Moreover, the lower bound of this algorithm's PSNR was higher than the others'.
The proposed algorithm also performed well for the SSIM, leading in terms of the maximum and mean values as well as the lower bound of the interval.
This shows that the recovery results of this method are more stable overall, with less variation across images with different blurring patterns. Along with this stability, the method provided the highest peak results and the highest lower bounds. It is also worth mentioning that, since the Levin library is a space-invariant blurred image simulation library, this algorithm does not fall short in terms of time consumption: the time cost of the remaining algorithms was essentially similar, except for the frequency-domain-based L.P algorithm, which has a significant speed advantage over the present algorithm, as Table 4 shows.
Group 2: In this group, two sets of test images from [11] were used as the test objects. The first set of photos in this group are real images without an original reference, and the second set are real-shot images.
Set 1 Discussion: The images in this set are real images provided with the open-source algorithm on Sroubek's official website. There is no original image, so PSNR and SSIM cannot be used to assess the restoration; however, meaningful conclusions can be drawn by visual observation.
Looking at the details of the license plate in Figure 6, it can be seen that both Xu's algorithm and YC.B's algorithm [22] can roughly identify the character B, while the proposed method can also identify the numbers. The main reason for this difference is that both Xu's and YC.B's algorithms exhibit a certain ringing effect, so their recovery results contain horizontal stripes in the middle of the plate, which unexpectedly enhance the recognition of the letter B. On balance, therefore, the proposed algorithm performs better at removing ringing and recovering the details of the numbers.
Set 1:
Figure 6. Sroubek's sample restoration experiment, set 1. On the left, the top image is blurred image 1 and the bottom image is blurred image 2. The top row, from left to right, shows Sroubek's result, Xu's result, and F.Wen's result; the bottom row, from left to right, shows the proposed method's result, YC.B's result, and L.P's result.
The proposed method provided a good recovery result. Although slightly worse in terms of horizontal image features, it avoided more ringing across the whole image. In addition, the proposed method also recovered the numbers on the license plate clearly, which shows that the method generalizes well rather than only recovering specific targets.
Set 2 Discussion: Direct observation of Figure 7 shows that the proposed algorithm performed better, for example, in the checkered texture of the clothes on the left shoulder and in the texture of the face in Figure 8. Moreover, the image recovered by the proposed method is sharper and shows more detail.
Both the PSNR and SSIM values in Table 5 show that the algorithm is in a leading position. It is worth noting that F. Wen's algorithm achieved good results in both the Levin library and Sroubek's image library. Therefore, the next set of experiments, with F. Wen's method as the representative, tested the effectiveness of the present algorithm on a color image.
Set 2:
Figure 7. Sroubek's sample restoration experiment, set 2. The top row shows the original image; the second row shows the input blurred images 1 and 2; the third row, from left to right, shows the results of Sroubek, Xu, and the proposed method; the fourth row, from left to right, shows the results of F.Wen, YC.B, and L.P.
Figure 8. Details of Sroubek's sample restoration experiment, set 2. The top row, from left to right, shows the results of Sroubek, Xu, and the proposed method; the bottom row, from left to right, shows the results of F.Wen, YC.Bai, and L.P.
Table 5. Restoration quality evaluation of Sroubek's set 2.

Indicators   Proposed   Sroubek   Xu       YC.Bai   F.Wen    L.P
PSNR         27.64      25.47     26.23    23.21    13.62    26.81
SSIM         0.8572     0.8317    0.8572   0.8348   0.7469   0.7947
Group-3: The eighth blurred image of the bell, one of the most difficult images to deblur in the Kohler library, was chosen to test the proposed method's ability to restore blurred color images; the results are shown in Figure 9 and Table 6. Since results for these images are provided only in the open-source code of F. Wen's program, and not by the other methods, this experiment compared the proposed algorithm's results only with Xu's and F.Wen's results.
Discussion: The official code of the F. Wen method provides the recovery result for this image directly, so using the recovered image provided by the code is an objective and fair way to compare against the proposed method's result. Looking at the whole image, the algorithm in this paper again handled the ringing effect well. In the detail sections, whether for the pointer on the dial or for the numbers and letters, the results recovered by this algorithm were clear. The estimated blur kernels show that the curvature of the kernels estimated by this paper's algorithm is more rounded and that more of the kernel energy is distributed at the ends, avoiding the pronounced horizontal bar seen with the F. Wen algorithm. In terms of PSNR and SSIM, the algorithm in this paper also maintained the lead. Moreover, according to the experimental results reported in F.Wen's study, F.Wen's method is one of the best methods for space-invariant color deblurring on both the Levin and Kohler libraries. Combined with the experimental results of Group-2, this shows that the proposed method performs well on both black-and-white and color images.
Group-4: Group-4 aimed to verify the restoration effect on large-scale space-variant images. Images were taken with an industrial camera (Imaging-Source DFK290). The imaging parameters of the two images were identical: the exposure time was 1/66 s, the ISO was 240, the length and width of a single CMOS pixel were both 3 μm, the focal length was 5 mm, and the object distance was 75 cm. The image resolution was 1920 × 1080 pixels, and the target plate occupied 800 × 550 pixels in the images used. The images were preprocessed for registration and brightness matching before use. The target plate was designed according to the ISO 12233 standard [24]. The measurement was implemented with the SFR calculation program (SFR_1.41) compiled by the MITRE Corporation in 2006.
This group comprised real-shot experiments, so a subjective evaluation method was used to assess the restoration quality of the images. In addition, some existing image quality assessment (IQA) indicators were used, such as the natural image quality evaluator (NIQE: the smaller the value, the smaller the gap from a natural image and the better the image quality [25]) and the average gradient (the larger the value, the clearer the image texture). These two indicators evaluate the image restoration quality from additional perspectives.
The comparison methods were L.P's and Xu's methods. Other methods were not chosen mainly because their setups are more complicated and their results, when run on the local machine, have less reference value. Moreover, L.P's algorithm proved more advanced in the previous comparisons involving space-variant images, and its open-source program requires only one major parameter, the kernel size, so the results of L.P's method computed on this machine are more reliable.
The results are shown below.
Discussion: The experimental results are shown in Figure 10 and Figure 11. First, looking at the morphology of the blur kernels, it can be seen that the kernels obtained by the split-block method show different blurring states depending on the region. If, instead, the blur kernel of the whole image is estimated under the assumption of space-invariance, the kernel morphologies obtained by the comparison methods are similar and mainly reflect the dominant blurred motion pattern of the whole image.
However, since the blurring of the image is non-uniform, the comparison algorithms obtained good results only in the regions where the blurring pattern matched well, and poor results in regions where the blurring patterns were widely disparate. Therefore, Figure 12 shows the parts where the blur kernel morphology differs most from that of the comparison algorithms.
In Figure 11, in the region of the green box in the top-right corner of the figure, it can be seen that the blur state estimated by the proposed method is characterized by predominantly horizontal motion with a vertical component, while the morphology estimated by the comparison methods is an oblique blur. This difference in detail restoration results from the different abilities to control the ringing effect in the recovery results.
Within the circular region in the lower part of the image, the corresponding blurred region is the intersection of four blur kernels in this method; therefore, the recovered image of this block reflects the common morphology of the four regional kernels. It can be seen that oblique motion is the main common feature of all four kernels; thus, the results of this algorithm are only slightly better than L.P's in terms of ringing effects, and better than Xu's algorithm in terms of detail.
In the lower-left corner of the image, according to the estimation results of this algorithm, the blur shows a greater range of movement in the horizontal direction, while the energy distribution of the comparison methods' blur kernels is mainly in the vertical direction. Therefore, Xu's method loses substantial image detail in order to maintain control of the ringing, and the result of L.P's method is insufficient in the lateral direction: the digits "6" and "8" show poor ringing control in the oblique direction, and the same occurs at the location of the line cluster above.
The measurements of the indicators in Table 7 also show the higher sharpness and better restoration achieved by this method.
As such, because of its split-block strategy, the proposed method is significantly more effective than space-invariant algorithms in dealing with space-variant blurring problems.
The MTF parameters were calculated using the suggested software (SFR_1.41). Although the MTF graph shows that each comparison algorithm has some advantage over the proposed method in certain intervals, the proposed method performs better over most of the medium- and high-frequency region. The L.P algorithm performs better in the region around 0.5, but lower in the regions from 0.4 to 0.5 and from 0.6 to 1, while Xu's result is slightly higher at the upper end (0.9 to 1) but lower from 0.3 to 0.9. Overall, in terms of MTF, the proposed method is more stable and its result is sharper.

6. Conclusions

In this paper, the advantages and disadvantages of existing blind deconvolution deblurring algorithms for single-channel and multi-channel images were first analyzed, and the characteristics of large-scale space-variant blurred images were studied. By combining single-channel and multi-channel blind deconvolution, and through theoretical derivation and experimental verification, a practical dual-channel blind deconvolution algorithm was proposed. By dividing the two-channel blurred image into blocks, the algorithm uses all the information of each image block in the correlation coefficient, balances the correlation of the local image against the utilization of global information, and handles the space-variant problem of space-variant images. In the kernel estimation step, the correlation coefficient and correlation information of the dual-channel image are used to reconstruct the two iterative parts of the kernel estimation. In the block-image restoration part, the clustering algorithm is used to calculate the kernel weights and to restore and reconstruct the entire image. The final experimental results showed that the algorithm fulfilled the design expectations and clearly improves upon the existing algorithms.
During the derivation, it was found that the algorithm still has room for further development. First, the number of channels can be increased: according to the derivation in [11], the more channels involved in the deblurring process, the better the restoration effect may be; the specific theoretical derivation and implementation remain for future work. Second, the block clustering restoration part can also be optimized, and the weights of the corresponding pixels in the block images could be determined more accurately by introducing additional regularization.

Author Contributions

Conceptualization, Z.T., Y.B. and Q.L.; methodology, M.H.; software, Y.B., P.H.; investigation, Z.T., M.H. and Q.L.; resources, Z.T.; data curation, Y.B.; writing—original draft preparation, Y.B.; writing—review and editing, Y.B. and Z.T.; supervision, Z.T., M.H. and Q.L.; project administration, Z.T., M.H. and Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by: the National Natural Science Foundation of China (No. 61635002); the National Defense Science and technology innovation fund of the Chinese Academy of Sciences (No. CXJJ-20S016); Science Technology Foundation Strengthening Field Fund Project (No. 2020-JCJQ-JJ-492).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fergus, R.; Singh, B.; Hertzmann, A. Removing camera shake from a single photograph. ACM Trans. Graph. 2006, 25, 787–794. [Google Scholar]
  2. Levin, A. Image and Depth from a Conventional Camera with a Coded Aperture. Siggraph 2007, 2007, 70-es. [Google Scholar] [CrossRef]
  3. Zhou, X.; Mateos, J.; Zhou, F. Variational Dirichlet Blur Kernel Estimation. IEEE Trans. Image Process. 2015, 24, 5127–5139. [Google Scholar] [CrossRef] [PubMed]
  4. Liu, X.; Gamal, A.E. Synthesis of high dynamic range motion blur free image from multiple captures. Circuits Syst. I Fundam. Theory Appl. IEEE Trans. 2003, 50, 530–539. [Google Scholar]
  5. Li, Z.; Deshpande, A.; Xin, C. Denoising vs. deblurring: HDR imaging techniques using moving cameras. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010. [Google Scholar]
  6. Rav-Acha, A.; Peleg, S. Two motion-blurred images are better than one. Pattern Recognit. Lett. 2005, 26, 311–317. [Google Scholar] [CrossRef]
  7. Yuan, L.; Sun, J.; Quan, L.; Shum, H.Y. Image deblurring with blurred noisy image pairs. In ACM SIGGRAPH 2007 Papers, Proceedings of the SIGGRAPH07: Special Interest Group on Computer Graphics and Interactive Techniques Conference, San Diego, CA, USA, 5–9 August 2007; Association for Computing Machinery: New York, NY, USA, 2007. [Google Scholar]
  8. Katsaggelos, A.K.; Lay, K.T.; Galatsanos, N.P. A general framework for frequency domain multi-channel signal processing. IEEE Trans. Image Process. 1993, 2, 417–420. [Google Scholar] [CrossRef] [PubMed]
  9. Sroubek, F.; Flusser, J. Multichannel blind deconvolution of spatially misaligned images. IEEE Trans. Image Process. 2005, 14, 874–883. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Sroubek, F.; Flusser, J.; Cristobal, G. Multiframe Blind Deconvolution Coupled with Frame Registration and Resolution Enhancement; CRC Press: Boca Raton, FL, USA, 2007. [Google Scholar]
  11. Sroubek, F.; Milanfar, P. Robust Multichannel Blind Deconvolution via Fast Alternating Minimization. IEEE Trans. Image Process. 2012, 21, 1687–1700. [Google Scholar] [CrossRef] [PubMed]
  12. Djurovi, I. BM3D filter in salt-and-pepper noise removal. EURASIP J. Image Video Process. 2016, 2016, 13. [Google Scholar] [CrossRef] [Green Version]
  13. Alam, M.Z.; Qian, Q.; Gunturk, B.K. Space-variant blur kernel estimation and image deblurring through kernel clustering. Signal Process. Image Commun. 2019, 76, 41–55. [Google Scholar] [CrossRef]
  14. Xu, Z.; Fugen, Z.; Bei, X.Z. Blind deconvolution using a nondimensional Gaussianity measure. In Proceedings of the 2014 IEEE International Conference on Image Processing, Melbourne, VIC, Australia, 15–18 September 2013. [Google Scholar]
  15. Harikumar, G.; Bresler, Y. Perfect Blind Restoration of Images Blurred by Multiple Filters: Theory and Efficient Algorithms. IEEE Trans. Image Process. 1999, 8, 202–219. [Google Scholar] [CrossRef] [PubMed]
  16. Li, X.C.; Li, H.K.; Song, B. Application of energy functional regularization model in image restoration. J. Image Graph. 2014, 19, 1247–1259. (In Chinese) [Google Scholar]
  17. Yang, L.; Ji, H. A Variational EM Framework With Adaptive Edge Selection for Blind Motion Deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  18. Osher, S.; Burger, M.; Goldfarb, D.; Xu, J.; Yin, W. An iterative regularization method for total variation-based image restoration. Siam J. Multiscale Model. Simul. 2005, 4, 460–489. [Google Scholar] [CrossRef]
  19. Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. Siam J. Imaging Hences 2009, 2, 183–202. [Google Scholar] [CrossRef] [Green Version]
  20. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2010, 3, 1–122. [Google Scholar] [CrossRef]
  21. Wen, F.; Ying, R.; Liu, Y.; Liu, P.; Truong, T.-K. A Simple Local Minimal Intensity Prior and An Improved Algorithm for Blind Image Deblurring. IEEE Trans Circuits Syst. Video Technol. 2021, 31, 2923–2937. [Google Scholar] [CrossRef]
  22. Bai, Y.; Cheung, G.; Liu, X.; Gao, W. Graph-Based Blind Image Deblurring From a Single Photograph. IEEE Trans. Image Process. 2018, 28, 1404–1418. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Pan, L.; Hartley, R.; Liu, M.; Dai, Y. Phase-Only Image Based Kernel Estimation for Single Image Blind Deblurring. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  24. ISO 122333; 2017 Photography Electronic Still Picture Imaging Resolution and Spatial Frequency Responses. ISO: Geneva, Switzerland, 2017.
  25. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
Figure 1. Structure flow chart: this figure describes the flow and components of the proposed algorithm. The input is two blurred images of the same object with different blurring patterns. The first step is pre-processing, which focuses on block splitting and de-noising. The second step estimates the blur kernel block by block. The third step classifies the blur kernels with a clustering algorithm and performs the final restoration.
Figure 2. The block-layer kernel estimation flow chart. Following the EM algorithm idea, the kernel and the latent image are estimated alternately: the current image estimate is held fixed while the kernel is updated, and vice versa.
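As a rough illustration of this alternating scheme, the sketch below minimizes a purely quadratic (Tikhonov-regularized) objective with closed-form half-steps in the Fourier domain. It is a simplified stand-in for the paper's regularized model; the function name and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the alternating estimation in Figure 2, assuming the
# quadratic objective ||k * x - b||^2 + lam_k ||k||^2 + lam_x ||x||^2 so that
# each half-step has a closed-form solution per frequency. This is an
# illustrative stand-in, not the dual-channel regularized model of the paper.
def alternating_deblur(blurred, n_iter=10, lam_k=1e-2, lam_x=1e-2):
    B = np.fft.fft2(blurred)
    X = B.copy()              # start the latent-image spectrum from the blurred image
    K = np.ones_like(B)       # start the kernel spectrum as a delta (identity) kernel
    for _ in range(n_iter):
        # kernel step: argmin_K ||K*X - B||^2 + lam_k ||K||^2, with the image fixed
        K = np.conj(X) * B / (np.abs(X) ** 2 + lam_k)
        # image step: argmin_X ||K*X - B||^2 + lam_x ||X||^2, with the kernel fixed
        X = np.conj(K) * B / (np.abs(K) ** 2 + lam_x)
    return np.real(np.fft.ifft2(X)), np.real(np.fft.ifft2(K))
```

In the proposed pipeline, an inner loop of this kind runs block by block, with the dual-channel regularization terms taking the place of the plain quadratic penalties.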
Figure 3. Levin image restoration experiment; the left panel shows PSNR and the right panel shows SSIM. The two regularization terms, WG-NGM and NGM, were tested with the same parameters and solution methods under identical conditions.
Figure 4. The Levin library: (a) the original images from the Levin library; (b) the kernels.
Figure 5. The results: left, SSIM; right, PSNR. The red line is the average. The blue box marks the region containing the middle 50% of the data. The black horizontal line indicates the highest value within that region. The red cross indicates the highest value of the entire set; if there is no red cross, the highest value lies on the upper black horizontal line.
Figure 9. Results on the Kohler library. Top, from left to right: the results of the proposed method, Xu's method, and F. Wen's method, together with the estimated kernels; bottom: the blurred image and details from the row above.
Figure 10. The input blurred images and the restorations. Top row: blurred images 1 and 2. Middle row: the results of Xu's and L.P.'s methods, which treat the images as space-invariant. Bottom row: the image deblurred by the proposed method, which treats them as space-variant.
Figure 11. (a) The kernels from Xu's, L.P.'s, and the proposed methods. The kernels of the comparison algorithms were estimated by treating the sample images as space-invariant, whereas the proposed result was obtained by treating them as space-variant, with each region estimated individually using a split-block method. Based on the length and width of the image and the results of several experiments, the block size was set to 250 × 250 pixels with a step of 200 × 200 pixels; when the remaining width or height was insufficient, edges were added to the left/right or top/bottom borders of the image by duplication. (b) MTF, measured on the edge image in the central part of the image (measured by SFR_1.41).
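For clarity, the following sketch shows one way to implement the overlapping block split described above, assuming a single-channel (2-D) image; the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

# Sketch of the overlapping block split from the Figure 11 caption, assuming a
# 2-D grayscale image: 250 x 250 blocks with a 200 x 200 step, padding the
# right/bottom borders by edge duplication when the last block would overrun
# the image. Names and the padding strategy details are illustrative.
def split_into_blocks(image, block=250, step=200):
    h, w = image.shape
    pad_h = (block - h) if h < block else (-(h - block)) % step
    pad_w = (block - w) if w < block else (-(w - block)) % step
    padded = np.pad(image, ((0, pad_h), (0, pad_w)), mode="edge")  # duplicate edge rows/columns
    blocks, positions = [], []
    for y in range(0, padded.shape[0] - block + 1, step):
        for x in range(0, padded.shape[1] - block + 1, step):
            blocks.append(padded[y:y + block, x:x + block])
            positions.append((y, x))  # top-left corner of each block in the padded image
    return blocks, positions
```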
Figure 12. Details of results from the green box in Figure 9. From left to right: blurred image, Xu's result, L.P.'s result, and the proposed result.
Table 1. Example of the method with RSPI.

Regularization   Length of Kernel   Angle       SSIM
RSPI             Len = 6            15, −60     0.9818
Norm_2           Len = 6            15, −60     0.9764
RSPI             Len = 25           0, 90       0.9580
Norm_2           Len = 25           0, 90       0.9416
RSPI             Len = 35           70, −150    0.9365
Norm_2           Len = 35           70, −150    0.9334
Table 2. Final results of SSIM.

WG-NGM   0.5498   0.5509   0.5519
NGM      0.5415   0.5474   0.5488
Table 3. The flow of the mean-shift cluster method in kernel categorization.

Mean-Shift Cluster Method in Kernel Categorization
Step 1: Calculate the sum of squared pixels of each kernel as its weight:
        $Weight_i = \sum_{n,m} Kernel\_pixel_i^2$
Step 2: Take $Weight_S$ as the center value of the cluster $C_S$:
        $C_S = \{Weight_{SC}\}$, where $Weight_{SC} \in Kernel$ and $|Weight_{SC} - Weight_S| < Band$
Step 3: With $Weight_S$ as the center of $C_S$, calculate the sum of all the shift vectors:
        $D_{sum} = \sum_{j \in SC} D_j$, with $D_j = Weight_j - Weight_S$
Step 4: Calculate the new center $Weight_{new}$ from $D_{sum}$:
        $Weight_{new} = Weight_S + D_{sum}$
        Add to the cluster $C_S$ all the kernels whose weight distance from the new center is less than $D_{sum}$.
Step 5: If $|Weight_{new} - Weight_S| < D_{sum}$, the two clusters are merged; otherwise, a new cluster is created.
Step 6: Repeat Step 3 until $D_{sum} < t$ (with t selected by experiment).
Step 7: Repeat Step 2 until all the kernels have been visited.
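A minimal one-dimensional mean-shift sketch of this weight-based kernel clustering is given below; the bandwidth and stopping tolerance are illustrative assumptions, not the experimentally selected values from the paper.

```python
import numpy as np

# 1-D mean-shift sketch of the kernel-weight clustering outlined in Table 3:
# each block kernel is reduced to a scalar weight (sum of squared pixels), and
# weights within a bandwidth are shifted toward their local mean until the
# shift falls below t. `band` and `t` are illustrative values.

def kernel_weights(kernels):
    """Step 1: sum of squared pixels of each estimated block kernel."""
    return np.array([np.sum(k ** 2) for k in kernels])

def mean_shift_1d(weights, band=0.05, t=1e-4, max_iter=100):
    centers = weights.astype(float).copy()
    for _ in range(max_iter):
        new_centers = np.empty_like(centers)
        for i, c in enumerate(centers):
            window = weights[np.abs(weights - c) < band]          # Step 2: points inside the bandwidth
            new_centers[i] = window.mean() if window.size else c  # Steps 3-4: shift toward the local mean
        if np.max(np.abs(new_centers - centers)) < t:             # Step 6: stop when the shift is small
            centers = new_centers
            break
        centers = new_centers
    # Step 5: merge centers that converged to (nearly) the same mode
    modes = []
    for c in np.sort(centers):
        if not modes or c - modes[-1] > band / 2:
            modes.append(c)
    labels = np.array([int(np.argmin([abs(w - m) for m in modes])) for w in weights])
    return np.array(modes), labels
```

The resulting labels give the kernel classes used for the block-wise restoration, with each block's contribution weighted by its cluster assignment.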
Table 4. Restoration quality evaluation table of times.

Indicators        Proposed   Sroubek   Xu      YC.B    F.Wen   L.P
Times (Unit: s)   27.33      43.02     17.19   27.24   23.19   12.96
Table 6. Restoration quality evaluation table of Kohler's bell.

Evaluation Object     Indicators   F.Wen     Xu        Proposed
Restoration results   PSNR         11.8610   11.5741   11.9056
                      SSIM         0.3683    0.3790    0.3825
Table 7. Final results of the IQA.

Test Image   Evaluation Criterion   Xu       L.P      Proposed Algorithm
Plate        NIQE                   4.1217   4.6578   3.9489
             Average Gradient       5.7576   5.3857   5.8725
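For reference, the sketch below shows one way to compute the reported full-reference and no-reference scores. PSNR and SSIM use scikit-image's standard implementations; the average-gradient formula is one common definition and is an assumption about the paper's exact implementation, and NIQE [25] requires a pretrained natural-scene model, so it is not reproduced here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Sketch of the quality scores reported in Tables 6 and 7 (PSNR, SSIM, and an
# assumed mean-gradient-magnitude definition of "average gradient").

def average_gradient(img):
    gy, gx = np.gradient(img.astype(float))             # finite-difference gradients
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def evaluate(restored, reference):
    rng = float(reference.max() - reference.min())
    return {
        "PSNR": peak_signal_noise_ratio(reference, restored, data_range=rng),
        "SSIM": structural_similarity(reference, restored, data_range=rng),
        "Average Gradient": average_gradient(restored),
    }
```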
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
