Article

Multiframe Super-Resolution of Color Images Based on Cross Channel Prior

1 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China
2 Innovation Academy of Microsatellite of Chinese Academy of Sciences, Shanghai 201210, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
5 China Academy of Sciences Institute of Optoelectronics, Beijing 100194, China
* Author to whom correspondence should be addressed.
Submission received: 29 March 2021 / Revised: 28 April 2021 / Accepted: 2 May 2021 / Published: 19 May 2021
(This article belongs to the Section Computer)

Abstract: Color images have a wider range of applications than grayscale images. There are two common ways to extend traditional super-resolution (SR) reconstruction methods to color images: reconstruct each channel of the color image independently, or convert the RGB image to the YCbCr color space, super-resolve the luminance component, and interpolate the chrominance components. These approaches cannot effectively exploit the fact that edges and textures are similar across the RGB channels, and their results may contain color artifacts. To solve these problems, we propose a new SR method based on a cross-channel prior. First, a cross-channel prior is proposed to describe the similarity of the gradients of the RGB channels. Then, a new SR method for color images is derived by combining the cross-channel prior with traditional SR methods. Finally, the proposed method reconstructs the color channels alternately. Experimental results show that the proposed method effectively suppresses color artifacts and improves the quality of the reconstructed images.

1. Introduction

Image super-resolution (SR) is an effective image enhancement technology. It uses mathematical methods to increase image resolution without changing the imaging hardware, which gives it great advantages in terms of technology and cost, and it is widely used in scientific research and engineering [1,2]. Traditional SR methods are mainly designed for grayscale images. Compared with grayscale images, color images provide more information and are widely used in digital television, remote sensing, medical imaging, and cultural relic protection and display [3,4,5]. Color image SR is therefore becoming a new direction of SR research.
In 1984, Tsai and Huang [6] proposed the concept of multiframe super-resolution (MFSR) reconstruction, proved the theoretical feasibility of the SR algorithm, and successfully applied it to the processing of Landsat satellite images. Kim, Bose, and Valenzuela [7,8] improved Tsai's algorithm by considering image noise, image blurring, and image registration, which extended the applicability of SR algorithms. Rhee and Kang [9] used the discrete cosine transform (DCT) instead of the discrete Fourier transform (DFT) in the SR algorithm, which improved its computational efficiency. After more than 30 years of development, current SR methods can be divided into two categories: reconstruction-based and learning-based methods.
Learning-based SR of a single image has received widespread attention in recent years [10]. By building a training set of high-resolution (HR) images paired with low-resolution (LR) images, a suitable data dictionary [11], neural network [12], or adversarial network [13] can be trained to achieve SR reconstruction. These methods produce good SR results efficiently, but the training process often requires substantial computing resources, and the reconstruction results are limited by the training set. In contrast, MFSR methods use the sub-pixel displacements between the LR images to recover the lost high-frequency information, which improves the true (optical) resolution of the image [14].
The reconstruction-based methods can be divided into two categories: frequency-domain methods and spatial-domain methods. Frequency-domain methods use the aliasing present in the LR images to reconstruct the HR image; the principle is straightforward, the computational complexity is low, and the algorithms are easy to implement in hardware. However, frequency-domain methods can only handle global translational motion and cannot exploit image prior information. Spatial-domain methods mainly include iterative back projection (IBP) [15,16,17], projection onto convex sets (POCS) [18,19,20], maximum a posteriori (MAP) probability estimation [21,22,23], and other methods. These methods use the sub-pixel information among the LR images to provide additional constraints for reconstruction. They reconstruct images well and are mainly used in scientific research, satellite remote sensing, and other fields.
The SR research described above mainly concentrates on grayscale images. For color images, which have a wider range of applications, there is not much targeted research; the usual practice is to apply SR reconstruction independently to the three RGB channels of a color image [24,25,26]. Since independent reconstruction ignores the correlation between the color channels, the SR reconstruction of a color image is prone to artifacts. Based on the fact that the human eye is more sensitive to luminance information, some researchers have proposed extending single-channel SR reconstruction to color images through a color-space conversion [27]. Yang et al. [11] converted the color image from the RGB color space to the YCbCr color space, performed SR reconstruction on the luminance component, and applied simple interpolation to the chrominance components. Xu et al. [28] improved Yang's method with a new conversion method and an improved TV regularization: the new conversion allows the luminance channel to contain more texture information, and the improved TV regularization effectively suppresses image artifacts. This type of method processes the intensity variation of the image components as a whole, which reduces the amount of computation while maintaining basic image quality, and it overcomes the problem of ignoring channel correlation in independent reconstruction. However, separating luminance and chrominance information inevitably breaks the correlation between the color channels, so the reconstruction results of this type of method still suffer from color artifacts.
Starting from an analysis of the correlation between the channels of a color image, and combining the imaging model and prior information of traditional grayscale SR reconstruction, this paper proposes an effective color image SR reconstruction algorithm. The main contributions of this article are listed below:
  • We introduce a new image prior into the multiframe SR algorithm for color images, which describes the correlation between the image channels well.
  • We propose a new SR algorithm based on the introduced cross-channel prior, which effectively exploits the correlation between color channels and improves the reconstructed image quality.
  • We design experiments to verify the proposed algorithm and compare it with other algorithms to demonstrate its effectiveness.
Simulation experiments and real-data tests show that our algorithm effectively exploits the channel correlation of color images and that its reconstruction results are better than those of the compared algorithms.

2. Materials and Methods

2.1. Observation Model

A set of LR images can be obtained from an HR image through a sequence of operations such as rotation, displacement, blurring, down-sampling, and additive noise. The mathematical expression of the observation model is
y_i^c = S K M_i x^c + N_i, \quad i = 1, 2, \ldots, s, \quad c = r, g, b,
where $y_i^c$ and $x^c$ denote the $c$th band (R, G, or B) of the $i$th LR image and of the HR image, respectively; $S$ is the down-sampling matrix, $K$ the blurring matrix, $M_i$ the warping matrix, and $N_i$ the additive noise.
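For illustration, the following NumPy/SciPy sketch applies the observation model (1) to a single color band. The function name degrade, the parameter defaults, and the use of scipy.ndimage operators are our own assumptions rather than the authors' code.

```python
import numpy as np
from scipy.ndimage import rotate, shift, gaussian_filter

def degrade(x_c, dx=0.3, dy=-0.4, angle=1.0, sigma=1.0, factor=2, noise_std=0.01):
    """Apply the observation model to one color band:
    M_i: warp (rotation + sub-pixel shift), K: Gaussian blur,
    S: decimation by `factor`, N_i: additive Gaussian noise."""
    warped = shift(rotate(x_c, angle, reshape=False, order=3), (dy, dx), order=3)
    blurred = gaussian_filter(warped, sigma)                  # blur matrix K
    low = blurred[::factor, ::factor]                         # down-sampling matrix S
    return low + np.random.normal(0.0, noise_std, low.shape)  # additive noise N_i
```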

2.2. Multiframe Super-Resolution Methods for Grayscale Image

The purpose of the MFSR method is to reconstruct an HR image from a set of LR images. Under the observation model this is an ill-posed problem that cannot be solved directly, so a regularization approach is used:
\tilde{x} = \arg\min_{x} \sum_{i=1}^{s} \left\| y_i - S K M_i x \right\|_2^2 + \lambda \, \Gamma(x),
where $\| y_i - S K M_i x \|_2^2$ is the data fidelity term, which measures the consistency between the solution and the observed data, and $\Gamma(x)$ is the regularization cost function, which imposes a penalty on the estimate $x$ to obtain a stable solution. The parameter $\lambda$ is a scalar that controls the tradeoff between the data fidelity term and the regularization term.
The selection of the regularization term is often related to the a priori information of the image, and different regularization terms will lead to different reconstruction results. Here, we use the total variation (TV) regularization [29] to test our algorithm. The TV regularization, which is based on the assumption that natural images have small TV norm, is widely employed in image SR methods. The expression of the TV regularization is as follows,
\Gamma(x) = \| x \|_{tv} = \| \Delta_h x \|_1 + \| \Delta_v x \|_1,
where $\Delta_h$ and $\Delta_v$ represent the horizontal and vertical gradient operators, respectively.
Given the current estimates of the blur kernel $K$ and the motion matrices $M_i$, the HR image can be estimated by the conjugate gradient (CG) method.
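The sketch below illustrates this regularized reconstruction in the simplest possible way: it minimizes the data fidelity term plus a smoothed TV term by plain gradient descent, with the operators S K M_i and their adjoints passed in as callables and an initial estimate x0 supplied by the caller (e.g., an upsampled LR frame). It is a sketch under these assumptions, not the CG/MM scheme used by the paper (Section 2.5).

```python
import numpy as np

def grad_h(x):  return np.roll(x, -1, axis=1) - x   # horizontal difference, Delta_h
def grad_v(x):  return np.roll(x, -1, axis=0) - x   # vertical difference, Delta_v
def grad_hT(g): return np.roll(g, 1, axis=1) - g    # adjoint of Delta_h
def grad_vT(g): return np.roll(g, 1, axis=0) - g    # adjoint of Delta_v

def mfsr_tv(y_list, A_list, AT_list, x0, lam=0.01, steps=200, tau=0.2, eps=1e-3):
    """Gradient-descent sketch of the regularized problem:
    minimize sum_i ||y_i - A_i x||^2 + lam * TV(x),
    with |g| smoothed as sqrt(g^2 + eps^2) so the objective is differentiable."""
    x = x0.copy()
    for _ in range(steps):
        g = np.zeros_like(x)
        for y, A, AT in zip(y_list, A_list, AT_list):
            g += 2.0 * AT(A(x) - y)                            # data-fidelity gradient
        gh, gv = grad_h(x), grad_v(x)
        g += lam * (grad_hT(gh / np.sqrt(gh**2 + eps**2))      # smoothed-TV gradient
                    + grad_vT(gv / np.sqrt(gv**2 + eps**2)))
        x -= tau * g   # tau must be small enough for the operators at hand
    return x
```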

2.3. Cross-Channel Prior

The current MFSR algorithms can reconstruct HR grayscale images; therefore, the main idea of this paper is to enhance the SR results of color images by introducing a cross-channel prior.
A color image consists of R, G, and B channels. The pixel values of the three channels differ, but their shape information is similar; i.e., object edges appear at the same locations and with similar shapes in all color channels. This cross-channel prior can be mathematically approximated as
\frac{\nabla x_r}{x_r} \approx \frac{\nabla x_g}{x_g} \approx \frac{\nabla x_b}{x_b},
where $x_r$, $x_g$, and $x_b$ represent the R, G, and B channels of the image $x$, and $\nabla$ denotes the gradient operator.
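As an illustration of the prior, the following NumPy sketch measures how strongly a pair of channels violates it, using forward differences for the gradient; the function name and the normalization by image size are our assumptions.

```python
import numpy as np

def cross_channel_residual(x_c, x_l, eps=1e-8):
    """l1 magnitude of the cross-channel mismatch between channels x_c and x_l:
    || grad(x_c) * x_l - grad(x_l) * x_c ||_1, horizontal plus vertical,
    normalized by the image size so values are comparable across images."""
    dh = lambda z: np.roll(z, -1, axis=1) - z
    dv = lambda z: np.roll(z, -1, axis=0) - z
    r_h = dh(x_c) * x_l - dh(x_l) * x_c
    r_v = dv(x_c) * x_l - dv(x_l) * x_c
    return (np.abs(r_h).sum() + np.abs(r_v).sum()) / (x_c.size + eps)
```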

2.4. Proposed Method

By adding the cross-channel prior, our algorithm can be formulated as the following optimization problem,
\tilde{x}_c = \arg\min_{x_c} \sum_{i=1}^{s} \left\| y_i^c - S K M_i x_c \right\|_2^2 + \lambda_c \| x_c \|_{tv} + \sum_{l \neq c} \beta_{cl} \left\| \nabla x_c \cdot x_l - \nabla x_l \cdot x_c \right\|_1,
where the first and second terms are the same as those of the SR method applied to grayscale images, and the third term represents the cross-channel prior. The parameters $\lambda_c$ and $\beta_{cl}$, with $c, l \in \{ r, g, b \}$, are the weights of the image prior and the cross-channel prior, respectively.
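For reference, a sketch that evaluates this objective for one channel is given below, with the gradient approximated by horizontal and vertical forward differences (as in the decomposition of Section 2.5). The operator callables in A_list (standing for S K M_i) and the function and parameter names are illustrative assumptions; such a routine is mainly useful for checking that an implementation decreases the cost.

```python
import numpy as np

def sr_objective(x_c, y_list, A_list, x_others, lam_c, beta_c):
    """Objective value for channel x_c with the other channels x_others held fixed.
    A_list contains callables implementing S K M_i for each LR frame."""
    dh = lambda z: np.roll(z, -1, axis=1) - z
    dv = lambda z: np.roll(z, -1, axis=0) - z
    data = sum(np.sum((y - A(x_c)) ** 2) for y, A in zip(y_list, A_list))
    tv = np.abs(dh(x_c)).sum() + np.abs(dv(x_c)).sum()
    cross = sum(np.abs(dh(x_c) * x_l - dh(x_l) * x_c).sum()
                + np.abs(dv(x_c) * x_l - dv(x_l) * x_c).sum()
                for x_l in x_others)
    return data + lam_c * tv + beta_c * cross
```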

2.5. Deconvolution Algorithm

The conjugate gradient method is used to solve for the RGB channels of the image alternately, where each channel is solved while the other two channels are kept constant. The deconvolution algorithms for the three channels are similar, so we take the R channel as an example. First, since $x_g$ and $x_b$ are fixed, the cross-channel prior terms can be written as
\sum_{l \neq r} \left( \left\| \Delta_h x_r \cdot x_l - \Delta_h x_l \cdot x_r \right\|_1 + \left\| \Delta_v x_r \cdot x_l - \Delta_v x_l \cdot x_r \right\|_1 \right) = \sum_{l \neq r} \left( \left\| (D_l \Delta_h - D_h^l) x_r \right\|_1 + \left\| (D_l \Delta_v - D_v^l) x_r \right\|_1 \right),
where $D_l$ denotes the diagonal matrix whose diagonal is taken from $x_l$, and $D_h^l$ and $D_v^l$ are the diagonal matrices with $\Delta_h x_l$ and $\Delta_v x_l$ as their diagonal elements, respectively.
Since the $\ell_1$ norm cannot be minimized directly by gradient methods, the majorization–minimization (MM) method [30] is used in this paper to approximate the $\ell_1$ norm. By introducing an auxiliary vector $w$, the $\ell_1$ norm can be bounded as
\| A x \|_1 \leq \sum_{j} \left( \frac{(A x)_j^2}{2 w_j} + \frac{w_j}{2} \right),
where $w_j > 0$, and equality holds when $w = | A x |$ elementwise.
Therefore, the solution of Equation (5) can be obtained by iteratively solving the following linear system,
\left[ \sum_{i=1}^{s} (S K M_i)^T (S K M_i) + \lambda_c \left( \Delta_h^T W_h \Delta_h + \Delta_v^T W_v \Delta_v \right) + \sum_{l = g, b} \beta_{cl} \left( (D_l \Delta_h - D_h^l)^T W_l^h (D_l \Delta_h - D_h^l) + (D_l \Delta_v - D_v^l)^T W_l^v (D_l \Delta_v - D_v^l) \right) \right] x_r^{t+1} = \sum_{i=1}^{s} (S K M_i)^T y_i^r,
where $x_r^{t+1}$ is the $(t+1)$th estimate to be computed, and the auxiliary weight matrices
W_h = \operatorname{diag}\left( 2 \left| \Delta_h x_r^t \right| \right)^{-1},
W_v = \operatorname{diag}\left( 2 \left| \Delta_v x_r^t \right| \right)^{-1},
W_l^h = \operatorname{diag}\left( 2 \left| (D_l \Delta_h - D_h^l) x_r^t \right| \right)^{-1},
and
W_l^v = \operatorname{diag}\left( 2 \left| (D_l \Delta_v - D_v^l) x_r^t \right| \right)^{-1}
are calculated from the $t$th estimate $x_r^t$. The termination condition of the iteration is
\frac{\left\| x^{t+1} - x^{t} \right\|_2^2}{\left\| x^{t} \right\|_2^2} \leq \varepsilon .
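A sketch of the resulting per-channel update is given below: it forms the MM weights from the current estimate, solves the reweighted linear system with SciPy's conjugate gradient, and stops using the relative-change criterion above. The operator callables for S K M_i and their adjoints, the weight floor w_floor, and the fixed CG iteration count are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def dh(z):  return np.roll(z, -1, axis=1) - z
def dv(z):  return np.roll(z, -1, axis=0) - z
def dhT(z): return np.roll(z, 1, axis=1) - z
def dvT(z): return np.roll(z, 1, axis=0) - z

def solve_channel(y_list, A_list, AT_list, x_c, x_others, lam, beta,
                  n_outer=10, eps=1e-4, w_floor=1e-3):
    """MM iteration for one channel, the other two channels held fixed.
    A_list/AT_list implement S K M_i and its adjoint as callables."""
    shape, n = x_c.shape, x_c.size
    b = sum(AT(y) for y, AT in zip(y_list, AT_list)).ravel()   # right-hand side

    for _ in range(n_outer):
        x_prev = x_c.copy()
        # MM weights 1/(2|A x^t|), floored for numerical stability
        Wh = 1.0 / (2.0 * np.maximum(np.abs(dh(x_c)), w_floor))
        Wv = 1.0 / (2.0 * np.maximum(np.abs(dv(x_c)), w_floor))
        Wl = [(xl,
               1.0 / (2.0 * np.maximum(np.abs(xl * dh(x_c) - dh(xl) * x_c), w_floor)),
               1.0 / (2.0 * np.maximum(np.abs(xl * dv(x_c) - dv(xl) * x_c), w_floor)))
              for xl in x_others]

        def apply_lhs(v):                      # left-hand side operator of the system
            x = v.reshape(shape)
            out = sum(AT(A(x)) for A, AT in zip(A_list, AT_list))
            out = out + lam * (dhT(Wh * dh(x)) + dvT(Wv * dv(x)))
            for xl, wlh, wlv in Wl:
                rh = xl * dh(x) - dh(xl) * x   # (D_l Delta_h - D_h^l) x
                rv = xl * dv(x) - dv(xl) * x
                out = out + beta * (dhT(xl * (wlh * rh)) - dh(xl) * (wlh * rh)
                                    + dvT(xl * (wlv * rv)) - dv(xl) * (wlv * rv))
            return out.ravel()

        sol, _ = cg(LinearOperator((n, n), matvec=apply_lhs), b,
                    x0=x_c.ravel(), maxiter=50)
        x_c = sol.reshape(shape)
        if np.sum((x_c - x_prev) ** 2) / np.sum(x_prev ** 2) < eps:  # stopping rule
            break
    return x_c
```

In the full algorithm, this routine is applied to the R, G, and B channels in turn, with the other two channels held at their latest estimates, until all channels satisfy the stopping criterion.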

3. Results

In this paper, we test the proposed algorithm by comparing it with several state-of-the-art SR algorithms. The results are evaluated with two objective metrics: the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index [31].

3.1. Simulation Images

Eight HR images of size 256 × 256 are used as original images for the simulation experiments; they are shown in Figure 1. Each original image is used to create eight synthetic LR images through the image degradation described in Formula (1). The degradation process is as follows. The HR image is first shifted by random sub-pixel displacements and rotated by random angles to create a sequence of eight images. Each image in the sequence is then blurred by a Gaussian kernel of size 5 × 5 with unit variance and downsampled by a factor of 2 in both the vertical and horizontal directions. Finally, the LR image sequence is obtained by adding 20 dB Gaussian noise. To evaluate the proposed algorithm, four representative methods are compared in our experiments:
  • Grayscale image SR performed on the RGB color channels independently.
  • Color image SR performed in the YCbCr color space [27].
  • Color image SR with chrominance regularization [32].
  • Color image SR with the proposed cross-channel prior.
The method in [32] is a learning-based method with a color constraint; here, we replace it with an MFSR method using the same color constraint. All methods use the same grayscale SR algorithm [26] to eliminate errors caused by different deconvolution methods.
The PSNR is the most common and widely used objective evaluation index for images; it evaluates image quality through the pixel-wise error between images. Given a test image $\hat{x}$ of size $M \times N$ and its original image $x$, the PSNR is defined as
\mathrm{PSNR} = 10 \log_{10} \frac{L^2}{\mathrm{MSE}}, \qquad \mathrm{MSE} = \frac{1}{3MN} \sum_{c = r, g, b} \sum_{m=1}^{M} \sum_{n=1}^{N} \left( x(m, n, c) - \hat{x}(m, n, c) \right)^2,
where $L$ is the peak value of the image data; for an 8-bit image, $L$ is 255. The PSNR results are shown in Table 1. It is clear that the proposed method outperforms the other SR methods.
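As a reference implementation of the PSNR definition above, the following NumPy sketch assumes 8-bit color images stored as M × N × 3 arrays; the function name psnr is ours.

```python
import numpy as np

def psnr(x, x_hat, peak=255.0):
    """PSNR with the MSE averaged over all 3*M*N color samples."""
    x, x_hat = np.asarray(x, dtype=np.float64), np.asarray(x_hat, dtype=np.float64)
    mse = np.mean((x - x_hat) ** 2)          # 1/(3MN) * sum of squared errors
    return 10.0 * np.log10(peak ** 2 / mse)
```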
Smoothing an image improves its PSNR but leads to the loss of texture details, so it is not sufficient to evaluate the results by the PSNR alone. SSIM [31] is another widely used objective evaluation standard. SSIM measures image similarity in terms of luminance, contrast, and structure, and it is more sensitive to edge information. Hence, the combined use of SSIM and PSNR can accurately evaluate the quality of the reconstructed images.
The SSIM is defined as follows:
\mathrm{SSIM} = \frac{(2 \mu_x \mu_{\hat{x}} + C_1)(2 \sigma_{x \hat{x}} + C_2)}{(\mu_x^2 + \mu_{\hat{x}}^2 + C_1)(\sigma_x^2 + \sigma_{\hat{x}}^2 + C_2)},
where $\mu_x$ and $\mu_{\hat{x}}$ are the means and $\sigma_x$ and $\sigma_{\hat{x}}$ are the standard deviations of the original image $x$ and the test image $\hat{x}$, respectively; $\sigma_{x\hat{x}}$ is the covariance between $x$ and $\hat{x}$, and $C_1$ and $C_2$ are constants. The SSIM is equal to 1 only if $x = \hat{x}$. The comparison of the SSIM of the different SR methods is shown in Table 2. Our algorithm again provides better results than the other SR methods.
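The sketch below evaluates the SSIM formula above with global image statistics on each channel and averages the three channels; standard implementations (e.g., skimage.metrics.structural_similarity) compute it over local windows, so values will differ slightly. The constants C1 and C2 follow the common (0.01L)^2 and (0.03L)^2 convention, which is an assumption of this sketch.

```python
import numpy as np

def ssim_global(x, x_hat, peak=255.0):
    """SSIM with global statistics, averaged over the RGB channels."""
    C1, C2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    vals = []
    for c in range(3):
        a, b = x[..., c].astype(np.float64), x_hat[..., c].astype(np.float64)
        mu_a, mu_b = a.mean(), b.mean()
        var_a, var_b = a.var(), b.var()
        cov = ((a - mu_a) * (b - mu_b)).mean()
        vals.append(((2 * mu_a * mu_b + C1) * (2 * cov + C2))
                    / ((mu_a**2 + mu_b**2 + C1) * (var_a + var_b + C2)))
    return float(np.mean(vals))
```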
Figures 2–9 show the results of the different SR methods. It is clear from the images that our algorithm performs better than the comparison algorithms.

3.2. Real Data

In the real-data experiment, a dataset is used to verify the proposed algorithm. The dataset is a sequence of 40 images of size 115 × 138 obtained from the MDSP dataset [33]. The real data are super-resolved with a magnification factor of 3, and the blur kernel is a Gaussian kernel with a variance of 1 and a size of 5 × 5. The SR results on the real data are shown in Figure 10.
The traditional bicubic interpolation method contributes little to image SR. The SR method that reconstructs the RGB channels independently can restore lost details and correct chromatic aberration owing to the robustness of the SR algorithm. The results of SR method 2 are seriously affected by chromatic aberration because only the luminance channel is reconstructed when SR is performed in the YCbCr color space. When the chromatic aberration is not severe, SR method 3 can use the chrominance regularization prior to enhance the SR result, but when the chromatic aberration is severe, this prior degrades the sharpness of the reconstructed image. It is obvious from the figure that the proposed method with the cross-channel prior achieves the best SR results.

4. Conclusions

In this paper, we propose a multiframe SR algorithm for color images based on a new cross-channel prior. The simulation results show that the proposed algorithm can effectively suppress noise and preserve edge details, yielding a good reconstruction effect. The real-data experiments show that the proposed algorithm can also suppress chromatic aberration and achieve state-of-the-art SR results.

Author Contributions

Conceptualization, B.X. and Z.Y.; methodology, S.S.; software, S.S.; validation, S.S.; formal analysis, S.S.; investigation, S.S.; resources, S.S.; data curation, S.S.; writing—original draft preparation, S.S.; writing—review and editing, S.S. and Z.Y.; visualization, S.S.; supervision, Z.Y.; project administration, B.X. and Z.Y.; funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China under Grant 2017YFB0502902 and the Shanghai Science and Technology Development Funds under Grant 18QA1404000.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Park, S.C.; Park, M.K.; Kang, M.G. Super-resolution image reconstruction: A technical overview. IEEE Signal Process. Mag. 2003, 20, 21–36.
  2. Khattab, M.M.; Zeki, A.M.; Alwan, A.A.; Badawy, A.S.; Thota, L.S. Multi-Frame Super-Resolution: A Survey. In Proceedings of the IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Madurai, India, 13–15 December 2018.
  3. Tang, B.; Sapiro, G.; Caselles, V. Color image enhancement via chromaticity diffusion. IEEE Trans. Image Process. 2001, 10, 701–707.
  4. Mousavi, H.S.; Monga, V. Sparsity-Based Color Image Super Resolution via Exploiting Cross Channel Constraints. IEEE Trans. Image Process. 2017, 26, 5094–5106.
  5. Saafin, W.; Vega, M.; Molina, R.; Katsaggelos, A.K. Compressed sensing super resolution of color images. In Proceedings of the 24th European Signal Processing Conference (EUSIPCO), Budapest, Hungary, 29 August–2 September 2016; pp. 1563–1567.
  6. Tsai, R.; Huang, T. Multiframe image restoration and registration. Adv. Comput. Vis. Image Process. 1984, 1, 317–339.
  7. Kim, S.P.; Bose, N.K.; Valenzuela, H.M. Recursive reconstruction of high resolution image from noisy undersampled multiframes. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1013–1027.
  8. Bose, N.K.; Kim, H.C.; Valenzuela, H.M. Recursive implementation of total least squares algorithm for image reconstruction from noisy, undersampled multiframes. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Minneapolis, MN, USA, 27–30 April 1993; Volume 5, pp. 269–272.
  9. Rhee, S.; Kang, M.G. DCT-based regularized algorithm for high-resolution image reconstruction. In Proceedings of the International Conference on Image Processing, Kobe, Japan, 24–28 October 1999; Volume 3, pp. 184–187.
  10. Gong, R.; Wang, Y.; Cai, Y.; Shao, X. How to deal with color in super resolution reconstruction of images. Opt. Express 2017, 25, 11144–11156.
  11. Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image super-resolution via sparse representation. IEEE Trans. Image Process. 2010, 19, 2861–2873.
  12. Kappeler, A.; Yoo, S.; Dai, Q.; Katsaggelos, A.K. Video Super-Resolution With Convolutional Neural Networks. IEEE Trans. Comput. Imaging 2016, 2, 109–122.
  13. Bhattacharya, S.; Sukthankar, R.; Shah, M. A framework for photo-quality assessment and enhancement based on visual aesthetics. In Proceedings of the 18th ACM International Conference on Multimedia, Firenze, Italy, 25–29 October 2010; pp. 271–280.
  14. Wronski, B.; Garcia-Dorado, I.; Ernst, M.; Kelly, D.; Krainin, M.; Liang, C.K.; Levoy, M.; Milanfar, P. Handheld multi-frame super-resolution. ACM Trans. Graph. 2019, 38, 28.
  15. Irani, M.; Peleg, S. Improving resolution by image registration. CVGIP Graph. Model. Image Process. 1991, 53, 231–239.
  16. Song, H.; He, X.; Chen, W.; Sun, Y. An improved iterative back-projection algorithm for video super-resolution reconstruction. In Proceedings of the Symposium on Photonics and Optoelectronics, Chengdu, China, 19–21 June 2010; pp. 1–4.
  17. Seema, R.; Bailey, K. Multi-frame Image Super-Resolution by Interpolation and Iterative Backward Projection. In Proceedings of the 2nd International Conference on Signal Processing and Communication (ICSPC), Coimbatore, India, 29–30 March 2019; pp. 36–40.
  18. Patti, A.J.; Altunbasak, Y. Artifact reduction for set theoretic super resolution image reconstruction with edge adaptive constraints and higher-order interpolants. IEEE Trans. Image Process. 2001, 10, 179–186.
  19. Fan, C.; Wu, C.; Li, G.; Ma, J. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images. Sensors 2017, 17, 362.
  20. Ma, Z.; Ren, G. Projection onto the convex sets model based on non-downsampling contourlet transform and high-frequency iteration. Electron. Lett. 2020, 56, 1054–1056.
  21. Shen, H.; Zhang, L.; Huang, B.; Li, P. A MAP approach for joint motion estimation, segmentation, and super resolution. IEEE Trans. Image Process. 2007, 16, 479–490.
  22. Belekos, S.P.; Galatsanos, N.P.; Katsaggelos, A.K. Maximum a posteriori video super-resolution using a new multichannel image prior. IEEE Trans. Image Process. 2010, 19, 1451–1464.
  23. Nascimento, T.P.d.; Salles, E.O.T. Multi-Frame Super-Resolution Combining Demons Registration and Regularized Bayesian Reconstruction. IEEE Signal Process. Lett. 2020, 27, 2009–2013.
  24. Liu, C.; Sun, D. On Bayesian Adaptive Video Super Resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 346–360.
  25. Wanderley, D.S.; Petraglia, M.R.; Gomes, J.G.R.C. Color image super-resolution based on Wiener filters. In Proceedings of the International Telecommunications Symposium (ITS), Sao Paulo, Brazil, 17–20 August 2014; pp. 1–5.
  26. Villena, S.; Vega, M.; Molina, R.; Katsaggelos, A.K. Bayesian super-resolution image reconstruction using an L1 prior. In Proceedings of the 6th International Symposium on Image and Signal Processing and Analysis, Salzburg, Austria, 16–18 September 2009; pp. 152–157.
  27. Herold, I.; Young, S.S. Super resolution for color imagery. In Proceedings of the IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 10–12 October 2017.
  28. Xu, J.; Chang, Z.; Fan, J.; Zhao, X.; Wu, X.; Wang, Y.; Zhang, X. Super-resolution via adaptive combination of color channels. Multimed. Tools Appl. 2015, 76, 1553–1584.
  29. George, S.N. Multi-frame image super resolution using spatially weighted total variation regularisations. IET Image Process. 2020, 14, 2187–2194.
  30. Bioucas-Dias, J.M.; Figueiredo, M.A.T.; Oliveira, J. Total Variation-Based Image Deconvolution: A Majorization-Minimization Approach. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Toulouse, France, 14–19 May 2006; Volume 2, pp. 861–864.
  31. Sara, U.; Akter, M.; Uddin, M.S. Image Quality Assessment through FSIM, SSIM, MSE and PSNR—A Comparative Study. J. Comput. Commun. 2019, 7, 8–18.
  32. Xu, Z.; Ma, Q.; Yuan, F. Single color image super-resolution using sparse representation and color constraint. J. Syst. Eng. Electron. 2020, 31, 266–271.
  33. Milanfar, P. MDSP Super-Resolution and Demosaicing Datasets. Available online: http://www.soe.ucsc.edu/~milanfar/software/sr-datasets.html (accessed on 23 April 2021).
Figure 1. Eight HR images that are often used in SR experiments. (a) airplane, (b) parrot, (c) boat, (d) kid, (e) butterfly, (f) face, (g) child, (h) bird.
Figure 2. SR results of image airplane by different SR methods. (a) the first frame of the LR images, (b) traditional bicubic interpolation, (c) reconstruct the RGB color channels independently, (d) SR with YCbCr color space, (e) SR using chrominance regularization, (f) our proposed method.
Figure 3. SR results of image parrot by different SR methods. (a) the first frame of the LR images, (b) traditional bicubic interpolation, (c) reconstruct the RGB color channels independently, (d) SR with YCbCr color space, (e) SR using chrominance regularization, (f) our proposed method.
Figure 4. SR results of image boat by different SR methods. (a) the first frame of the LR images, (b) traditional bicubic interpolation, (c) reconstruct the RGB color channels independently, (d) SR with YCbCr color space, (e) SR using chrominance regularization, (f) our proposed method.
Figure 5. SR results of image kid by different SR methods. (a) the first frame of the LR images, (b) traditional bicubic interpolation, (c) reconstruct the RGB color channels independently, (d) SR with YCbCr color space, (e) SR using chrominance regularization, (f) our proposed method.
Figure 6. SR results of image butterfly by different SR methods. (a) the first frame of the LR images, (b) traditional bicubic interpolation, (c) reconstruct the RGB color channels independently, (d) SR with YCbCr color space, (e) SR using chrominance regularization, (f) our proposed method.
Figure 7. SR results of image face by different SR methods. (a) the first frame of the LR images, (b) traditional bicubic interpolation, (c) reconstruct the RGB color channels independently, (d) SR with YCbCr color space, (e) SR using chrominance regularization, (f) our proposed method.
Figure 8. SR results of image child by different SR methods. (a) the first frame of the LR images, (b) traditional bicubic interpolation, (c) reconstruct the RGB color channels independently, (d) SR with YCbCr color space, (e) SR using chrominance regularization, (f) our proposed method.
Figure 9. SR results of image bird by different SR methods. (a) the first frame of the LR images, (b) traditional bicubic interpolation, (c) reconstruct the RGB color channels independently, (d) SR with YCbCr color space, (e) SR using chrominance regularization, (f) our proposed method.
Figure 10. Comparison of the reconstruction results of real images. The images are reconstructed by the comparison algorithms and our proposed algorithm, respectively. (a,f,k) traditional bicubic interpolation and its details, (b,g,l) reconstruction of the RGB color channels independently and the details, (c,h,m) SR with YCbCr color space and the details, (d,i,n) SR using chrominance regularization and the details, (e,j,o) our proposed method and the details.
Table 1. Comparison of the PSNR of different SR methods.

Image      Bicubic    SR Method 1   SR Method 2   SR Method 3   SR Method 4
airplane   23.3448    25.2329       23.6477       25.8175       26.7506
parrot     27.9443    29.2376       27.7819       29.9670       30.8983
boat       24.4134    27.4984       25.8517       29.4892       30.8083
kids       25.1000    25.3873       24.0937       27.8454       28.8605
butterfly  22.3581    24.5738       24.3927       25.7262       26.8961
face       27.5945    29.3142       25.9966       29.3134       30.5205
child      26.4463    28.3403       24.4222       28.3416       30.7550
bird       28.1778    31.1675       29.0075       32.1796       33.6357
Table 2. Comparison of the SSIM of different SR methods.

Image      Bicubic    SR Method 1   SR Method 2   SR Method 3   SR Method 4
airplane   0.7757     0.7965        0.7638        0.8065        0.8956
parrot     0.9319     0.9331        0.9351        0.9396        0.9631
boat       0.8307     0.8620        0.8920        0.9052        0.9388
kids       0.8375     0.8306        0.8425        0.8575        0.9067
butterfly  0.9039     0.9351        0.9437        0.9483        0.9631
face       0.8154     0.8441        0.8263        0.8440        0.8757
child      0.8619     0.8847        0.9027        0.8848        0.9328
bird       0.9380     0.9472        0.9361        0.9519        0.9646
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

