Article

Virtual Restoration of Stained Chinese Paintings Using Patch-Based Color Constrained Poisson Editing with Selected Hyperspectral Feature Bands

1 College of Geoscience and Surveying Engineering, China University of Mining and Technology (Beijing), Beijing 100083, China
2 School of Geomatics and Urban Spatial Informatics, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
3 Beijing Key Laboratory for Architectural Heritage Fine Reconstruction & Health Monitoring, Beijing University of Civil Engineering and Architecture, Beijing 102616, China
4 Capital Museum, Beijing 100045, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(11), 1384; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11111384
Submission received: 9 May 2019 / Revised: 4 June 2019 / Accepted: 6 June 2019 / Published: 10 June 2019
(This article belongs to the Special Issue Remote Sensing Image Restoration and Reconstruction)

Abstract: Stains, one of the most common forms of degradation of paper cultural relics, not only affect a painting's appearance but can also obscure the text, patterns, and colors the relic contains. Virtual restoration based on common red–green–blue (RGB) images, which removes the degraded regions and then fills the lacunae from the image's known parts using inpainting technology, can produce visually plausible results. However, lacking information from inside the degraded regions, such methods often yield inconsistent structures when stains cover several color materials. To effectively remove stains and recover the covered original content of Chinese paintings, we propose a novel method based on Poisson editing that exploits the information inside the degraded regions of three selected feature bands as auxiliary guidance, since the selected feature bands capture fewer stains and can expose the covered information. To adapt Poisson editing to stain removal, the feature bands are also used to search for an optimal patch for each pixel in the stain region, and the retrieved patch provides a color constraint on the original Poisson editing that ensures the painting's original colors are restored. The method consists of two main steps: selecting feature bands from hyperspectral data with established rules, and reconstructing the stain-contaminated regions of the RGB image with color-constrained Poisson editing. Four Chinese paintings ('Fishing', 'Crane and Banana', 'the Hui Nationality Painting', and 'Lotus Pond and Wild Goose') with different color materials were used to test the performance of the proposed method. Visual results show that the method effectively removes or dilutes the stains while restoring a painting's original colors.
Comparing the values of restored pixels with non-stained pixels of the same color materials (used as references), images processed by the proposed method had the lowest average root mean square error (RMSE), normalized absolute error (NAE), and average difference (AD), which indicates that it is an effective method for restoring stained Chinese paintings.

1. Introduction

Chinese paintings, as one of the most important forms of artistic expression of traditional Chinese culture, are among the most valuable treasures of human civilization. However, these ancient paintings suffer from various undesirable patterns of degradation and face the threat of being lost due to natural environmental factors or improper preservation. Stains, one of the most common types of degradation, are spots formed by paper contamination. They not only affect a painting's appearance but can also cover the text, patterns, and colors on the paper cultural relic. Scholars performed a sampling survey in the Zhengzhou Museum (among the first group of Class A museums in China) and found that 96% of ancient paintings were contaminated by stains, of which 68% were severely and 28% slightly contaminated [1]. To restore their original appearance, traditional manual restoration adopts physical or chemical methods to remove the stains [2,3]. However, these approaches require repeated trials, and the repair process carries many risks: any carelessness may cause permanent, irreversible damage to the ancient paintings.
Virtual restoration can reduce human interference and provide a reference for physical repair [4]. It refers to removing undesirable degradations from the collected images of a painting to restore its original appearance, which improves the artistic value and facilitates subsequent interpretation. Virtual restoration has become an important field in the digital preservation of cultural relics [5,6], and most studies adopt standard three-channel red–green–blue (RGB) images [7,8,9]. With the development of imaging spectroscopy, hyperspectral images have also been introduced for restoration [2,10,11]. Considering the type of image used, virtual restoration is classified into two categories in this paper: RGB-based methods, which use no other auxiliary information source, and hyperspectral-based methods.
RGB-based methods re-estimate the color components of degraded regions with inpainting algorithms, based on the assumption that pixels in these regions share similar geometrical structures or statistical properties with the painting's undegraded parts [12]. Inpainting algorithms mainly fall into two types: diffusion-based and exemplar-based [12,13]. Among diffusion-based algorithms, Giakoumis et al. [14] applied controlled anisotropic diffusion to inpaint cracks on digitized paintings. Wu et al. [7] replaced orthogonalized diffusion with cross diffusion to improve the filling order of the curvature-driven diffusion (CDD) algorithm and remove fissures in murals. Such algorithms are well suited to removing small areas of degradation, but when applied to large regions, they tend to produce a blurry image. Among exemplar-based algorithms, Pei et al. [9] proposed an annular scan synthesis method that gradually fills a region from the outer boundary inward to remove stains and crevices on murals, but it requires adding auxiliary strokes to connect breaches. To preserve the connectivity of structures, recent studies have preferred priority-defined exemplar-based algorithms, which define a filling order for target pixels so that inpainting proceeds along propagated structures [15,16,17,18,19]. For example, the Criminisi algorithm [20], which defines pixel priority with a confidence term and a data term, has proven effective in the restoration of murals [16,17]. Recently, sparse representation-based inpainting methods have also been introduced to solve restoration problems [8,21].
Although exemplar-based and sparse representation-based methods can inpaint large regions with the known texture parts of the image, due to the lack of information inside the degradations, they still face a huge challenge when applied to images with complex structures. Regarding stains, although the covered pigment information remains, the presence of stains leads to changes in the color of the image, which makes it difficult to use residual information in restoration. Additionally, if the stained areas are directly removed and inpainted with existing methods used in the literature [9,17], it is difficult to accurately restore the original internal structures.
Hyperspectral-based methods can provide additional information for stain removal, since hyperspectral data cover a wide spectral range and the covered pigment signal remains. However, current studies have mainly focused on visually enhancing blurred patterns and texts in cultural relics by mining information from hyperspectral data [22,23,24,25]. Regarding virtual restoration, Hou et al. [2] performed a maximum noise fraction (MNF) transformation on the hyperspectral data to concentrate the stain information in certain components, then performed the inverse MNF transformation after discarding those components. Although stains on ancient paintings can be removed this way, the restoration result depends on whether the discarded components mainly contain stain information. Furthermore, studies have shown that the covering effect of stains and artifacts decreases at longer wavelengths compared with the RGB bands of hyperspectral data [11,26]. Based on this feature, Kim et al. [11] proposed replacing the gradient of visible bands with that of invisible near-infrared bands, which capture fewer artifacts, and then reconstructing the image from the newly generated gradients. This approach can effectively remove ink bleed, corrosion, and foxing in old documents. However, compared with ancient documents, paintings are more complex, with varied colors and structural information, so this method struggles to restore their original appearance while removing stains.
To effectively remove stains on Chinese paintings, this paper proposes restoring a painting's original appearance with the aid of selected hyperspectral feature bands. We demonstrate that, for Chinese paintings as well, the covering effect of stains decreases at longer wavelengths compared with the RGB bands of the hyperspectral data. Based on this feature, we propose a patch-based color-constrained Poisson editing method to remove stains from Chinese paintings. The main contributions of the proposed method are twofold. On the one hand, the introduction of feature bands provides prior information for the restoration, which overcomes the limits of RGB-based methods caused by the lack of information inside the degraded regions. On the other hand, the construction of color constraints from the searched matches makes Poisson editing suitable for stain removal. It is worth noting that although hyperspectral data comprise hundreds of bands, this paper focuses on restoring the RGB image (R: 640.31 nm, G: 549.79 nm, B: 460.20 nm) of the hyperspectral data, since it is the most natural visualization of the paintings.
The rest of this paper is organized as follows. Section 2 describes the algorithms used for virtual restoration, including the preprocessing, feature band selection, and patch-based color-constrained Poisson editing. Section 3 presents the experimental results and the comparison with other restoration methods. Section 4 discusses the parameter settings and the effect of feature band selection. Finally, the conclusions drawn from the study are presented in Section 5.

2. Methods

We found that Chinese paintings share a feature with old documents: the covering effect of stains decreases in longer-wavelength images compared with the RGB image. This is demonstrated by three examples in Figure 1 (the hyperspectral images of the paintings are from the Digital Heritage and Virtual Restoration Laboratory of Beijing University of Civil Engineering and Architecture); although the stains are obvious at the wavelengths of the RGB image, they are almost invisible at a wavelength of 877.4 nm, where the covered information is exposed. Based on this feature, we propose the patch-based color-constrained Poisson editing method to reconstruct the manually marked stain regions of the RGB image using selected hyperspectral feature bands that are less affected by stains.
The workflow is summarized in Figure 2. First, the collected hyperspectral images were preprocessed, which consists of radiometric correction and image denoising. Second, rules were established to select three feature bands that capture fewer stains. The feature bands, combined with the RGB image, were then used to construct the color constraint. On the one hand, the feature bands, which can expose the covered information, were used to search the non-stained region for an optimal patch for each pixel located in the manually marked stain region. The patch from the RGB image at the optimal patch location was used to construct the constraint value for the pixel in the stain region, based on the assumption that the relative spatial locations of optimal patches coincide between the RGB image and the feature bands. On the other hand, each stain region of the feature bands was segmented into blocks by a quadtree to determine the constraint locations. Finally, Poisson editing was used to reconstruct the stain region of the RGB image with the established color constraint, in which the contaminated RGB image was regarded as the target image and the feature bands as the source image. The details are described in the following sections.

2.1. Data Acquisition and Preprocessing

The paintings processed in this paper are from the Capital Museum of China, and their hyperspectral data were captured by the Digital Heritage and Virtual Restoration Laboratory of Beijing University of Civil Engineering and Architecture. The hyperspectral images were acquired with the Themis Vision Systems VNIR400H, a pushbroom-scanning hyperspectral imaging system. This system captures images at 1040 wavelength bands from 377.45 nm (ultraviolet) to 1033.10 nm (near infrared), with a sampling interval of 0.6 nm and a spectral resolution of 2.8 nm. It uses two halogen lamps as the light source and has an instantaneous field of view of 30°. During acquisition, the painting was placed on a support facing the vertically mounted camera at a distance of about 0.8 m, and each collected image is 1392 × 1000 pixels. To avoid disturbance from other light sources, only the two halogen lamps illuminated the painting during collection. As a painting is usually large, it cannot be captured in a single image; its collection is completed with multiple images, each overlapping its neighbors by about 30%. In this paper, we selected one or two stain-contaminated images from each painting as the experimental data.
The captured image is always disturbed by the uneven intensity distribution of the light source and by dark current noises. To reduce these effects, radiometric correction was performed with the collected standard white board data and dark current data according to the following formula:
$$R = \frac{R_{\mathrm{raw}} - R_{\mathrm{dark}}}{R_{\mathrm{white}} - R_{\mathrm{dark}}},$$
where $R$ is the calibrated image, $R_{\mathrm{raw}}$ is the collected original image, $R_{\mathrm{dark}}$ is the dark-current data, and $R_{\mathrm{white}}$ is the standard white-board image. After removing severely noisy bands, the bands from 433.57 to 974.96 nm (851 bands in total) were chosen as the input data for our approach. To further reduce image noise and improve image quality, an MNF transformation was applied, in which the first ten components (those with the largest eigenvalues) were used to perform the inverse MNF transformation, since component image quality decreases steadily with increasing component number [27].
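As an illustration, the flat-field calibration of Equation (1) takes only a few lines. The following is a minimal sketch (the function name and the (rows, cols, bands) array layout are our own conventions, not part of the acquisition software):

```python
import numpy as np

def radiometric_correction(raw, dark, white):
    """Flat-field calibration of a hyperspectral cube (Equation (1)).

    raw, dark, white: arrays of shape (rows, cols, bands); `dark` and
    `white` are the dark-current and standard white-board measurements.
    Returns the calibrated reflectance cube R.
    """
    raw = raw.astype(np.float64)
    dark = dark.astype(np.float64)
    denom = white.astype(np.float64) - dark
    # Guard against division by zero in dead sensor elements.
    denom[denom == 0] = np.finfo(np.float64).eps
    return (raw - dark) / denom
```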

2.2. Feature Band Selection

The selected feature bands not only serve as the source image that provides the gradient in the subsequent fusion but are also used to search for the optimal matched patch to construct the color constraint. Thus, the feature bands need to be minimally affected by the stains in order to expose more of the information the stains cover. Although the covering effect of stains decreases at longer wavelengths, pigments tend to have high reflectance there, which reduces the separability of different types of pigments or paper. For example, in Figure 1e,f, the paper and the blue color material look very similar at a wavelength of 877.4 nm compared with 497.3 nm. This reduced separability of different color materials can make it difficult to find matches in these bands. Thus, the separability of different color materials is another factor to consider when selecting the feature bands.
If a band is less affected by the stains, the same color material has similar gray values with and without stain cover. As shown in Figure 3, below a wavelength of 700 nm the gray values of the same color material differ markedly due to the stain cover, but they become much closer as the wavelength increases. Thus, this paper uses the gray-value difference between samples with and without stain cover to determine the band sections. Because adjacent bands of hyperspectral data are highly correlated, we used the band-averaging method [28], taking the average of every five adjacent bands to reduce the computational burden. Furthermore, each band was normalized to the 0–1 range with its maximum and minimum values to unify the subsequent quantization. Under this assumption, the band sections were estimated using Equation (2):
$$\Delta R_i^j = \left| \tilde{R}_i^j - R_i^j \right|,$$
where $\tilde{R}_i^j$ and $R_i^j$ denote the average gray value of the $j$-th color material with and without stain cover at band $i$, respectively, and $\Delta R_i^j$ is the absolute difference between them.
Using the difference in Equation (2), the band section $OB_j^i$ that is less affected by stains for the $j$-th color material was obtained by keeping the bands satisfying $\Delta R_i^j \le \delta$, where the threshold $\delta$ was empirically set to 0.05 throughout the experiments presented here. A stain region may cover multiple color materials, so to ensure that all color materials beneath the stain were properly accounted for, we took the intersection of the band sections $OB_j^i$ over the color materials $j$ with Equation (3), where $m$ is the number of color material types:
$$B^i = OB_1^i \cap OB_2^i \cap \cdots \cap OB_m^i.$$
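Equations (2) and (3) amount to a thresholded difference followed by an intersection over materials. A minimal sketch, assuming the per-material mean spectra inside and outside the stain have already been computed (the function and variable names are illustrative):

```python
import numpy as np

def candidate_band_sections(stained_means, clean_means, delta=0.05):
    """Bands where every color material looks alike with and without
    stain cover (Equations (2)-(3)).

    stained_means, clean_means: arrays of shape (m_materials, n_bands)
    holding the average normalized gray value of each material with and
    without stain cover.  Returns the indices of bands whose absolute
    difference is <= delta for *all* materials (the intersection B^i).
    """
    diff = np.abs(stained_means - clean_means)   # Delta R_i^j per material
    keep = np.all(diff <= delta, axis=0)         # intersection over j
    return np.flatnonzero(keep)
```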
To ensure that the selected feature bands could separate different color materials and were weakly correlated with one another (so as to provide more information), we used the between-class variance and the correlation coefficient to determine the band combination from randomly selected samples. A total of 800 pixels were selected for each color material, distributed across both stained and non-stained regions. The rule established to determine the band combination from the band sections $B^i$ is:
$$OBP = \frac{S_{i_1} + S_{i_2} + S_{i_3}}{CC_{i_1 i_2} + CC_{i_1 i_3} + CC_{i_2 i_3}},$$
where $OBP$ is the value computed for each combination of three distinct bands according to the rule. $S_{i_1}$, $S_{i_2}$, and $S_{i_3}$ denote the between-class variance at bands $i_1$, $i_2$, and $i_3$, respectively; they describe the separability of the different color materials, and the larger the value, the better the separability. $CC_{i_1 i_2}$, $CC_{i_1 i_3}$, and $CC_{i_2 i_3}$ denote the correlation coefficients between the three bands; the smaller the correlation coefficient, the lower the data redundancy. The between-class variance is computed as in Equation (5), where $m_j$ is the number of samples of color material $j$, $\mu_i^j$ is the mean of the samples of color material $j$ at band $i$, and $\mu_i$ is the mean of all samples at band $i$. The correlation coefficient is computed as in Equation (6), where $R_i$ and $R_k$ denote the samples' gray values at bands $i$ and $k$, respectively:
$$S_i = \sum_{j=1}^{m} m_j \left( \mu_i^j - \mu_i \right) \left( \mu_i^j - \mu_i \right)^{T},$$
$$CC_{ik} = \frac{\sum \left( R_i - \mu_i \right) \left( R_k - \mu_k \right)}{\left[ \sum \left( R_i - \mu_i \right)^2 \right]^{1/2} \left[ \sum \left( R_k - \mu_k \right)^2 \right]^{1/2}}.$$
With these definitions, the three-band combination that maximizes the $OBP$ was selected as the feature bands.
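The maximization of Equation (4) can be sketched as a brute-force search over candidate band triples. The sketch below uses the scalar (single-band) form of Equation (5) and Pearson correlation for Equation (6); all names are illustrative, and the exhaustive loop is only practical for the small candidate sets produced by Equation (3):

```python
import itertools
import numpy as np

def select_feature_bands(samples, labels, candidates):
    """Pick the three-band combination maximizing OBP (Equation (4)).

    samples:    (n_pixels, n_bands) normalized gray values
    labels:     (n_pixels,) material id of each sample
    candidates: band indices from the selected band sections
    """
    candidates = list(candidates)

    def between_class_variance(i):
        # Scalar form of Equation (5) at band i.
        mu_i = samples[:, i].mean()
        return sum(samples[labels == j, i].size
                   * (samples[labels == j, i].mean() - mu_i) ** 2
                   for j in np.unique(labels))

    S = {i: between_class_variance(i) for i in candidates}
    best, best_obp = None, -np.inf
    for i1, i2, i3 in itertools.combinations(candidates, 3):
        cc = (abs(np.corrcoef(samples[:, i1], samples[:, i2])[0, 1])
              + abs(np.corrcoef(samples[:, i1], samples[:, i3])[0, 1])
              + abs(np.corrcoef(samples[:, i2], samples[:, i3])[0, 1]))
        obp = (S[i1] + S[i2] + S[i3]) / max(cc, 1e-12)  # avoid div by 0
        if obp > best_obp:
            best, best_obp = (i1, i2, i3), obp
    return best
```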

2.3. Patch-Based Color Constrained Poisson Editing to Remove Stains

2.3.1. The Principle of Poisson Editing

Poisson editing is a classical algorithm that reconstructs an image in the gradient domain [29], an idea that has been widely used in seamless image composition and cloud removal in remote sensing images [30,31]. It interpolates inward from the boundary of the target region while enforcing that the spatial variation of the reconstructed image is consistent with that of the source image; the colors of the reconstructed image change slowly from the boundary of the target region toward those of the source image. The idea is shown in Figure 4. The RGB image with stains is regarded as the target image $I_T$, and the selected feature bands are regarded as the source image $I_S$. The aim is to reconstruct the pixel values $f$ of the stain region $\Gamma$ from the target image's pixel values $f^*$ on the stain region's boundary $\partial\Gamma$ and the gradient field $V$ from the source image. To obtain an accurate and optimized reconstruction result (i.e., the solution of the unknown function $f$), the problem is formulated as:
$$\min_{f} \iint_{\Gamma} \left| \nabla f - V \right|^2, \quad \text{with } f|_{\partial\Gamma} = f^*|_{\partial\Gamma},$$
where $\nabla = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y} \right)$ is the gradient operator, which can be computed with the following finite differences:
$$\frac{\partial f_{x,y}}{\partial x} = f_{x+1,y} - f_{x,y}, \qquad \frac{\partial f_{x,y}}{\partial y} = f_{x,y+1} - f_{x,y}.$$
Equation (7) seeks a result $f$ whose gradient is as close as possible to the guidance vector field $V$, where $V$ is computed from the grayscale image converted from the source image. The solution to Equation (7) is the unique solution of the following Poisson equation with Dirichlet boundary conditions:
$$\Delta f = \operatorname{div} V \quad \text{over } \Gamma, \quad \text{with } f|_{\partial\Gamma} = f^*|_{\partial\Gamma},$$
where $\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$ is the Laplacian operator and $\operatorname{div} V = \frac{\partial v_1}{\partial x} + \frac{\partial v_2}{\partial y}$ is the divergence of the gradient vector $V = (v_1, v_2)$. The specific solution process can be found in the literature [29].
When Poisson editing is introduced directly into stain removal, it can remove the stains, since the source image captures fewer of them; however, it cannot restore the original color covered by the stains, because the target and source images correspond to different wavelengths and therefore differ greatly in color, as shown in Figure 5b. Although the stains are removed or diluted, the original color is not effectively restored: for example, the stained region in the top left corner of Figure 5a should have been filled with blue color material pixels in Figure 5b but was filled with black color material pixels instead. Bie et al. [32] improved Poisson editing by introducing new energy terms driven by user-drawn color strokes, allowing the color of the composited image to be controlled. This approach inspired us; however, to restore the paintings' original colors, we cannot casually specify the color of the stained regions. To solve this problem, we search for an optimal matched patch in the non-stained region for each pixel in the stained region of the feature bands and select some constructed pixels from the RGB image, based on the searched patch, as a color constraint for reconstructing the image.

2.3.2. Color Constraint Construction

To restore the original appearance covered by the stains, this paper first uses the gray values of the feature bands to search the non-stained region for an optimal matched patch for each pixel in the stained region, as shown in Figure 6a. Although exhaustive patch searching can obtain a global optimum, its computational cost is high, especially for large images. Considering the search time, this paper adopts the PatchMatch algorithm [33] to search for the optimal matched patch (with a patch size of 7 × 7) in the feature bands, using the sum of squared differences (SSD) as the similarity measure. Each pixel in the stained region thus obtains a matched patch, and each pixel is overlapped by several neighboring patches, as shown in Figure 6a: the target pixel, marked with a red star, is overlapped by the neighboring green patch in the stained region. Next, we obtain the locations at which each neighboring patch's match overlaps the target pixel, marked by the red circles in the non-stained region in Figure 6a. The averaged color votes [33] of the RGB image at these locations are then used to generate the color constraint for this pixel, as shown in Figure 6b. However, if all of the pixels constructed from the searched patches were selected as the color constraint, the reconstructed image would rely solely on the constructed color constraint, and the gradient from the source image would not contribute. This means that once a match error occurs, it is hard to correct with the gradient, as shown in Figure 7. The pixels marked by the red box in Figure 7a are those constructed from the searched patches; this region, which should be filled with paper pixels, is filled with a red color material due to a match error. Figure 7b shows the image reconstructed with all constructed pixels as the color constraint (i.e., based on Figure 7a), where this error persists.
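For clarity, the patch search at the heart of this step can be illustrated with a brute-force SSD scan; this is only a stand-in for the much faster PatchMatch algorithm [33] actually used in the paper, and the function name and arguments are our own:

```python
import numpy as np

def best_match_ssd(feature, center, mask, half=3):
    """Brute-force stand-in for PatchMatch: find the non-stained patch
    most similar (by SSD) to the 7x7 patch around `center`.

    feature: (H, W) gray image of a feature band
    center:  (row, col) of a pixel in the stain region
    mask:    boolean (H, W), True inside the stain region
    Returns the (row, col) center of the best matching clean patch.
    """
    H, W = feature.shape
    r0, c0 = center
    target = feature[r0 - half:r0 + half + 1, c0 - half:c0 + half + 1]
    best, best_ssd = None, np.inf
    for r in range(half, H - half):
        for c in range(half, W - half):
            window = mask[r - half:r + half + 1, c - half:c + half + 1]
            if window.any():        # candidate must lie outside the stain
                continue
            patch = feature[r - half:r + half + 1, c - half:c + half + 1]
            ssd = np.sum((patch - target) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (r, c), ssd
    return best
```

PatchMatch replaces the exhaustive double loop with random initialization and iterative propagation, reducing the cost from quadratic to near-linear in the image size while usually finding near-optimal matches.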
To balance the color constraint and the gradient, we selected only some of the constructed pixels as the final constraints, as shown in Figure 7c. Although the selected color-constraint pixels (marked with white blocks) were still located in the incorrectly matched areas, the error was reduced in the reconstructed Figure 7d, which is based on the partial color constraint. In this paper, the partial color constraint was selected through quadtree segmentation: the stained region of the feature bands was segmented into blocks containing $2^0, 2^2, 2^4, 2^6, 2^8, 2^{10}$ pixels, and a quarter of the blocks of each size (discussed in further detail below) was then randomly selected as the final constraint, as shown in Figure 7c.
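The quadtree segmentation and random block selection can be sketched as follows. This is a simplified illustration under our own assumptions (the mask is recursively split until blocks lie fully inside the stain region; block sides 1 to 32 pixels correspond to the $2^0$ to $2^{10}$ pixel counts above; names are hypothetical):

```python
import numpy as np

def quadtree_blocks(mask, min_size=1, max_size=32):
    """Split a stain mask into square blocks fully inside the mask,
    with side lengths 1..32 pixels (2^0..2^10 pixels per block).

    mask: boolean (H, W).  Returns a list of (row, col, size) blocks.
    """
    blocks = []

    def split(r, c, size):
        region = mask[r:r + size, c:c + size]
        if not region.any():
            return                                   # fully outside
        if region.all() and size <= max_size:
            blocks.append((r, c, size))              # homogeneous block
        elif size > min_size:
            h = size // 2
            for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
                split(r + dr, c + dc, h)

    side = 1
    while side < max(mask.shape):                    # power-of-two root
        side *= 2
    split(0, 0, side)
    return blocks

def pick_constraints(blocks, fraction=0.25, seed=0):
    """Randomly keep a quarter of the blocks of each size as the final
    color-constraint locations."""
    rng = np.random.default_rng(seed)
    chosen = []
    for size in {s for _, _, s in blocks}:
        group = [b for b in blocks if b[2] == size]
        k = max(1, int(round(fraction * len(group))))
        idx = rng.choice(len(group), size=k, replace=False)
        chosen.extend(group[i] for i in idx)
    return chosen
```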

2.3.3. Image Reconstruction with the Color Constraint

This paper aims to reconstruct the stained regions, while the values of the non-stained regions remain unchanged. As mentioned above, the original Poisson editing cannot restore the paintings' original colors, since the target image and the feature bands have different colors. Thus, this paper improves Poisson editing by constructing the color constraint $f'$, and the new method can be formulated as:
$$\min_{f} \iint_{\Gamma} \left| \nabla f - V \right|^2 + \lambda (f - f')^2, \quad \text{with } f|_{\partial\Gamma} = f^*|_{\partial\Gamma},$$
where $\lambda$ balances the color constraint and is set to 0.1 in this paper. The implementation of Equation (10) is discretized on the pixel grid. Let $p$ denote a pixel located in a stain-contaminated region, let $N_p$ denote the 4-connected neighbor set of pixel $p$, and let $q$ be one of its 4-connected neighbors (i.e., $q \in N_p$). $\langle p, q \rangle$ denotes a pixel pair, and $v_{pq}$ is the gradient of the source image, computed as $v_{pq} = I_S(p) - I_S(q)$. Let $f_p$ be the value of the image function $f$ at pixel $p$; the aim of this paper is to obtain $f_p$. After discretization, Equation (10) becomes Equation (11):
$$\min_{f} \sum_{\langle p,q \rangle \cap \Gamma \neq \varnothing} (f_p - f_q - v_{pq})^2 + \lambda (f_p - f'_p)^2, \quad \text{with } f|_{\partial\Gamma} = f^*|_{\partial\Gamma}.$$
Equation (11) can be further generalized into Equation (12), where $|N_p|$ denotes the number of neighbors in $N_p$. Equation (12) can be solved with conjugate gradient methods:
$$\left( |N_p| + \lambda \right) f_p - \sum_{q \in N_p \cap \Gamma} f_q = \sum_{q \in N_p \cap \partial\Gamma} f^*_q + \sum_{q \in N_p} v_{pq} + \lambda f'_p.$$
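The linear system of Equation (12) can be assembled and solved per channel with a sparse conjugate-gradient solver. A minimal sketch under our own assumptions (interior pixels have exactly 4 neighbors, the stain region does not touch the image border, and the $\lambda$ term is applied only at the selected constraint pixels; names are illustrative):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import cg

def solve_channel(target, grad_src, mask, constraint, lam=0.1):
    """Solve Equation (12) for one RGB channel with conjugate gradients.

    target:     (H, W) channel of the stained RGB image (gives f* on the
                boundary of the stain region)
    grad_src:   (H, W) gray image of the feature bands (gives v_pq)
    mask:       boolean (H, W), True inside the stain region Gamma;
                assumed not to touch the image border
    constraint: dict {(r, c): value} of partial color constraints f'
    """
    idx = {p: k for k, p in enumerate(zip(*np.nonzero(mask)))}
    n = len(idx)
    A = lil_matrix((n, n))
    b = np.zeros(n)
    for (r, c), k in idx.items():
        lam_p = lam if (r, c) in constraint else 0.0
        A[k, k] = 4 + lam_p                 # |N_p| + lambda on diagonal
        b[k] += lam_p * constraint.get((r, c), 0.0)
        for q in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            b[k] += grad_src[r, c] - grad_src[q]   # sum of v_pq
            if q in idx:
                A[k, idx[q]] = -1           # unknown neighbor in Gamma
            else:
                b[k] += target[q]           # boundary value f*
    f, _ = cg(A.tocsr(), b, atol=1e-10)
    out = target.astype(float).copy()
    for (r, c), k in idx.items():
        out[r, c] = f[k]
    return out
```

The matrix is symmetric and diagonally dominant, so conjugate gradients converge reliably; in practice the three RGB channels are solved independently with the same sparse structure.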

3. Results and Analysis

3.1. Visual Analysis

This paper focuses on the removal of stains; the stain regions used to locate the pixels that need to be reconstructed are currently marked manually. To test the effectiveness of the proposed method, two ancient paintings were used. One is 'Fishing', drawn by Ni Tian, a painter of the Qing Dynasty, as shown in Figure 8a; the other is 'Crane and Banana', drawn by Sima Zhong, also a painter of the Qing Dynasty, as shown in Figure 9a. Due to the presence of stains, both paintings have turned yellow or brown in appearance. The painting 'Fishing' mainly includes the following substances: black, blue, and white color materials in the figure's clothes; red color material on the figure's legs; brown color material on the boats; and a paper support. Its stains appear as small dots distributed across the entire painting, mainly covering the brown color material of the boat, the paper, and the blue color material of the figure's clothes. The painting 'Crane and Banana' mainly includes blue and black color materials in the banana plant and the paper. Its stains, which are large and irregular, mainly cover the blue and black color materials.
According to the established rules, the feature bands were selected as shown in Figure 8c and Figure 9c. Figure 8c corresponds to the wavelengths of 710.57, 713.76, and 723.33 nm, and Figure 9c corresponds to the wavelengths of 668.75, 671.98, and 710.57 nm for the bands R, G, and B, respectively. With the aid of feature bands, the manually marked stain regions shown in Figure 8b and Figure 9b are effectively removed or diluted, as presented in Figure 8d and Figure 9d. Furthermore, visually, this method restores the original color covered by the stains.
We also compared our method with the methods proposed by Kim et al. [11] and Criminisi et al. [20]. The method of Kim et al. was designed to remove artifacts from old documents, and the method of Criminisi et al. has been used to remove degradations from murals [16,17,18]. From the results in Figure 8e and Figure 9e, we see that the method of Kim et al. is suitable for removing small stains but does not work for large ones; compared with our method, it still retains traces of the stains, as shown in the regions marked by red dashed lines in Figure 8e and Figure 9e. With the Criminisi method, although the stains are clearly removed, the original painting content is changed, as shown by the regions marked by red dashed lines in Figure 8f and Figure 9f: the structure of the boat is destroyed in 'Fishing', and the veins of the leaves at the top left corner disappear in 'Crane and Banana'. Overall, upon visual inspection, our method works best for stain removal.

3.2. Quantitative Analysis

To verify the effectiveness of our method, we needed to evaluate the restored results quantitatively. This was difficult, since no ground-truth data exist. We therefore followed the quantitative analysis used for shadow removal [34,35], which selects samples and compares the gray values of restored pixels in the shaded regions with those of adjacent pixels in the non-shaded regions of the same land cover, since they should share similar values.
The samples to verify the methods are shown in Figure 8a and Figure 9a, respectively. For the painting ‘Fishing’, the color boxes of maroon, red, and magenta indicate the reference samples of the brown color material, paper, and blue color material with 1092, 1651, and 100 pixels, respectively. The color boxes of blue, green, and cyan indicate the restored samples of the brown color material, paper, and blue color material with 1079, 2121, and 100 pixels, respectively. For the painting ‘Crane and Banana’, the color boxes of magenta and red indicate the reference samples of blue color material and black color material with 1372 and 600 pixels, respectively. Likewise, the color boxes of cyan and green indicate the restored samples of blue color material and black color material with 1772 and 806 pixels, respectively. As there could be some nonstained pixels in the extracted stain regions, these pixels were not only selected as the reference sample of the original image but also selected as the restored sample of the restored image, as shown in the sample of black color material at the upper right corner of Figure 9a.
In this paper, we applied the root mean square error (RMSE) [34], the normalized absolute error (NAE), and the average difference (AD) [36] to quantitatively evaluate the restored results. These metrics are defined in Equation (13):
$$
\mathrm{RMSE}_j = \sqrt{\frac{\sum_{k=1}^{M_j}\left[R(x_k, y_k, b) - \bar{R}(x_s, y_s, b)\right]^2}{M_j}}, \quad
\mathrm{NAE}_j = \frac{\sum_{k=1}^{M_j}\left|R(x_k, y_k, b) - \bar{R}(x_s, y_s, b)\right|}{\sum_{k=1}^{M_j} R(x_k, y_k, b)}, \quad
\mathrm{AD}_j = \frac{\sum_{k=1}^{M_j}\left[R(x_k, y_k, b) - \bar{R}(x_s, y_s, b)\right]}{M_j}
$$
where $R(x_k, y_k, b)$ is the value of the $k$-th restored pixel of material $j$ at band $b$, $\bar{R}(x_s, y_s, b)$ is the mean value of the adjacent pixels in the nonstained area for material $j$ (regarded as the reference samples), and $M_j$ is the number of restored sample pixels of material $j$.
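As a minimal illustration, the three metrics of Equation (13) can be computed for one material at one band as follows (a sketch; the sample values and function name are hypothetical):

```python
import numpy as np

def restoration_metrics(restored, reference_mean):
    """Compute RMSE, NAE, and AD (Equation (13)) for one material at one band.

    restored: values of the restored sample pixels of material j.
    reference_mean: mean value of the adjacent nonstained reference pixels.
    """
    restored = np.asarray(restored, dtype=float)
    diff = restored - reference_mean
    rmse = np.sqrt(np.mean(diff ** 2))             # root mean square error
    nae = np.sum(np.abs(diff)) / np.sum(restored)  # normalized absolute error
    ad = np.mean(diff)                             # signed average difference
    return rmse, nae, ad

# hypothetical restored sample values against a reference mean of 0.50
rmse, nae, ad = restoration_metrics([0.50, 0.52, 0.48, 0.51], 0.50)
```

Note that AD keeps the sign of the error, which is why it can distinguish underestimation from overestimation, while RMSE and NAE cannot.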
The quantification results are shown in Table 1 and Table 2, respectively. Among these metrics, AD indicates whether the reconstructed values are underestimated or overestimated, and larger values of these metrics indicate larger reconstruction errors. In most cases, the proposed method has the lowest values in each band for the given metrics, which indicates that its results are most similar to the ‘reference samples’ and further demonstrates the effectiveness of our method. The method of Kim et al. ranks second; its residual error is mainly caused by forcing the values of the reconstructed image close to those of the original contaminated image during reconstruction, so traces of the stains remain. The method of Criminisi et al. has the largest values in most cases because the processed stain regions often cover several color materials, and it is difficult to accurately infer the inner contents from the known pixels along the border of the stain region when filling in the lacuna. When a processed region covers only one color material, for example the stain covering the blue color material in the painting ‘Fishing’, the Criminisi method yields lower values, but they are still larger than those of our method.

3.3. Results for Other Study Areas

To further verify the effectiveness of the proposed method, we additionally selected data from four scenes for experiments; the restored results are shown in Figure 10. Figure 10a,b come from the same Chinese paintings as Figure 8 and Figure 9, respectively, but the types of stains are visually different. Figure 10c,d come from other Chinese paintings in which the pigments covered by the stains differ: one is ‘the Hui Nationality Painting’ by an unknown painter, and the other is ‘Lotus Pond and Wild Goose’ by the painter Zhoujing. The method removes or dilutes the stains while largely preserving the original appearance of the paintings.

4. Discussion

4.1. Parameter Setting

The number of blocks formed by quadtree segmentation is a parameter that balances the color constraint against the gradients of the source image. The more blocks selected for the color constraint, the more the reconstructed image relies on the color constraint; conversely, the fewer blocks selected, the more the reconstructed image relies on the gradient, as shown by the region marked with a red box in Figure 11. The reconstruction using only part of the pixels as the color constraint appears more natural, and the seams between adjacent pixels are less obvious than when all pixels are used as the constraint. We explored the effect of different numbers of selected blocks using the samples of Section 3.2, setting the block numbers to all pixels, 50%, 25%, and 12.5%, respectively. Since the positions of the constraint blocks are generated randomly during reconstruction, we ran 50 tests on each of the paintings ‘Fishing’ and ‘Crane and Banana’ to reduce the influence of this randomness, and averaged the RMSE, NAE, and AD over the 50 tests.
The quantitative analysis results are shown in Table 3 and Table 4. The errors of the reconstructed results are largest in most cases when all constructed pixels are used as constraints, especially where there are matching errors in ‘Fishing’. Using only part of the constructed pixels as the constraint is therefore an effective way to improve reconstruction accuracy. However, using too few constructed pixels as the color constraint also decreases the accuracy of the reconstructed image; the highest accuracy is usually obtained when the block number is set to 25%. We therefore recommend 25% as an appropriate value to balance the color constraint against the gradients of the source image.
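The random selection of constraint blocks and the 50-test averaging described above can be sketched as follows (the helper names are hypothetical, and the quadtree segmentation itself is not shown):

```python
import random

def sample_constraint_blocks(blocks, fraction=0.25, seed=None):
    """Randomly keep a fraction of the quadtree blocks as the color
    constraint; pixels in the remaining blocks are then driven by the
    gradient term of the Poisson equation instead."""
    rng = random.Random(seed)
    k = max(1, round(len(blocks) * fraction))
    return rng.sample(blocks, k)

def averaged_error(run_once, n_runs=50):
    """Average an error metric (e.g., RMSE) over repeated random
    constraint placements to damp the effect of the random positions."""
    return sum(run_once(i) for i in range(n_runs)) / n_runs

# keep 25% of 16 hypothetical blocks as the color constraint
kept = sample_constraint_blocks(list(range(16)), fraction=0.25, seed=0)
```

Setting `fraction=1.0` corresponds to the "full pixels" case in Table 3 and Table 4, and 0.5, 0.25, and 0.125 correspond to the other three settings.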

4.2. The Effect of Feature Band Selection

Feature bands are a key factor affecting image reconstruction. First, they must capture fewer stains, since the color constraint is constructed from the feature bands. If the selected feature bands are severely contaminated by stains, a material changes its apparent color, which disturbs the search for the optimal match. Consider Figure 12, where a severely contaminated RGB image was chosen as the feature bands: the constructed pixels marked by the red box, based on the searched patches, are incorrect, which degrades the reconstruction. Second, the selected bands should offer good separability. We reconstructed the image using the band combination with the minimum OBP; the quantitative results are shown in Table 5 and Table 6. Compared with the results of Table 1 and Table 2, the metric values in all three bands are larger, indicating that the reconstructions based on the minimum-OBP bands are less accurate. This is because the matches found in different feature band images can differ, and the bands with the minimum OBP may yield less accurate or wrong patches owing to the poorer separability between color materials. Overall, feature band images with fewer stains and better separability contribute to better reconstruction.
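To illustrate separability-driven band selection, the sketch below scores every band triple by the summed pairwise distance between the mean spectra of the color materials and keeps the best one. This is only a simple separability proxy under our own assumptions; the paper's actual selection rules, including the OBP criterion, are not reproduced here.

```python
from itertools import combinations
import numpy as np

def select_feature_bands(class_means, n_select=3):
    """Pick the band combination maximizing the summed pairwise Euclidean
    distance between the mean spectra of the color materials
    (a stand-in separability score, not the paper's OBP criterion).

    class_means: array of shape (n_materials, n_bands).
    """
    class_means = np.asarray(class_means, dtype=float)
    n_bands = class_means.shape[1]
    best, best_score = None, -np.inf
    for bands in combinations(range(n_bands), n_select):
        sub = class_means[:, bands]
        # sum of distances between every pair of material mean spectra
        score = sum(np.linalg.norm(sub[i] - sub[j])
                    for i in range(len(sub)) for j in range(i + 1, len(sub)))
        if score > best_score:
            best, best_score = bands, score
    return best
```

Exhaustive search over triples is feasible here because hyperspectral cubes of paintings typically have only a few hundred bands.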

5. Conclusions

This paper proposed a patch-based color constrained Poisson editing method to remove stains from Chinese paintings according to the stains’ features. Compared with inpainting algorithms based on RGB images alone, this method exploits the additional information in the selected feature bands and does not destroy a painting’s structures when stains cover several color materials. Compared with Kim’s method, it constructs the color constraint from the searched patch, which effectively removes or dilutes the stains while restoring the painting’s original color. The method was tested on several paintings and achieved good performance. In general, this study is a new attempt at virtual restoration based on hyperspectral data. Despite these advantages, due to the limited data, further experiments are needed to explore the features of stains and the potential of hyperspectral-based restoration. In addition, the information in the three feature bands is limited; in the future, image enhancement methods such as that in [11] may be considered to integrate the information of multiple bands to aid restoration.

Author Contributions

All the authors designed the study and discussed the basic structure of the manuscript. P.Z. carried out the experiments, analyzed the data, and finished the manuscript; M.H., S.L., X.Z., and W.W. proposed suggestions to improve the quality of the paper; W.W. analyzed the data.

Funding

The research work conducted in this paper was supported by the research fund of the National Key Research and Development Program (No. 2017YFB1402105), the National Natural Science Foundation of China (No. 41171304), and the Fundamental Research Funds for Beijing University of Civil Engineering and Architecture (No. X18024), and was funded by the Beijing Advanced Innovation Center for Future Urban Design (No. UDC2016030200).

Acknowledgments

We would like to thank Professor Songnian Li from Ryerson University, Canada for his constructive suggestions that helped to improve the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Jin, H. The Investigation and Research of Preservation and Protection of Paper Cultural Relics of Zhengzhou Museum; Zhengzhou University: Zhengzhou, China, 2015.
2. Hou, M.; Zhou, P.; Lv, S.; Hu, Y.; Zhao, X.; Wu, W.; He, H.; Li, S.; Tan, L. Virtual restoration of stains on ancient paintings with maximum noise fraction transformation based on the hyperspectral imaging. J. Cult. Herit. 2018, 34, 136–144.
3. Chen, X.; Zhu, Q.; Zhang, N.; Chen, Q. Technology for cleaning and restoration of painting and calligraphy relics-the application of minimum intervention principle in paper conservation. Sci. Conserv. Archaeol. 2017, 29, 56–64.
4. Fu, X.Y.; Han, Y.; Sun, Z.J.; Ma, X.J.; Xu, Y.Q. Line-drawing enhanced interactive mural restoration for Dunhuang Mogao Grottoes. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-2/W2, 99–106.
5. Pizurica, A.; Platisa, L.; Ruzic, T.; Cornelis, B.; Daubechies, I. Digital image processing of the Ghent Altarpiece: Supporting the painting’s study and conservation treatment. IEEE Signal Process. Mag. 2015, 32, 112–122.
6. Hedjam, R.; Cheriet, M. Historical document image restoration using multispectral imaging system. Pattern Recogn. 2013, 46, 2297–2312.
7. Wu, M.; Wang, H.; Li, W. Research on multi-scale detection and image inpainting of Tang dynasty tomb murals. Comput. Eng. Appl. 2016, 52, 169–174.
8. Hanif, M.; Tonazzini, A.; Savino, P.; Salerno, E. Non-local sparse image inpainting for document bleed-through removal. J. Imaging 2018, 4, 68.
9. Pei, S.-C.; Zeng, Y.-C.; Chang, C.-H. Virtual restoration of ancient Chinese paintings using color contrast enhancement and lacuna texture synthesis. IEEE Trans. Image Process. 2004, 13, 416–429.
10. Valdiviezo-N, J.C.; Urcid, G.; Lechuga, E. Digital restoration of damaged color documents based on hyperspectral imaging and lattice associative memories. Signal Image Video Process. 2017, 11, 937–944.
11. Kim, S.J.; Deng, F.; Brown, M.S. Visual enhancement of old documents with hyperspectral imaging. Pattern Recogn. 2011, 44, 1461–1469.
12. Guillemot, C.; Le Meur, O. Image inpainting: Overview and recent advances. IEEE Signal Process. Mag. 2013, 31, 127–144.
13. Ding, D.; Ram, S.; Rodriguez, J.J. Image inpainting using nonlocal texture matching and nonlinear filtering. IEEE Trans. Image Process. 2018, 28, 1705–1719.
14. Giakoumis, I.; Nikolaidis, N.; Pitas, I. Digital image processing techniques for the detection and removal of cracks in digitized paintings. IEEE Trans. Image Process. 2005, 15, 178–188.
15. Liu, J. Intelligent Image Processing and Inpainting for Ancient Fresco Preservation; Zhejiang University: Hangzhou, China, 2010.
16. Cornelis, B.; Ružić, T.; Gezels, E.; Dooms, A.; Pižurica, A.; Platiša, L.; Cornelis, J.; Martens, M.; De Mey, M.; Daubechies, I. Crack detection and inpainting for virtual restoration of paintings: The case of the Ghent Altarpiece. Signal Process. 2013, 93, 605–619.
17. Li, C.; Wang, H.; Wu, M.; Pan, S. Automatic recognition and virtual restoration of mud spot disease of Tang dynasty tomb murals image. Comput. Eng. Appl. 2016, 52, 233–236.
18. Ardizzone, E.; Dindo, H.; Mazzola, G. A knowledge based architecture for the virtual restoration of ancient photos. Pattern Recogn. 2018, 74, 326–339.
19. Liu, H.; Lu, G.; Bi, X.; Wang, W. Hierarchical guidance strategy and exemplar-based image inpainting. Information 2018, 9, 96.
20. Criminisi, A.; Pérez, P.; Toyama, K. Object removal by exemplar-based inpainting. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003.
21. Xu, Z.; Sun, J. Image inpainting by patch propagation using patch sparsity. IEEE Trans. Image Process. 2010, 19, 1153–1165.
22. Tonazzini, A.; Salerno, E.; Abdel-Salam, Z.A.; Harith, M.A.; Marras, L.; Botto, A.; Campanella, B.; Legnaioli, S.; Pagnotta, S.; Poggialini, F.; et al. Analytical and mathematical methods for revealing hidden details in ancient manuscripts and paintings: A review. J. Adv. Res. 2019, 17, 31–42.
23. Pan, N.; Hou, M.; Lv, S.; Hu, Y.; Zhao, X.; Ma, Q.; Li, S.; Shaker, A. Extracting faded mural patterns based on the combination of spatial-spectral feature of hyperspectral image. J. Cult. Herit. 2017, 27, 80–87.
24. Herens, E.; Defeyt, C.; Walter, P.; Strivay, D. Discovery of a woman portrait behind La Violoniste by Kees van Dongen through hyperspectral imaging. Herit. Sci. 2017, 5.
25. Delaney, J.K.; Zeibel, J.G.; Thoury, M.; Littleton, R.; Morales, K.M.; Palmer, M.; de la Rie, E.R. Visible and infrared reflectance imaging spectroscopy of paintings: Pigment mapping and improved infrared reflectography. In Proceedings of O3A: Optics for Arts, Architecture, and Archaeology II, Munich, Germany, 7 July 2009.
26. Goltz, D.; Attas, M.; Young, G. Assessing stains on historical documents using hyperspectral imaging. J. Cult. Herit. 2010, 11, 19–26.
27. Green, A.A.; Berman, M.; Switzer, P.; Craig, M.D. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Trans. Geosci. Remote Sens. 1988, 26, 65–74.
28. Liu, B.; Fang, J.; Liu, X.; Zhang, L.; Zhang, B. Research on crop-weed discrimination using a field imaging spectrometer. Spectrosc. Spectr. Anal. 2010, 30, 1830–1833.
29. Pérez, P.; Gangnet, M.; Blake, A. Poisson image editing. ACM Trans. Graph. 2003, 22, 313–318.
30. Lin, C.H.; Tsai, P.H.; Lai, K.H.; Chen, J.Y. Cloud removal from multitemporal satellite images using information cloning. IEEE Trans. Geosci. Remote Sens. 2013, 51, 232–241.
31. Lin, C.H.; Lai, K.H.; Chen, Z.B.; Chen, J.Y. Patch-based information reconstruction of cloud-contaminated multitemporal images. IEEE Trans. Geosci. Remote Sens. 2013, 52, 163–174.
32. Bie, X.; Huang, H.; Wang, W. Free appearance-editing with improved Poisson image cloning. J. Comput. Sci. Technol. 2011, 26, 1011–1016.
33. Barnes, C.; Shechtman, E.; Finkelstein, A.; Goldman, D.B. PatchMatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 2009, 28, 1–11.
34. Zhou, Y.; Chen, J.; Guo, Q.; Cao, R.; Zhu, X. Restoration of information obscured by mountainous shadows through Landsat TM/ETM+ images without the use of DEM data: A new method. IEEE Trans. Geosci. Remote Sens. 2013, 52, 313–328.
35. Nan, S.; Ye, Z.; Shu, T.; Yan, Y.; Miao, X. Shadow detection and removal for occluded object information recovery in urban high-resolution panchromatic satellite images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 2568–2582.
36. Kahaki, S.M.M.; Arshad, H.; Nordin, M.J.; Ismail, W. Geometric feature descriptor and dissimilarity-based registration of remotely sensed imagery. PLoS ONE 2018, 13, e0200676.
Figure 1. Examples showing stains of Chinese paintings at different wavelengths, and the stain regions were marked by the red dashed curves. The first row displays the image from the painting ‘Fishing’: (a) RGB image; (b) image at 497.3 nm and covered with small stains; (c) image at 877.4 nm with almost invisible stains. The second row displays the image from the painting ‘Crane and Banana’: (d) RGB image; (e) image at 497.3 nm and covered with large stains; (f) image at 877.4 nm with exposed information inside the stains. The third row displays the image from the painting ‘the Hui Nationality Painting’: (g) RGB image; (h) image at 497.3 nm and covered with large stains; and (i) image at wavelength of 877.4 nm with almost invisible stains.
Figure 2. Workflow of the proposed method.
Figure 3. An example showing the comparison of the average spectrum of samples with and without stain cover: (a) the samples marked with color box in image; (b) average spectrum of samples with and without stain cover.
Figure 4. Demonstration of Poisson editing: (a) target image; (b) source image; and (c) the gradient of the selected feature bands.
Figure 5. Poisson editing to remove the stains: (a) original red–green–blue (RGB) image; (b) the processed image by fusing the RGB image and feature bands.
Figure 6. Demonstration of the color constraint construction: (a) searching for the optimal patches in the nonstained region for each pixel in the stained region (marked with the red dashed line) in the feature bands; (b) constructing the color constraint for the target pixel (marked with the red star) with the locations that matched the neighboring patch overlapping the target pixel (marked with the red circles).
Figure 7. Image reconstruction with different numbers of constructed pixels as the color constraint: (a) the constructed pixels based on the searched patch; (b) the reconstructed image with all of the constructed pixels as the color constraint; (c) the selected locations of the color constraint, marked by white blocks; and (d) the reconstructed image with selected pixels as the color constraint.
Figure 8. The restored results of the painting ‘Fishing’ with different methods: (a) RGB image, and different color boxes indicate the selected samples for quantitative analysis of Section 3.2; (b) manually extracted stain regions (marked with white); (c) selected feature bands; (d) image restored by the proposed method; (e) image restored by Kim et al.’s method; and (f) images restored by Criminisi et al.’s method.
Figure 9. The restored results of the painting ‘Crane and Banana’ with different methods: (a) RGB image, and different color boxes indicate the selected samples for quantitative analysis of Section 3.2; (b) manually extracted stain regions (marked with white); (c) selected feature bands; (d) image restored by the proposed method; (e) image restored by Kim et al.’s method; and (f) result restored by Criminisi et al.’s method.
Figure 10. The restored results for other study areas: (ad) the original images contaminated by the stains and marked by the red dashed line; (eh) the restored images with the proposed method.
Figure 11. Image reconstruction with different numbers of pixels as the color constraint: (a) the original image; (b) the constructed pixels based on the searched patch; (c) the reconstructed image with all of the constructed pixels as the color constraint; and (d) the reconstructed result based on a quarter of blocks as the color constraint.
Figure 12. Image reconstruction when severely contaminated band combinations were chosen as the feature bands: (a) feature band image; (b) the constructed pixels based on the searched patches; and (c) reconstructed image.
Table 1. Quantitative analysis by comparing the restored samples to the reference samples for the painting ‘Fishing’.

| Material | Metric | Kim (B) | Kim (G) | Kim (R) | Criminisi (B) | Criminisi (G) | Criminisi (R) | Proposed (B) | Proposed (G) | Proposed (R) |
| Brown color material | RMSE | 0.0314 | 0.0411 | 0.0528 | 0.0361 | 0.0523 | 0.0659 | 0.0282 | 0.0407 | 0.0526 |
| | NAE | 0.0917 | 0.0955 | 0.1026 | 0.0940 | 0.1175 | 0.1292 | 0.0716 | 0.0892 | 0.1005 |
| | AD | −0.0137 | −0.0075 | −0.0030 | −0.0100 | −0.0170 | −0.0233 | 0.0021 | 0.0014 | −0.0005 |
| Paper | RMSE | 0.0544 | 0.0606 | 0.0670 | 0.0593 | 0.0815 | 0.0996 | 0.0367 | 0.0472 | 0.0591 |
| | NAE | 0.1326 | 0.1148 | 0.1062 | 0.1294 | 0.1369 | 0.1378 | 0.0804 | 0.0834 | 0.0903 |
| | AD | −0.0337 | −0.0282 | −0.0206 | −0.0202 | −0.0269 | −0.0327 | −0.0041 | −0.0054 | −0.0088 |
| Blue color material | RMSE | 0.0168 | 0.0127 | 0.0075 | 0.0086 | 0.0056 | 0.0056 | 0.0050 | 0.0035 | 0.0049 |
| | NAE | 0.0560 | 0.0332 | 0.0154 | 0.0226 | 0.0121 | 0.0108 | 0.0141 | 0.0072 | 0.0074 |
| | AD | −0.0166 | −0.0122 | −0.0063 | 0.0050 | 0.0034 | 0.0038 | 0.0043 | −0.0005 | −0.0026 |
Table 2. Quantitative analysis by comparing the restored samples to the reference samples for the painting ‘Crane and Banana’.

| Material | Metric | Kim (B) | Kim (G) | Kim (R) | Criminisi (B) | Criminisi (G) | Criminisi (R) | Proposed (B) | Proposed (G) | Proposed (R) |
| Blue color material | RMSE | 0.0226 | 0.0281 | 0.0341 | 0.0220 | 0.0415 | 0.0684 | 0.0120 | 0.0226 | 0.0343 |
| | NAE | 0.1922 | 0.1469 | 0.1315 | 0.1313 | 0.1695 | 0.2167 | 0.0855 | 0.1059 | 0.1311 |
| | AD | −0.0179 | −0.0150 | 0.0029 | 0.0037 | 0.0079 | 0.0209 | −0.0033 | −0.0028 | 0.0035 |
| Black color material | RMSE | 0.0144 | 0.0224 | 0.0300 | 0.0452 | 0.0743 | 0.1009 | 0.0115 | 0.0201 | 0.0298 |
| | NAE | 0.1545 | 0.1541 | 0.1524 | 0.3213 | 0.3401 | 0.3451 | 0.1019 | 0.1194 | 0.1478 |
| | AD | −0.0047 | −0.0027 | −0.0020 | 0.0399 | 0.0624 | 0.0772 | 0.0043 | 0.0038 | −0.0008 |
Table 3. Quantitative analysis of the reconstructed results based on different numbers of pixels as the color constraint for the painting ‘Fishing’.

| Band | Block Number | Brown RMSE | Brown NAE | Brown AD | Paper RMSE | Paper NAE | Paper AD | Blue RMSE | Blue NAE | Blue AD |
| B | Full | 0.0288 | 0.0739 | 0.0048 | 0.0394 | 0.0858 | −0.0054 | 0.0063 | 0.0183 | 0.0058 |
| | 50% | 0.0284 | 0.0720 | 0.0032 | 0.0376 | 0.0814 | −0.0041 | 0.0052 | 0.0144 | 0.0045 |
| | 25% | 0.0282 | 0.0713 | 0.0019 | 0.0373 | 0.0811 | −0.0036 | 0.0046 | 0.0126 | 0.0038 |
| | 12.5% | 0.0284 | 0.0730 | −0.0004 | 0.0376 | 0.0825 | −0.0044 | 0.0043 | 0.0116 | 0.0034 |
| G | Full | 0.0409 | 0.0896 | 0.0029 | 0.0489 | 0.0857 | −0.0069 | 0.0035 | 0.0072 | 0.0014 |
| | 50% | 0.0407 | 0.0890 | 0.0020 | 0.0477 | 0.0842 | −0.0058 | 0.0034 | 0.0072 | 0.0001 |
| | 25% | 0.0406 | 0.0887 | 0.0013 | 0.0477 | 0.0833 | −0.0050 | 0.0035 | 0.0070 | −0.0001 |
| | 12.5% | 0.0408 | 0.0893 | −0.0006 | 0.0484 | 0.0865 | −0.0054 | 0.0036 | 0.0073 | −0.0008 |
| R | Full | 0.0526 | 0.1010 | −0.0004 | 0.0591 | 0.0923 | −0.0101 | 0.0046 | 0.0093 | −0.0027 |
| | 50% | 0.0526 | 0.1007 | −0.0004 | 0.0592 | 0.0907 | −0.0090 | 0.0046 | 0.0089 | −0.0024 |
| | 25% | 0.0526 | 0.1005 | −0.0004 | 0.0593 | 0.0899 | −0.0082 | 0.0047 | 0.0089 | −0.0020 |
| | 12.5% | 0.0527 | 0.1005 | −0.0005 | 0.0601 | 0.0903 | −0.0075 | 0.0050 | 0.0090 | −0.0023 |
Table 4. Quantitative analysis of the reconstructed results based on different numbers of pixels as the color constraint for the painting ‘Crane and Banana’.

| Band | Block Number | Blue RMSE | Blue NAE | Blue AD | Black RMSE | Black NAE | Black AD |
| B | Full | 0.0127 | 0.0917 | −0.0038 | 0.0119 | 0.1063 | 0.0046 |
| | 50% | 0.0125 | 0.0903 | −0.0035 | 0.0118 | 0.1056 | 0.0044 |
| | 25% | 0.0124 | 0.0893 | −0.0030 | 0.0115 | 0.1033 | 0.0042 |
| | 12.5% | 0.0122 | 0.0898 | −0.0032 | 0.0114 | 0.1039 | 0.0045 |
| G | Full | 0.0230 | 0.1068 | −0.0026 | 0.0204 | 0.1231 | 0.0045 |
| | 50% | 0.0228 | 0.1065 | −0.0027 | 0.0203 | 0.1222 | 0.0039 |
| | 25% | 0.0227 | 0.1062 | −0.0030 | 0.0201 | 0.1207 | 0.0036 |
| | 12.5% | 0.0228 | 0.1067 | −0.0033 | 0.0201 | 0.1218 | 0.0043 |
| R | Full | 0.0343 | 0.1308 | 0.0039 | 0.0296 | 0.1459 | −0.0011 |
| | 50% | 0.0343 | 0.1308 | 0.0038 | 0.0296 | 0.1452 | −0.0005 |
| | 25% | 0.0343 | 0.1308 | 0.0036 | 0.0297 | 0.1471 | −0.0008 |
| | 12.5% | 0.0343 | 0.1310 | 0.0037 | 0.0298 | 0.1483 | −0.0015 |
Table 5. Quantitative analysis of the reconstructed image based on band combinations with the minimum OBP for the painting ‘Fishing’.

| Band | Brown RMSE | Brown NAE | Brown AD | Paper RMSE | Paper NAE | Paper AD | Blue RMSE | Blue NAE | Blue AD |
| B | 0.0297 | 0.0777 | 0.0018 | 0.0414 | 0.0939 | −0.0087 | 0.0116 | 0.0345 | 0.0112 |
| G | 0.0434 | 0.0969 | 0.0021 | 0.0541 | 0.1004 | −0.0104 | 0.0119 | 0.0288 | 0.0111 |
| R | 0.0562 | 0.1096 | 0.0014 | 0.0664 | 0.1058 | −0.0124 | 0.0172 | 0.0369 | 0.0164 |
Table 6. Quantitative analysis of the reconstructed image based on band combinations with the minimum OBP for the painting ‘Crane and Banana’.

| Band | Blue RMSE | Blue NAE | Blue AD | Black RMSE | Black NAE | Black AD |
| B | 0.0123 | 0.0860 | −0.0042 | 0.0123 | 0.1139 | 0.0045 |
| G | 0.0234 | 0.1075 | 0.0030 | 0.0218 | 0.1348 | 0.0047 |
| R | 0.0371 | 0.1345 | 0.0086 | 0.0325 | 0.1677 | −0.0053 |
