Article

Evaluation of Landsat 8 OLI/TIRS Level-2 and Sentinel 2 Level-1C Fusion Techniques Intended for Image Segmentation of Archaeological Landscapes and Proxies

by Athos Agapiou 1,2
1 Department of Civil Engineering and Geomatics, Faculty of Engineering and Technology, Cyprus University of Technology, Saripolou 2-8, 3036 Limassol, Cyprus
2 Eratosthenes Centre of Excellence, Saripolou 2-8, 3036 Limassol, Cyprus
Submission received: 7 January 2020 / Revised: 27 January 2020 / Accepted: 7 February 2020 / Published: 10 February 2020

Abstract

The use of medium-resolution, open access, and freely distributed satellite images, such as those of Landsat, is still understudied in the domain of archaeological research, mainly due to restrictions of spatial resolution. This investigation aims to showcase how the synergistic use of Landsat and Sentinel optical sensors can efficiently support archaeological research through object-based image analysis (OBIA), a relatively new scientific trend in remote sensing archaeology, as highlighted in the relevant literature. Initially, the fusion of a 30 m spatial resolution Landsat 8 OLI/TIRS Level-2 image and a 10 m spatial resolution Sentinel 2 Level-1C optical image, over the archaeological site of “Nea Paphos” in Cyprus, is evaluated in order to improve the spatial resolution of the Landsat image. At this step, various known fusion models are implemented and evaluated, namely the Gram–Schmidt, Brovey, principal component analysis (PCA), and hue-saturation-value (HSV) algorithms. In addition, all four 10 m spectral bands of the Sentinel 2 sensor, namely the blue, green, red, and near-infrared bands (Bands 2 to 4 and Band 8, respectively), were assessed for each of the different fusion models. On the basis of these findings, the next step of the study focused on the image segmentation process, through the evaluation of different scale factors. The segmentation process is an important step in moving from pixel-based to object-based image analysis. The overall results show that the Gram–Schmidt fusion method based on the near-infrared band of Sentinel 2 (Band 8), with a segmentation scale factor of 70, provides the optimum parameters for detecting standing visible monuments, monitoring excavated areas, and detecting buried archaeological remains, without any significant spectral distortion of the original Landsat image. The new 10 m fused Landsat 8 image provides further spatial details of the archaeological site and depicts, through the segmentation process, important details within the landscape under examination.


1. Introduction

Object-based image analysis (OBIA) builds on widely used remote sensing techniques such as image segmentation, edge detection, feature extraction, and classification, and was originally developed in response to the recognized limitations of pixel-based image approaches [1,2]. While OBIA has been thoroughly studied in the past concerning multispectral satellite image segmentation [3,4,5,6,7], the concept has only recently been introduced in the domain of archaeological research [8]. As argued by [9], OBIA is a useful and promising tool for archaeological research; however, it is essential to support the development of approaches that integrate OBIA with archaeological experience. A workflow for archaeological object-based image analysis, designed to stimulate the development of an operational routine for object-based applications in archaeology, was recently proposed by [8].
OBIA was introduced to archaeological research as part of an image fusion technique to combine different prospection datasets [10]. Integration and fusion of various datasets should be an active area of future archaeological research [11,12], while the development of new approaches is considered a key, but difficult, topic that requires more investigation [9].
Therefore, although pixel-based analysis of satellite images is nowadays well established in the literature [9,13], with various examples [14,15,16,17,18,19,20], further research is needed regarding OBIA for archaeological studies. OBIA processing goes beyond traditional pixel-based image analysis since it groups pixels that share similar spectral properties into homogeneous objects. The latter is achieved through the image segmentation process, before any further image processing. Image segmentation is considered an important step of object-oriented analysis, whose outcomes are highly dependent on the quality of the segmentation [21]. Segmentation is based on predefined parameters, including the scale factor, and also takes into consideration the spectral properties of the pixels. Over- and under-segmentation directly influence object-oriented analysis, generating poor results in the detection of objects.
Spectral information over an archaeological site can be retrieved systematically from satellite-based sensors. The recent open access policy for the National Aeronautics and Space Administration (NASA) and United States Geological Survey (USGS) Landsat products and the European Space Agency (ESA) Copernicus Sentinel images offers a unique opportunity for researchers to work with a significant amount of earth observation data. Indeed, in recent years, a collaborative effort between ESA and NASA/USGS has provided access to satellite data at no cost [22]. Earth observation research builds upon the capacity of existing spaceborne sensors that offer different spatial, spectral, and radiometric resolutions; however, efforts are still needed to understand the potential synergies between them [9,10]. As already mentioned by Wulder et al. [23] and recently by Agapiou et al. [24], the integration of space-based remote sensors in various research studies maximizes the outcomes and supports future research, going a step further than the processing of individual datasets.
As argued by [25], the exploitation of optical Landsat and Sentinel images provides an annual global median average revisit interval of 2.9 days, with a global median minimum of 14 min (±1 min) and a global median maximum revisit interval of 7.0 days. Therefore, their synergistic use broadens their applicability across the various disciplines of remote sensing, including archaeological research. Indeed, while the revisit times of the Landsat 8 and Sentinel 2 (2A and 2B) sensors are 16 and 5 days, respectively, their integration increases the temporal resolution by roughly 5.5 and 1.7 times (16/2.9 ≈ 5.5; 5/2.9 ≈ 1.7). In addition, the exploitation of both datasets is necessary in cases where the area of interest is not visible due to cloud coverage, and alternative optical datasets therefore need to be explored.
This study aims to integrate Landsat 8 and Sentinel 2 optical datasets, two widely used and operational sensors, and investigate their synergistic use for supporting archaeological research. Although the use of Landsat or Sentinel products in archaeological research is not yet widely established, increased interest by researchers has been reported in the literature [26,27,28,29,30,31,32].
The spatial resolution of the Landsat image, with its 30 m pixel size, is sometimes too coarse to support archaeological investigations. In contrast, the Sentinel 2 images provide a higher, 10 m, spatial resolution within an archaeological area of interest. The 10 m pixel size applies to four of the 13 Sentinel 2 spectral bands, namely the blue (490 nm), green (560 nm), red (665 nm), and near-infrared (842 nm) bands.
This study exploits existing fusion techniques and aims to enhance the spatial resolution of the Landsat image by blending it with the Sentinel 2 (10 m) datasets. Several fusion algorithms and band combinations have been tested and evaluated, providing a quantitative outcome of the fusion process for the selected case study. Building on these findings, the paper explores and quantifies the impact of the fusion analysis on the image segmentation process.
The paper is organized as follows: the next section presents the archaeological site selected as a case study, the satellite images used, and the various algorithms and equations for the fusion modeling and the segmentation analysis; the results are presented in Section 3; the paper ends with some general conclusions and future work.

2. Materials and Methods

2.1. Area of Interest

The archaeological site of “Nea Paphos”, located in the western part of Cyprus, was selected as the case study. The site is enlisted as a World Heritage Monument, due to its significant archaeological findings and materials. According to written sources, the city of “Nea Paphos” was founded at the end of the 4th century B.C., while at the beginning of the following century, when the island of Cyprus became part of the Ptolemaic kingdom, “Nea Paphos” became the center of Ptolemaic administration on the island. Until the end of the 2nd century B.C., “Nea Paphos” was considered to be an important political and economic center of the region [33].
The selection of this site was based, to a large extent, on its ability to be detected by both the Landsat and Sentinel sensors. The site of “Nea Paphos” includes excavated areas, as well as interesting archaeological proxies, indicators of potential underground archaeological remains which have not yet been excavated (see [34] and Figure 1b, white arrows). The site is bounded to the west and south by the Mediterranean Sea, and to the north and east by the modern city of Paphos. The area is depicted in Figure 1, in a high-resolution aerial orthophoto image.

2.2. Datasets

For the purposes of the study, a recent Landsat 8 OLI/TIRS scene (Scene Id: LC81770362019351LGN00) and a Sentinel 2 image (Granule Id: L1C_T36SVD_A023470_20191220T083342) were downloaded from the USGS Earth Explorer platform. Both images were selected with minimum cloud coverage (less than 10%), while the acquisition times of the two images differed by only three days (17 December 2019 for the Landsat image and 20 December 2019 for the Sentinel image). The zenith and sun azimuth angles of both images were comparable, thus providing analogous solar illumination (sun azimuth ~160°) and sensor viewing geometry (zenith angle ~60°). The Landsat 8 image was downloaded at Level 2 processing, which corrects the image both in terms of geometry and radiometry, including corrections for atmospheric effects. The Sentinel 2 image was downloaded at Level 1C processing, providing top-of-atmosphere (TOA) reflectance data.
The USGS Earth Explorer provides corrected (Level 2) Landsat 8 images for the first seven spectral bands (Band 1 to Band 7), which cover the coastal aerosol, blue, green, red, near-infrared (NIR), and short-wavelength infrared (SWIR-1 and SWIR-2) parts of the spectrum, respectively. The spatial resolution of the above-mentioned spectral bands is 30 m. For the fusion needs, the Sentinel 2 Level 1C images were restricted to the available 10 m spatial resolution bands, which include Band 2 to Band 4 and Band 8 and correspond to the blue, green, red, and near-infrared parts of the spectrum.
It should be mentioned that images at these specific levels of processing (i.e., Level 2 for Landsat and Level 1C for Sentinel images) are currently provided as “ready products”, meaning that these series of datasets can be directly retrieved from earth data engines such as the USGS Earth Explorer platform, without any preprocessing by the end users. Although Sentinel 2 Level 2A (bottom-of-atmosphere (BOA) reflectance, atmospherically corrected) images are currently not provided through the USGS Earth Explorer platform, such images can be retrieved from the Sentinel Hub engine [35]. Nevertheless, these images are only distributed at a 20 m spatial resolution [36], which is not sufficient for our case study.
In our case study, the specific image (Sentinel 2 Level 1C) was acquired on a clear, non-hazy day with minimum cloud cover, thus minimizing the potential impact of the atmosphere. The contribution of the atmosphere was less than 0.2% (reflectance), based on the darkest pixel (DP) image-based atmospheric correction algorithm. In a direct comparison with the Sentinel 2 Level 2A image (of the same date, downloaded from Sentinel Hub at 20 m pixel resolution), the difference between the atmospherically corrected and uncorrected Sentinel images was estimated at 0.03% for Bands 2, 3, and 8 and 0.05% for Band 4, which was considered negligible.
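For illustration, a minimal sketch of such a darkest pixel correction is given below. It assumes the band is already a floating-point reflectance array; the use of a low percentile rather than the strict minimum (to resist sensor noise) is an assumption of this sketch, not a detail stated above.

```python
import numpy as np

def darkest_pixel_correction(band, percentile=0.01):
    """Darkest pixel (DP) atmospheric correction sketch: the reflectance of
    the darkest pixels is attributed to atmospheric path radiance and
    subtracted from the whole band. A very low percentile (0.01th) is used
    here instead of the strict minimum, as an assumed guard against noise."""
    dark = np.percentile(band, percentile)
    return np.clip(band - dark, 0.0, None)
```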

2.3. Methodology

Using the four 10 m spectral bands of the Sentinel 2 image, as the high-resolution input image, fusion pansharpening techniques were implemented to improve the spatial resolution of the Landsat image. At this step, four different pansharpening methods, namely the Gram–Schmidt, the Brovey, the principal component analysis (PCA), and the hue-saturation-value (HSV) algorithms, were implemented.
The Gram–Schmidt method is based on a general algorithm for vector orthogonalization, i.e., the Gram–Schmidt orthogonalization. The Gram–Schmidt fusion simulates the high-resolution band from the lower spatial resolution spectral bands. In general, this is achieved by averaging the multispectral bands. As the next step, a Gram–Schmidt transformation is performed for the simulated high-resolution band and the multispectral bands, where the simulated high-resolution band is employed as the first band [37,38].
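As an illustration of these steps, the following minimal Python sketch implements the detail-injection form of Gram–Schmidt pansharpening, which is equivalent to performing the forward transform, substituting the high-resolution band, and inverting. It is not the ENVI implementation; the array layout, the simple band average for the simulated band, and the histogram-matching step are assumptions of the sketch.

```python
import numpy as np

def gram_schmidt_fusion(ms, pan):
    """Minimal Gram-Schmidt pansharpening sketch (detail-injection form).

    ms  : (bands, H, W) multispectral stack, resampled to the pan grid
    pan : (H, W) high-resolution band (here, a 10 m Sentinel 2 band)
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    # 1. Simulate the low-resolution version of the high-resolution band
    #    by averaging the multispectral bands
    sim = ms.mean(axis=0)
    sim_flat = sim.ravel()
    # 2. Gram-Schmidt coefficient of each MS band against the simulated band
    gains = [np.cov(b.ravel(), sim_flat)[0, 1] / np.var(sim_flat, ddof=1)
             for b in ms]
    # 3. Match the real high-resolution band to the simulated band's statistics
    pan_adj = (pan - pan.mean()) * (sim.std() / pan.std()) + sim.mean()
    # 4. Inject the spatial detail, weighted by each band's coefficient
    detail = pan_adj - sim
    return np.stack([b + g * detail for b, g in zip(ms, gains)])
```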
Brovey transformation is a simple but widely used red, green, and blue (RGB) color fusion [39]. Brovey transform is one of the most commonly used pansharpening methods due to its high degree of spatial enhancement, speed, and ease of implementation [40]. Since this transform is intended to produce RGB images, only three spectral bands at a time can be merged from the multispectral input scene. The Brovey transform is shown in Equation (1):
Red = H * R/(R + G + B)
Green = H * G/(R + G + B)
Blue = H * B/(R + G + B)
where H, R, G, and B stand for the high-resolution, red, green, and blue bands, respectively.
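A compact sketch of Equation (1), assuming the three input bands and the high-resolution band are co-registered floating-point arrays of the same shape (the zero-sum guard is an added safety measure, not part of the equation):

```python
import numpy as np

def brovey_fusion(r, g, b, high):
    """Brovey transform (Equation (1)): rescale each band by the ratio of
    the high-resolution band to the sum of the three input bands."""
    total = r + g + b
    total = np.where(total == 0, 1e-6, total)  # guard against division by zero
    return high * r / total, high * g / total, high * b / total
```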
Principal component analysis (PCA) is a statistically-based transformation [41]. The PCA converts intercorrelated multispectral bands into a set of uncorrelated components using orthogonal reprojections of the spectral space. The high-resolution image is fused into the low-resolution multispectral bands by performing a reverse PCA transform [39,41]. The method can be simplified as follows (Equation (2)) [42]:
$F_i = \mathrm{MS}_i^{\uparrow} + v_i\,(H - \mathrm{PC}_1)$ (2)
where $F_i$ is the i-th band in the fused (pansharpened) image, $\mathrm{MS}_i^{\uparrow}$ is the upscaled i-th multispectral (MS) band, $v_i$ represents the i-th element of the most significant eigenvector, $H$ is the high-resolution image, and $\mathrm{PC}_1$ represents the first principal component of the transformation of the multispectral image.
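The substitution described above can be sketched as follows; histogram-matching the high-resolution band to the first principal component before substitution is a common refinement assumed here, not a step stated in the text:

```python
import numpy as np

def pca_fusion(ms, high):
    """PCA pansharpening sketch (Equation (2)): substitute the first
    principal component with the matched high-resolution band.

    ms   : (bands, H, W) upscaled multispectral stack
    high : (H, W) high-resolution band
    """
    bands, h, w = ms.shape
    x = ms.reshape(bands, -1).astype(np.float64)
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean
    # Eigen-decomposition of the band-to-band covariance matrix
    vals, vecs = np.linalg.eigh(np.cov(xc))
    v1 = vecs[:, -1]                      # most significant eigenvector
    pc1 = v1 @ xc                         # first principal component scores
    # Match the high-resolution band to PC1's statistics before substitution
    hf = high.ravel().astype(np.float64)
    hf = (hf - hf.mean()) * (pc1.std() / hf.std()) + pc1.mean()
    # Equation (2): F_i = MS_i + v_i * (H - PC1)
    fused = xc + np.outer(v1, hf - pc1) + mean
    return fused.reshape(bands, h, w)
```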
In the hue-saturation-value (HSV) fusion model, hue (H) defines pure color in terms of “green”, “red”, or “magenta”, while saturation (S) defines a range from pure color (100%) to grey (0%) at a constant lightness level. Finally, value (V) refers to the brightness of the color. The HSV method transforms the RGB image to the HSV color space and replaces the value band with the high-resolution image. Then, the method resamples the hue and saturation bands to the high-resolution pixel size and transforms the image back to the RGB color space. More details on all the above fusion methods can be found in [42,43]. The implementation of all these fusion methods was carried out in the ENVI Harris Geospatial Solutions v.5.1 environment.
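A minimal open-source sketch of this procedure, using scikit-image color conversions in place of the ENVI routine; the [0, 1] input scaling and the bilinear resampling order are assumptions:

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb
from skimage.transform import resize

def hsv_fusion(rgb_low, high):
    """HSV pansharpening sketch: upsample the low-resolution RGB image to
    the high-resolution grid, replace the value (V) channel with the
    high-resolution band, and convert back to RGB. Inputs in [0, 1]."""
    rgb_up = resize(rgb_low, high.shape + (3,), order=1)  # bilinear resampling
    hsv = rgb2hsv(rgb_up)
    hsv[..., 2] = high                    # swap in the high-resolution band
    return hsv2rgb(hsv)
```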
The outcomes of the fusion between the Landsat and Sentinel images were evaluated with various image quality methods. In this study, five different methods were used, namely (a) bias, (b) image entropy, (c) ERGAS (erreur relative globale adimensionnelle de synthèse), (d) RASE (relative average spectral error), and (e) RMSE (root mean squared error). Their equations are given below, i.e., Equations (3) to (7). The bias method provides the degree of deviation between the fused and the low-resolution images, while the image entropy quantifies the information content of the fused image based on Shannon's theorem. The ERGAS estimates the overall relative error between the fused and the low-resolution image, the RASE method characterizes the average performance of the image fusion for all spectral bands considered, and the RMSE measures the difference between the reference image and the fused image. More details regarding the aforementioned image quality methods can be found in [44,45]. The implementation of the quality methods was carried out in the MathWorks MATLAB R2016b environment, based on the toolbox of [45].
$\mathrm{Bias} = 1 - \dfrac{\bar{y}}{\bar{x}}$ (3)

$\mathrm{Image\ Entropy}\ (E) = -\displaystyle\sum_{i=1}^{bc} p\,\log_2(p)$ (4)

$\mathrm{ERGAS} = 100\,\dfrac{h}{l}\,\sqrt{\dfrac{1}{N}\displaystyle\sum_{k=1}^{N}\dfrac{\mathrm{RMSE}(B_k)^2}{\bar{x}_k^2}}$ (5)

$\mathrm{RASE} = \dfrac{100}{\bar{x}}\,\sqrt{\dfrac{1}{N}\displaystyle\sum_{k=1}^{N}\mathrm{RMSE}(B_k)^2}$ (6)

$\mathrm{RMSE} = \sqrt{\dfrac{\sum_{i=1}^{n}(x_i - y_i)^2}{n}}$ (7)
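For reference, Equations (3) to (7) can be written compactly as below, where x denotes a band of the original (reference) image and y the corresponding fused band; the 256-bin histogram for the entropy is an assumed discretization:

```python
import numpy as np

def rmse(x, y):
    """Equation (7): root mean squared error between reference x and fused y."""
    return np.sqrt(np.mean((x - y) ** 2))

def bias(x, y):
    """Equation (3): deviation of the fused mean from the original mean."""
    return 1.0 - y.mean() / x.mean()

def entropy(y, bins=256):
    """Equation (4): Shannon entropy of the fused image's histogram."""
    counts, _ = np.histogram(y, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                          # log2(0) is undefined
    return -np.sum(p * np.log2(p))

def ergas(x_bands, y_bands, h_over_l=10.0 / 30.0):
    """Equation (5): global relative error; h_over_l is the resolution ratio."""
    n = len(x_bands)
    s = sum(rmse(x, y) ** 2 / x.mean() ** 2 for x, y in zip(x_bands, y_bands))
    return 100.0 * h_over_l * np.sqrt(s / n)

def rase(x_bands, y_bands):
    """Equation (6): relative average spectral error over all N bands."""
    n = len(x_bands)
    mean_all = np.mean([x.mean() for x in x_bands])
    s = sum(rmse(x, y) ** 2 for x, y in zip(x_bands, y_bands))
    return (100.0 / mean_all) * np.sqrt(s / n)
```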
On the basis of the above findings, the best fusion model and 10 m spectral band were selected. The next step included the segmentation process in the ENVI Harris Geospatial Solutions v.5.1 environment. Several segmentations using the edge method were implemented with various scale factors, ranging from 10 to 100, with a step of 10. The segmentation process (edge method) produces a gradient image using the Sobel edge detection method. Then, the watershed algorithm is applied to the gradient image [46]. The watershed segmentation is a region-based method that has its origins in mathematical morphology [47]. In watershed segmentation, an image is regarded as a topographic landscape with ridges and valleys [48].
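An open-source analogue of this edge-based workflow is sketched below with scikit-image. Note that the `scale` argument here is only an illustrative stand-in for ENVI's scale factor, implemented via the h-minima transform: larger values suppress shallower gradient minima and therefore yield fewer, larger segments. It is not ENVI's parameterization.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def edge_watershed_segments(image, scale=0.07):
    """Edge-based segmentation sketch: Sobel gradient + watershed transform.

    image : 2-D floating-point band (e.g., the fused NIR band)
    scale : depth threshold for the h-minima transform; an assumed analogue
            of a segmentation scale factor (larger -> fewer, larger segments)
    """
    gradient = sobel(image.astype(np.float64))
    # Suppress gradient minima shallower than `scale`, then label the
    # remaining minima and use them as watershed markers
    markers, _ = ndi.label(h_minima(gradient, scale))
    return watershed(gradient, markers)
```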
The segmentation performance was then examined by comparing the segments produced from the 30 m Landsat 8 image with those from the fused 10 m Landsat 8 image. The first image was used as the ground truth image, while the second image was used as the slave image. This procedure was carried out once again in the MathWorks MATLAB R2016b environment, based on the toolbox of [49,50,51]. Several segmentation metrics were calculated, including accuracy (Equation (8)), sensitivity (Equation (9)), precision (Equation (10)), the Matthews correlation coefficient (MCC) (Equation (11)), the Dice index (Equation (12)), and the Jaccard index (Equation (13)). The optimum results from this analysis were also confirmed with high-resolution ground truth data, obtained through the digitization of an aerial orthophoto of the area, at a scale of 1:5000.
Accuracy = (TP + TN)/(FN + FP + TP + TN),
Sensitivity = TP/(TP + FN)
Precision = TP/(TP + FP)
MCC = (TP*TN − FP*FN)/sqrt((TP + FP)*(TP + FN)*(TN + FP)*(TN + FN))
Dice = 2*TP/(2*TP + FP + FN)
Jaccard = Dice/(2-Dice)
where TP refers to true positive, TN to true negative, FN to false negative, and FP to false positive. The findings are discussed in the context of the archaeological site of “Nea Paphos”. The overall methodology discussed earlier in this section is depicted in Figure 2.
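A sketch of Equations (8) to (13) computed from a pair of binary masks (reference and evaluated segmentation); the function name and mask representation are illustrative assumptions:

```python
import numpy as np

def segmentation_scores(truth, pred):
    """Equations (8)-(13) from binary masks: `truth` is the reference
    segmentation and `pred` the segmentation being evaluated."""
    truth, pred = truth.astype(bool), pred.astype(bool)
    tp = float(np.sum(pred & truth))      # true positives
    tn = float(np.sum(~pred & ~truth))    # true negatives
    fp = float(np.sum(pred & ~truth))     # false positives
    fn = float(np.sum(~pred & truth))     # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "precision": tp / (tp + fp),
        "mcc": (tp * tn - fp * fn)
               / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
        "dice": dice,
        "jaccard": dice / (2 - dice),     # Equation (13)
    }
```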

3. Results

In this section, the fusion between the Landsat 8 (30 m) and the Sentinel 2 (10 m) images is demonstrated, followed by the results of the segmentation sensitivity analysis.

3.1. Fusion Results

The general results of the fusion analysis between the Landsat 8 and Sentinel 2 images are shown in Figure 3. The first row of Figure 3, under the fused data label, presents the pansharpened Landsat 8 image based on the Gram–Schmidt method, the second row presents the results after the application of the Brovey method, and the third and fourth rows indicate the pansharpened Landsat 8 image upon the application of the PCA and HSV methods, respectively. The first four columns of Figure 3, under the fused data label, present the different results of the above-mentioned pansharpening methods based on the four different Sentinel 2 (10 m) spectral bands used as the high-resolution input image. As mentioned earlier, these four spectral bands include the blue (Band 2), green (Band 3), red (Band 4), and NIR (Band 8) bands of Sentinel 2. The last column of Figure 3, under the raw data label, indicates the pseudo-color composites of the Landsat 8 (30 m) and Sentinel 2 (10 m) images in the red, green, and blue (RGB), and near-infrared, red, and green (NIR-R-G) visualizations.
Compared to its original 30 m pixel resolution, the fused pansharpened Landsat 8 image has an improved spatial resolution. The fusion is capable of capturing spatial details within the archaeological site of “Nea Paphos”. This is true for all four fusion methods applied in this study. It is worth noticing that such details, within the archaeological site, were not visible in the original Landsat 30 m image (last column of Figure 3). The PCA fusion results are noticeably distinct compared with those of the other fusion algorithms.
Moreover, while similar optical results are demonstrated for the visible bands (columns 1 to 3 of Figure 3, fused data) of each fusion method, the NIR band (fourth column of Figure 3) presents some differentiation. From visual interpretation, the Gram–Schmidt method seems to provide the most evident results compared with the rest of the methods, being closest to the original Sentinel 2 (10 m) image (shown in the last column of Figure 3). In this image, the excavated areas of the archaeological site (area c of Figure 1) are visible from visual interpretation, while the eastern boundary of the site with the modern city of Paphos becomes more distinct.
A closer look at the fusion results for specific areas of the archaeological site can be found in Figure 4 (area b, Figure 1) and Figure 5 (area c, Figure 1). These areas indicate the northern defensive wall of the city and other archaeological proxies, i.e., buried archaeological remains (area b, Figure 1), as well as existing excavated areas of the site (area c, Figure 1).
As illustrated in Figure 4, the standing defensive wall (see yellow arrows in Figure 4) and the archaeological proxies (see white arrows in Figure 4) become visible in all outcomes of the fusion process compared with the original Landsat 8 image. Similar observations to those reported earlier concerning the PCA fusion method are also valid. While it is difficult through visual interpretation to assess which method and spectral band better enhance the contrast of the archaeological proxies and the standing defensive wall against the surrounding archaeological site, once again the Gram–Schmidt method seems to be the best fusion method.
Comparable observations to those of Figure 4 can also be reported for area c in Figure 5. The excavated areas of the archaeological site (see yellow arrows in Figure 5), as well as the eastern boundary of the site with the modern city of Paphos (see white arrows in Figure 5), become visible after the application of the fusion methods, while the Gram–Schmidt method once again provides the clearest (sharpest) fused image.
Following the visual inspection of the fusion results (see Figure 3, Figure 4 and Figure 5), a quantitative analysis evaluating the performance of each fusion method was carried out. The overall results of this analysis are grouped in Table 1. As mentioned in the previous section, five different image quality methods, namely the bias, image entropy, ERGAS, RASE, and RMSE, were applied. Each method compares the spectral distance between the original image (i.e., the Landsat 8 image, 30 m) and the fused image (i.e., the Landsat 8 fused image, 10 m), based on the different mathematical expressions indicated in Equations (3) to (7). Values closest to zero indicate no significant spectral difference between the two images.
As presented in Table 1, the NIR spectral band for both the Gram–Schmidt and Brovey transformations tends to give better results, with the exception of the Brovey method under the entropy quality metric. In contrast, for the PCA and HSV fusion methods, the best results are obtained for the visible spectral bands, namely the blue, green, and red bands.
However, once we compare all the results, we can observe that the Gram–Schmidt method provides the best results, i.e., the lowest values for each quality method, in relation to the rest of the fusion models. The score of the Gram–Schmidt method using the NIR spectral band of Sentinel 2 (Band 8) for the bias, entropy, ERGAS, RASE, and RMSE was estimated at 0.028, 0.360, 0.748, 3.271, and 5.058, respectively, while the same score for the Brovey pansharpening method was calculated at 0.991, 1.654, 27.364, 100.783, and 236.473.
This finding is also compatible with the visual interpretation analysis and the outcomes of Figure 3, Figure 4 and Figure 5, where this method seemed to provide the sharpest images of the archaeological site. In addition, and in alignment with the interpretation of the fused images in Figure 3, Figure 4 and Figure 5, the best 10 m spectral band of Sentinel 2 is the NIR band.
Therefore, with this evaluation analysis over the archaeological site of “Nea Paphos”, we conclude that the Gram–Schmidt fusion method, based on spectral Band 8 (NIR) of Sentinel 2, best improves the visual interpretation of the 30 m Landsat 8 image, producing a 10 m fused Landsat 8 image without any significant spectral distortion of the original.

3.2. Segmentation Analysis

According to these findings, we moved on to the next step of our analysis, investigating the sensitivity of the segmentation process of the fused image (Gram–Schmidt pansharpened Landsat 8 image) compared with the original Landsat image. The segmentation process was performed using different segmentation parameters, changing the “scale” parameter from 10 to 100, with a step of 10. The results of the segmentation process for area b of Figure 1 are shown in Figure 6 (scales 10 to 50) and Figure 7 (scales 60 to 100). Groups of pixels (segments) are visualized in both images with blue polygons.
As presented in Figure 6, the segmentations of the original Landsat 8 image for scales 10 to 40 do not capture the details of the archaeological site in this area, namely the standing defensive wall (indicated with yellow arrows in Figure 6) and the archaeological proxies (indicated with black arrows in Figure 6). The segmentation process at these scales provides almost pixel-based segmentations (over-segmentation); the segments are therefore too small and cannot capture the objects of interest.
A partially good segmentation of the original Landsat 8 image is achieved at a scale factor of 50 (last row of Figure 6), where groups of pixels are merged together; however, the over-segmentation phenomenon can still be observed. Beyond this scale factor, i.e., for scale factors 60 to 100 (see Figure 7), an under-segmentation phenomenon is visible, since the segments become too large to map the objects of interest at the archaeological site.
Therefore, the segmentation process with the original 30 m Landsat image tends to provide poor results, mainly due to its spatial resolution. In contrast, the fused Gram–Schmidt pansharpened Landsat 8 image is capable of capturing part of the defensive wall and the archaeological proxies. The defensive wall is well captured at scales 60 and 70 (see Figure 7), since the segments capture various objects of interest in this area of the archaeological site, whereas at coarser scales (beyond the factor of 70), the segments are quite extensive and not able to provide any useful information for this area (under-segmentation). Similar observations, not shown here, have also been reported for area c of Figure 1 (excavated archaeological areas).
Table 2 shows the results of the segmentation performance based on the segmentation metrics described earlier in Equations (8) to (13), along with the specificity. The segmentation performance reported in this table results from the direct comparison of the Gram–Schmidt fused Landsat image and the original Landsat image. As shown in the table, the scale factor 80 provides poor results for all metrics, whereas, for the scale factor 90, the segmentation of both images is the same (score equal to one). This is because the segmentation scale in both images is quite large, and therefore provides identical outcomes, leading to an under-segmentation process. The most interesting results are found at scale factor 70, where the accuracy, sensitivity, and precision metrics are quite high (as also reported for the smaller scales) and the Dice and Jaccard indices are the highest among all scales. Most importantly, the Matthews correlation coefficient (MCC) score at scale factor 70 (indicated in grey in Table 2) is the highest of all scale factors. The MCC is an indicator of the quality of the segmentation, as it takes into consideration all the scores from the true positive, true negative, false positive, and false negative results. When the value of MCC is one, this is an indicator of a perfect positive correlation between the segments. Conversely, when the classifier always misclassifies, the MCC takes a value of −1. The quantitative results presented in Table 2 are aligned with the visual interpretation and findings of Figure 6 and Figure 7 reported earlier.
These results suggest that the Gram–Schmidt fused Landsat image generates segments that are, on the one hand, aligned with the segments of the 30 m Landsat image (see Table 2) but, on the other hand, contain more detail, as evident in Figure 6 and Figure 7. However, a comparison of the Gram–Schmidt fused Landsat image against ground truth data is needed. For this comparison, the digitization of different areas of interest within the archaeological site of “Nea Paphos” was carried out in the ESRI ArcGIS v10.6 environment and used as a ground truth dataset. Digitized areas are shown in orange and yellow in Figure 8, based on the aerial orthophoto of the area (scale 1:5000) provided by the Department of Lands and Surveys of Cyprus (see also Figure 1). These objects included both large excavated areas, visible in the center of Figure 8 as orange regular shapes, and other archaeological proxies, indicated in yellow in Figure 8. The background of Figure 8 shows the segmentation of the Gram–Schmidt pansharpened Landsat 8 image using the NIR band of Sentinel 2. Using the ground truth areas as input, it was found that scale factor 70 provided reasonably good segmentation results based on both the Dice and Jaccard indices. The scores for these two indices were estimated at 0.687 and 0.523, respectively, which was considered reasonable given the spatial resolution (10 m) of the fused image compared with the very high detail (~1 m error) of the orthophoto.
Important details within the archaeological site of “Nea Paphos” are indicated with yellow arrows and areas a–f in both Figure 9 and Figure 10. Area (a) refers to the standing wall of the northern part of the archaeological site (area b of Figure 1), and area (b) indicates the outlines of an excavated part where several floors and frescoes can be found in situ. Areas (c) and (d) are excavated regions of the site, and area (e) refers to an archaeological proxy with an elliptical shape (possibly an amphitheater) (see more on this proxy in [34]). Finally, area (f) visualizes the eastern boundary of the archaeological site with the modern city of Paphos.
Figure 9 shows the segmentation of the Gram–Schmidt method at scale factor 70 over the high-resolution aerial orthophoto image (Figure 9, left) and over the segmented Landsat 30 m pixel resolution image (Figure 9, right). As demonstrated, the segmentation at this scale effectively captures the defensive wall at the northern part of the archaeological site (arrow a), as well as other areas within the “Nea Paphos” site. The fusion of the Landsat 8 image improves the segmentation process compared with the direct segmentation of the 30 m Landsat 8 image at the same scale factor. The large segments observed in the Landsat 8 image (Figure 9b), which remove details and geometrical shapes of the archaeological site, can now be partially recovered in the Gram–Schmidt fused Landsat image. The segmentation difference between the original Landsat 8 image and the fused Gram–Schmidt pansharpened image, for a scale factor of 70, is shown in Figure 10.

4. Conclusions

The segmentation process is of great importance to the object-oriented analysis of multispectral satellite images. This paper investigated and evaluated the impact of the fusion of freely distributed satellite images, such as those of Landsat 8 and Sentinel 2, in order to improve, as a first step, the spatial resolution of the former and, consequently, to evaluate the segmentation performance of the fused images based on different scale factors.
The Gram–Schmidt pansharpening method, the near-infrared 10 m spectral band of Sentinel 2 (namely Band 8), and scale factors of approximately 50 to 70 were the best parameters for the segmentation process over the archaeological site of “Nea Paphos”. This conclusion was supported both by the visual interpretation of the overall results and by the quantitative analysis carried out and demonstrated in Table 1 and Table 2. While a single segmentation method was implemented here, other segmentation algorithms and strategies could be applied to improve the overall performance of the fused datasets. In our study, the watershed transformation and the Sobel filter were used for the segmentation process at various scale factors; these are, however, strongly sensitive to image noise. Therefore, future research could focus in this direction, minimizing the impact of noise (e.g., due to radiometric or atmospheric effects) and improving the segmentation analysis.
Although the results reported in this study are only representative of the specific archaeological site of interest, the overall methodological framework could be adopted at any other archaeological site for which Landsat and Sentinel images can be found. Important aspects of these datasets are their open access distribution policy, with no acquisition cost, and their short revisit times (16 days for Landsat 8 and five days for the Sentinel 2 sensors), which are ideal for capturing archaeological sites and landscapes in specific time windows (subject to cloud coverage). The proposed methodology is only applicable, however, to images acquired after 2015, when the Sentinel 2 optical sensor datasets (2A and 2B) became available alongside the Landsat 8 images. Since Sentinel 2 Level 1C datasets are not atmospherically corrected, this needs to be taken into consideration prior to the fusion. Images taken with minimum cloud coverage, on non-hazy days, are expected to have minimum atmospheric impact, and this can be evaluated using simple image-based atmospheric correction methods such as the darkest pixel method.
Segmentation of Landsat images fused with the Sentinel 2 (10 m) bands is expected to better support object-oriented (OBIA) classifications and increase the detection rate of archaeological proxies, thereby expanding the scope of the raw 30 m resolution Landsat images.
Future work is expected to focus in this direction, aiming to further study the synergistic use of different sensors, integrated to extract better results for satellite-based archaeological prospection and management, as well as to work in other periods of time, when archaeological proxies can be better enhanced [52].

Funding

The results are part of the project “Synergistic Use of Optical and Radar data for cultural heritage applications” (PLACES), under the Research and Innovation Foundation grant agreement CULTURE/AWARD-YR/0418/0007, funded by the Republic of Cyprus.

Acknowledgments

The author would like to acknowledge the “CUT Open Access Author Fund” for covering the open access publication fees of the paper. The author also acknowledges the use of the Landsat 8 image, courtesy of the U.S. Geological Survey, and of the Copernicus Sentinel 2 (ESA) image, also provided through the U.S. Geological Survey.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16.
2. Hay, G.J.; Castilla, G. Object-based image analysis: Strengths, weaknesses, opportunities and threats (SWOT). In Proceedings of the 1st International Conference on Object-based Image Analysis (OBIA 2006), Salzburg, Austria, 4–5 July 2006.
3. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, A.B. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177.
4. Ming, D.; Zhang, X.; Wang, M.; Zhou, W. Cropland Extraction Based on OBIA and Adaptive Scale Pre-estimation. Photogramm. Eng. Remote Sens. 2016, 82, 635–644.
5. Liu, M.; Yu, T.; Gu, X.; Sun, Z.; Yang, J.; Zhang, Z.; Mi, X.; Cao, W.; Li, J. The Impact of Spatial Resolution on the Classification of Vegetation Types in Highly Fragmented Planting Areas Based on Unmanned Aerial Vehicle Hyperspectral Images. Remote Sens. 2020, 12, 146.
6. Brinkhoff, J.; Vardanega, J.; Robson, A.J. Land Cover Classification of Nine Perennial Crops Using Sentinel-1 and -2 Data. Remote Sens. 2020, 12, 96.
7. De Castro, A.I.; Peña, J.M.; Torres-Sánchez, J.; Jiménez-Brenes, F.M.; Valencia-Gredilla, F.; Recasens, J.; López-Granados, F. Mapping Cynodon Dactylon Infesting Cover Crops with an Automatic Decision Tree-OBIA Procedure and UAV Imagery for Precision Viticulture. Remote Sens. 2020, 12, 56.
8. Magnini, L.; Bettineschi, C. Theory and practice for an object-based approach in archaeological remote sensing. J. Archaeol. Sci. 2019, 107, 10–22.
9. Luo, L.; Wang, X.; Guo, H.; Lasaponara, R.; Zong, X.; Masini, N.; Wang, G.; Shi, P.; Khatteli, H.; Chen, F.; et al. Airborne and spaceborne remote sensing for archaeological and cultural heritage applications: A review of the century (1907–2017). Remote Sens. Environ. 2019, 232, 111280.
10. Opitz, R.; Herrmann, J. Recent Trends and Long-standing Problems in Archaeological Remote Sensing. J. Comput. Appl. Archaeol. 2018, 1, 19–41.
11. Agapiou, A.; Sarris, A. Beyond GIS Layering: Challenging the (Re)use and Fusion of Archaeological Prospection Data Based on Bayesian Neural Networks (BNN). Remote Sens. 2018, 10, 1762.
12. Agapiou, A.; Lysandrou, V.; Sarris, A.; Papadopoulos, N.; Hadjimitsis, D.G. Fusion of Satellite Multispectral Images Based on Ground-Penetrating Radar (GPR) Data for the Investigation of Buried Concealed Archaeological Remains. Geosciences 2017, 7, 40.
13. Agapiou, A.; Lysandrou, V. Remote Sensing Archaeology: Tracking and mapping evolution in scientific literature from 1999–2015. J. Archaeol. Sci. 2015, 4, 192–200.
14. Alexakis, D.; Sarris, A.; Astaras, T.; Albanakis, K. Detection of neolithic settlements in Thessaly (Greece) through multispectral and hyperspectral satellite imagery. Sensors 2009, 9, 1167–1187.
15. Alexakis, D.; Sarris, A.; Astaras, T.; Albanakis, K. Integrated GIS, remote sensing and geomorphologic approaches for the reconstruction of the landscape habitation of Thessaly during the Neolithic period. J. Archaeol. Sci. 2011, 38, 89–100.
16. Traviglia, A.; Cottica, D. Remote sensing applications and archaeological research in the Northern Lagoon of Venice: The case of the lost settlement of Constanciacus. J. Archaeol. Sci. 2011, 38, 2040–2050.
17. Gallo, D.; Ciminale, M.; Becker, H.; Masini, N. Remote sensing techniques for reconstructing a vast Neolithic settlement in Southern Italy. J. Archaeol. Sci. 2009, 36, 43–50.
18. Lasaponara, R.; Masini, N. Beyond modern landscape features: New insights in the archaeological area of Tiwanaku in Bolivia from satellite data. Int. J. Appl. Earth Obs. Geoinform. 2014, 26, 464–471.
19. Agapiou, A.; Lysandrou, V.; Hadjimitsis, D.G. Optical Remote Sensing Potentials for Looting Detection. Geosciences 2017, 7, 98.
20. Orengo, H.; Petrie, C. Large-Scale, Multi-Temporal Remote Sensing of Palaeo-River Networks: A Case Study from Northwest India and its Implications for the Indus Civilisation. Remote Sens. 2017, 9, 735.
21. Hossain, D.M.; Chen, D. Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2019, 150, 115–134.
22. USGS EROS Archive-Sentinel-2. Available online: https://www.usgs.gov/centers/eros/science/usgs-eros-archive-sentinel-2?qt-science_center_objects=0#qt-science_center_objects (accessed on 5 January 2020).
23. Wulder, M.A.; Hilker, T.; White, J.C.; Coops, N.C.; Masek, J.G.; Pflugmacher, D.; Crevier, Y. Virtual constellations for global terrestrial monitoring. Remote Sens. Environ. 2015, 170, 62–76.
24. Agapiou, A.; Alexakis, D.D.; Hadjimitsis, D.G. Potential of Virtual Earth Observation Constellations in Archaeological Research. Sensors 2019, 19, 4066.
25. Li, J.; Roy, D.P. A Global Analysis of Sentinel-2A, Sentinel-2B and Landsat-8 Data Revisit Intervals and Implications for Terrestrial Monitoring. Remote Sens. 2017, 9, 902.
26. Agapiou, A.; Alexakis, D.D.; Sarris, A.; Hadjimitsis, D.G. Evaluating the Potentials of Sentinel-2 for Archaeological Perspective. Remote Sens. 2014, 6, 2176–2194.
27. Chyla, J.M. How Can Remote Sensing Help in Detecting the Threats to Archaeological Sites in Upper Egypt? Geosciences 2017, 7, 97.
28. Bachagha, N.; Wang, X.; Luo, L.; Li, L.; Khatteli, H.; Lasaponara, R. Remote sensing and GIS techniques for reconstructing the military fort system on the Roman boundary (Tunisian section) and identifying archaeological sites. Remote Sens. Environ. 2020, 236, 111418.
29. Rayne, L.; Bradbury, J.; Mattingly, D.; Philip, G.; Bewley, R.; Wilson, A. From Above and on the Ground: Geospatial Methods for Recording Endangered Archaeology in the Middle East and North Africa. Geosciences 2017, 7, 100.
30. Tapete, D.; Cigna, F. Appraisal of Opportunities and Perspectives for the Systematic Condition Assessment of Heritage Sites with Copernicus Sentinel-2 High-Resolution Multispectral Imagery. Remote Sens. 2018, 10, 561.
31. Zanni, S.; De Rosa, A. Remote Sensing Analyses on Sentinel-2 Images: Looking for Roman Roads in Srem Region (Serbia). Geosciences 2019, 9, 25.
32. Fenger-Nielsen, R.; Hollesen, J.; Matthiesen, H.; Andersen, E.A.S.; Westergaard-Nielsen, A.; Harmsen, H.; Michelsen, A.; Elberling, B. Footprints from the past: The influence of past human activities on vegetation and soil across five archaeological sites in Greenland. Sci. Total Environ. 2018, 654, 895–905.
33. Department of Antiquities of Cyprus. Available online: www.mcw.gov.cy/mcw/DA/DA.nsf (accessed on 6 January 2020).
34. Agapiou, A. Enhancement of Archaeological Proxies at Non-Homogenous Environments in Remotely Sensed Imagery. Sustainability 2019, 11, 3339.
35. Sentinel Hub. Available online: https://www.sentinel-hub.com (accessed on 23 January 2020).
36. L2A Product Definition Document. Available online: http://step.esa.int/thirdparties/sen2cor/2.8.0/docs/S2-PDGS-MPC-L2A-PDD-V14.5-v4.7.pdf (accessed on 23 January 2020).
37. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6,011,875, 4 January 2000.
38. Sarp, G. Spectral and spatial quality analysis of pan-sharpening algorithms: A case study in Istanbul. Eur. J. Remote Sens. 2014, 47, 19–28.
39. Zhang, Y. Understanding image fusion. Photogramm. Eng. Remote Sens. 2004, 70, 657–661.
40. Johnson, A.B.; Scheyvens, H.; Shivakoti, R.B. An ensemble pansharpening approach for finer-scale mapping of sugarcane with Landsat 8 imagery. Int. J. Appl. Earth Obs. Geoinf. 2014, 33, 218–225.
41. Basaeed, E.; Bhaskar, H.; Al-Mualla, E.M. Comparative analysis of pan-sharpening techniques on DubaiSat-1 images. In Proceedings of the 16th International Conference on Information Fusion, Istanbul, Turkey, 9–12 July 2013.
42. Nikolakopoulos, G.K. Comparison of nine fusion techniques for very high resolution data. Photogramm. Eng. Remote Sens. 2008, 74, 647–659.
43. Ehlers, M.; Klonus, S.; Åstrand, J.P.; Rosso, P. Multi-sensor image fusion for pansharpening in remote sensing. Int. J. Image Data Fusion 2010, 1, 25–45.
44. Ghassemian, H. A review of remote sensing image fusion methods. Inf. Fusion 2016, 32 Pt A, 75–89.
45. Vaiopoulos, A.D. Developing Matlab scripts for image analysis and quality assessment. In Proceedings of SPIE 8181, Earth Resources and Environmental Remote Sensing/GIS Applications II; International Society for Optics and Photonics: Bellingham, WA, USA, 2011; p. 81810B.
46. L3 Harris ENVI, Segmentation Algorithms Background. Available online: https://www.harrisgeospatial.com/docs/BackgroundSegmentationAlgorithm.html (accessed on 6 January 2020).
47. Serra, J. Image Analysis and Mathematical Morphology; Academic Press: London, UK, 1982.
48. Preim, B.; Botha, C. Chapter 4—Image Analysis for Medical Visualization. In Visual Computing for Medicine, 2nd ed.; Morgan Kaufmann: Burlington, MA, USA, 2014; pp. 111–175. doi:10.1016/B978-0-12-415873-3.00004-3.
49. Thanh, D.N.H.; Sergey, D.; Surya Prasath, V.B.; Hai, N.H. Blood Vessels Segmentation Method for Retinal Fundus Images Based on Adaptive Principal Curvature and Image Derivative Operators. In Proceedings of the International Workshop, Moscow, Russia, 2019; pp. 211–218.
50. Thanh, D.N.H.; Erkan, U.; Prasath, V.B.S.; Kumar, V.; Hien, N.N. A Skin Lesion Segmentation Method for Dermoscopic Images Based on Adaptive Thresholding with Normalization of Colour Models. In Proceedings of the 2019 6th International Conference on Electrical and Electronics Engineering (ICEEE), Istanbul, Turkey, 16–17 April 2019.
51. Thanh, D.N.H.; Prasath, V.B.S.; Hieu, L.M.; Hien, N.N. Melanoma Skin Cancer Detection Method Based on Adaptive Principal Curvature, Colour Normalisation and Feature Extraction with the ABCD Rule. J. Digit. Imaging 2019.
52. Agapiou, A.; Hadjimitsis, D.G.; Sarris, A.; Georgopoulos, A.; Alexakis, D.D. Optimum Temporal and Spectral Window for Monitoring Crop Marks over Archaeological Remains in the Mediterranean Region. J. Archaeol. Sci. 2013, 40, 1479–1492.
Figure 1. (a) The archaeological site of “Nea Paphos”, located in the western part of Cyprus; (b) area at the northern part of the site, indicating the standing defensive wall (yellow arrows), as well as other archaeological proxies in the area (white arrows); and (c) area at the central part of the archaeological site, where significant archaeological excavations have been carried out in the past (background: orthoimage from the Department of Lands and Surveys, Cyprus).
Figure 2. Overall methodology implemented in the study.
Figure 3. Fusion results for the whole archaeological site of “Nea Paphos” using the four different pansharpening methods (Gram–Schmidt, Brovey, PCA, and HSV, in the first to fourth rows, respectively) based on the different Sentinel 2 (10 m) spectral bands used as the high-resolution input image (columns). The last column indicates the original 30 m Landsat 8 and 10 m Sentinel 2 pixel resolution images in the red, green, and blue (RGB), and near-infrared, red, and green (NIR-R-G) pseudo composites.
Figure 4. Fusion results for area b of Figure 1 at the archaeological site of “Nea Paphos”, using the four different pansharpening methods (Gram–Schmidt, Brovey, PCA, and HSV, in the first to fourth rows, respectively) based on the different Sentinel 2 (10 m) spectral bands used as the high-resolution input image (columns). The last column indicates the original 30 m Landsat 8 and 10 m Sentinel 2 pixel resolution images in the red, green, and blue (RGB), and near-infrared, red, and green (NIR-R-G) pseudo composites. The standing defensive wall is indicated with yellow arrows, and the archaeological proxies are indicated with white arrows.
Figure 5. Fusion results for area c of Figure 1 at the archaeological site of “Nea Paphos”, using the four different pansharpening methods (Gram–Schmidt, Brovey, PCA, and HSV, in the first to fourth rows, respectively) based on the different Sentinel 2 (10 m) spectral bands used as the high-resolution input image (columns). The last column indicates the original 30 m Landsat 8 and 10 m Sentinel 2 pixel resolution images in the red, green, and blue (RGB), and near-infrared, red, and green (NIR-R-G) pseudo composites. Excavated areas are indicated with yellow arrows, and the boundary of the archaeological site with the modern city of Paphos is indicated with white arrows.
Figure 6. Segmentation results for area b of Figure 1 using a scale factor from 10 to 50, with a step of 10, applied to the Gram–Schmidt pansharpened Landsat 8 image (left) and the original Landsat 8 image (right). Groups of pixels (segments) are visualized in both images with a blue polygon. Similar results for scales 60 to 100 are shown in Figure 7. The standing defensive wall is indicated with yellow arrows and the archaeological proxies are indicated with black arrows.
Figure 7. Segmentation results for area b of Figure 1 using a scale factor from 60 to 100, with a step of 10, applied to the Gram–Schmidt pansharpened Landsat 8 image (left) and the original Landsat 8 image (right). Groups of pixels (segments) are visualized in both images with a blue polygon. Similar results for scales 10 to 50 are shown in Figure 6. The standing defensive wall is indicated with yellow arrows and the archaeological proxies are indicated with black arrows.
Figure 8. Various ground truth areas digitized for evaluating the segmentation performance of the fused Landsat image. In the background, the Gram–Schmidt pansharpened Landsat 8 image using the NIR band of Sentinel 2 is shown.
Figure 9. (a) Segmentation of the fused Landsat 8, after the application of the Gram–Schmidt method (indicated with yellow polygons), over the high-resolution aerial photograph of the archaeological site of “Nea Paphos”; (b) segmentation of the original Landsat 30 m pixel resolution image. For the details of areas (a) to (f), refer to the text.
Figure 10. Difference of the segmentation image of the original Landsat 8 image and the fused Gram–Schmidt pansharpened image, at the scale factor 60. For the details of areas (a) to (f) refer to the text.
Table 1. Overall statistics for each fusion method and each spectral band used for integrating the Landsat 8 and Sentinel 2 images. The five different image quality methods implemented in this study are also provided. Lowest values (closest to zero) indicate the best performance.

Gram–Schmidt Pansharpening Method
           Blue band    Green band   Red band     NIR band
Bias       0.168        0.147        0.153        0.028
Entropy    1.069        0.624        0.669        0.360
ERGAS      3.780        3.028        3.171        0.748
RASE       15.029       12.486       13.013       3.271
RMSE       21.991       17.015       17.866       5.058

Brovey Pansharpening Method
           Blue band    Green band   Red band     NIR band
Bias       0.992        0.994        0.994        0.991
Entropy    1.400        1.455        1.517        1.654
ERGAS      27.407       27.445       27.445       27.364
RASE       100.947      101.068      101.080      100.783
RMSE       236.901      237.231      237.235      236.473

PCA Pansharpening Method
           Blue band    Green band   Red band     NIR band
Bias       0.200        0.198        0.199        0.212
Entropy    1.677        1.710        1.752        1.704
ERGAS      5.996        5.956        5.974        6.043
RASE       22.495       22.267       22.318       22.512
RMSE       40.386       39.629       39.717       40.098

HSV Pansharpening Method
           Blue band    Green band   Red band     NIR band
Bias       0.745        0.634        0.635        0.646
Entropy    3.657        3.792        3.838        4.142
ERGAS      20.702       17.623       17.592       17.662
RASE       78.379       68.131       67.980       68.201
RMSE       179.337      152.969      152.574      152.793
Table 2. Segmentation analysis results after the application of the Gram–Schmidt pansharpening method.

             Scale 10   Scale 20   Scale 30   Scale 40   Scale 50   Scale 60   Scale 70   Scale 80   Scale 90
Accuracy     0.953      0.951      0.944      0.952      0.888      0.889      0.973      0.553      1.000
Sensitivity  0.965      0.943      0.920      0.925      0.837      0.837      0.964      0.553      1.000
Precision    0.964      0.983      0.993      0.998      0.996      0.995      0.989      1.000      1.000
MCC          0.893      0.893      0.884      0.905      0.787      0.788      0.946      -          -
Dice         0.964      0.962      0.955      0.960      0.910      0.909      0.976      0.712      1.000
Jaccard      0.931      0.927      0.914      0.922      0.834      0.834      0.954      0.553      1.000
Specificity  0.928      0.966      0.987      0.996      0.994      0.992      0.986      -          -
