Article

Comparing Sentinel-1 Surface Water Mapping Algorithms and Radiometric Terrain Correction Processing in Southeast Asia Utilizing Google Earth Engine

1
Earth System Science Center, The University of Alabama in Huntsville, 320 Sparkman Drive, Huntsville, AL 35805, USA
2
SERVIR Science Coordination Office, NASA Marshall Space Flight Center, 320 Sparkman Drive, Huntsville, AL 35805, USA
3
Department of Atmospheric and Earth Science, The University of Alabama in Huntsville, 320 Sparkman Drive, Huntsville, AL 35805, USA
4
Deltares, Boussinesqweg 1, 2629 HV Delft, The Netherlands
5
Spatial Informatics Group, LLC, 2529 Yolanda Ct., Pleasanton, CA 94566, USA
6
SERVIR-Mekong, SM Tower, 24th Floor, 979/69 Paholyothin Road, Samsen Nai Phayathai, Bangkok 10400, Thailand
7
Asian Disaster Preparedness Center, SM Tower, 24th Floor, 979/69 Paholyothin Road, Samsen Nai Phayathai, Bangkok 10400, Thailand
8
Google, Inc., 1600 Amphitheatre Parkway, Mountain View, CA 94043, USA
9
Geospatial Analysis Lab, University of San Francisco, 2130 Fulton St., San Francisco, CA 94117, USA
*
Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(15), 2469; https://doi.org/10.3390/rs12152469
Submission received: 24 May 2020 / Revised: 21 July 2020 / Accepted: 29 July 2020 / Published: 1 August 2020

Abstract:
Satellite remote sensing plays an important role in the monitoring of surface water for historical analysis and near real-time applications. Because of its cloud-penetrating capability, many studies have focused on providing efficient, high-quality methods for surface water mapping using Synthetic Aperture Radar (SAR). However, few studies have explored how the choice of SAR pre-processing steps affects the results of subsequent surface water mapping algorithms. This study leverages Google Earth Engine to compare two unsupervised histogram-based thresholding surface water mapping algorithms applied to two distinctly pre-processed Sentinel-1 SAR datasets, one with and one without terrain correction. The resulting surface water maps from the four different collections were validated with user-interpreted samples from high-resolution Planet Scope data. The overall accuracy of the four collections ranged from 92% to 95%, with Cohen’s Kappa coefficients ranging from 0.7999 to 0.8427. The thresholding algorithm that samples a histogram based on water edge information performed best, with a maximum accuracy of 95%. While accuracies varied between methods, no statistically significant difference was found between the errors of the different collections. Furthermore, the surface water maps generated from the terrain-corrected data yielded intersection over union (IoU) metrics of 95.8%–96.4%, showing greater spatial agreement, compared to 92.3%–93.1% for the non-terrain-corrected data. Overall, algorithms using terrain-corrected data yielded higher overall accuracy and greater spatial agreement between methods. However, the differences between the approaches presented in this paper were not found to be significant, suggesting that both are valid for generating accurate surface water maps.
High accuracy surface water maps are critical to disaster planning and response efforts, thus results from this study can help inform SAR data users on the pre-processing steps needed and its effects as inputs on algorithms for surface water mapping applications.

Graphical Abstract

1. Introduction

Satellite remote sensing offers a means to monitor water resources and their change over time across large areas. Monitoring these variations is critical in monsoonal regions, such as Southeast Asia, where annual variation in rainfall results in hydrologic extremes that affect local communities [1,2,3,4]. As more people are negatively affected by floods in Asia than anywhere else in the world, there is a need for increased hydrologic monitoring to guide flood response efforts [5]. Traditionally, ground-based stream gauges are used to monitor water level or streamflow/discharge in major water bodies; however, these observations fail to provide a large-scale overview of conditions in regions where stream gauges are sparsely located. Furthermore, stream gauge-based monitoring provides a simple means to identify floods based on pre-determined water level thresholds set for individual gauge locations, but fails to capture the spatial extent of flooding, a critical component in disaster response and damage assessments [6]. To address these shortcomings, many methods have been developed leveraging satellite remote sensing to map surface water extent, particularly during floods. However, to date, few automated surface water mapping methods have been implemented due to uncertainties in the large-scale accuracy of these methods and the need for robust computational resources. Fortunately, the rise of cloud-based data providers and computational resources such as Google Earth Engine (GEE) offers a means to address these computational challenges, enabling satellite image processing to be scaled.
Many satellite remote sensing surface water mapping studies and applications focus on the use of optical sensors, such as Landsat [7,8], Sentinel-2 [9], the Moderate Resolution Imaging Spectroradiometer (MODIS) [2,10,11], and the Visible Infrared Imaging Radiometer Suite (VIIRS) [12,13]. These optical water mapping methods include spectral information and thresholds [14,15,16], decision tree approaches [17,18], historical time-series analysis [19,20], and machine learning/deep learning [21,22]. Even with well-defined methods and readily available data for surface water mapping applications, optical sensors can only be used during the day and are hindered by clouds that obscure surface observations, especially in monsoon-driven environments [23]. Often, the peak surface water extent during flood events occurs when there is cloud cover, resulting in data gaps that limit the use of optical sensors for flood monitoring applications [5].
To address the issue of cloud cover, Synthetic Aperture Radar (SAR) has been employed, as its signals penetrate clouds and can thus be used in all weather conditions and during the day or night [24]. There are many examples of SAR data being used for surface water mapping efforts [25,26,27,28], and it is considered to be the most useful space-based remote sensing technology for detecting surface water in the presence of clouds [29]. With the 2014 launch of the Copernicus Sentinel-1 satellite by the European Space Agency (ESA), consistent data acquisition with free, publicly accessible data has enabled SAR to be applied more frequently to a variety of research areas [30,31]. The capabilities of SAR technology also allow for continuous monitoring of ground features and their changes over time [32]. Surface water mapping methods for SAR imagery are similar to those for optical imagery, employing thresholding [33,34], decision tree classifiers [35], active contour modeling [36], time series information [37], and statistical/machine learning [38,39,40] methods. While SAR imagery provides unobstructed views of the Earth, it is susceptible to image artifacts caused by radio frequency interference, terrain effects, heavy precipitation, and speckle noise, thus requiring substantial pre-processing [41]. Furthermore, SAR imagery relies on the specular reflectance of open water for detection, which can lead to errors of commission with other smooth surfaces such as pavement and errors of omission when surface water is obscured beneath vegetation canopies [42]. Therefore, careful consideration is needed in pre-processing SAR imagery and in the application of automated surface water mapping methods due to these challenges.
As previously mentioned, SAR users are often required to select and undertake a series of complex pre-processing steps to convert the data from a Level-1 SAR product into a suitable higher-level product for scientific analysis. Standardized pre-processing steps for Level-1 data include updating the orbit state vectors, thermal and border noise removal, calibration to either γ0, σ0, or β0 units, Range Doppler terrain correction (also known as geometric terrain correction or geocoding), and conversion from linear backscatter units to logarithmic decibels (dB) [43]. Efforts are underway to provide Analysis Ready Datasets (ARD) using these standardized pre-processing steps [44] through select venues: the Swiss Data Cube (http://www.swissdatacube.org), Digital Earth Africa (https://www.digitalearthafrica.org), and GEE (https://earthengine.google.com). Further optional processing steps include radiometric terrain correction (RTC), also known as terrain flattening, and speckle filtering [45]. Truckenbrodt et al. [46] evaluated how different software suites and external data sources used during pre-processing affected the computed SAR backscatter values, finding that the processing workflow to generate SAR RTC ARDs is subject to the user’s preference. Studies have focused on developing automated workflows for Sentinel-1 using the standard pre-processing with speckle filtering to produce surface water maps [47,48]; however, no RTC process was performed. To the best of the authors’ knowledge, few studies have explored the effects of SAR pre-processing, specifically RTC, on subsequent automated algorithms such as surface water mapping.
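As a concrete illustration of the final standardized step, the conversion from linear backscatter to decibels can be sketched as follows (a minimal example; the function name and the `floor` clamp value are assumptions made here to keep the logarithm defined, not part of any particular toolbox):

```python
import numpy as np

def linear_to_db(backscatter, floor=1e-10):
    """Convert linear backscatter (e.g., sigma0) to decibels (dB).

    Non-positive values that can occasionally appear after calibration
    are clamped to `floor` (an assumed value) so log10 stays defined.
    """
    return 10.0 * np.log10(np.maximum(np.asarray(backscatter, dtype=float), floor))
```

For example, a calibrated backscatter of 0.01 in linear units maps to −20 dB, the range in which the water/non-water thresholds discussed later in this paper fall.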
Automation is required for any systematic mapping approach to produce rapid, robust water maps and avoid subjective, time-consuming, and expensive manual image interpretation. This analysis was conducted to determine how differently pre-processed SAR data used as inputs into automated surface water mapping approaches influence the accuracy of the generated surface water maps. Furthermore, this analysis focused on evaluating unsupervised threshold-based surface water mapping algorithms in an effort to provide accurate results with little computational burden in a near real-time application context. Therefore, the goals of this study were to (1) assess the performance of different automated image-based methods for extracting surface water, and (2) compare their performance across SAR imagery processed with and without RTC. We accomplished these goals by leveraging GEE [49] to apply two automated image-based thresholding methods to extract surface water from Sentinel-1 data processed with and without RTC. The generated Sentinel-1-derived water maps were compared and validated against manually interpreted high-resolution Planet Scope data. The results from this analysis can help inform SAR data users on how pre-processing steps affect subsequent automated surface water mapping algorithms.

2. Materials and Methods

2.1. Study Area and Period

For this study we focused our analysis in Southeast Asia, particularly on the Upper Irrawaddy river system of Northern Myanmar and the floodplains of the Lower Mekong basin in the Cambodian Tonlé Sap sub-watershed. The Irrawaddy River system of Myanmar starts at the confluence of the Nmai and Mali rivers at an elevation of 160 m above sea level and proceeds south for over 1900 km, crossing complex terrain including the Shan highland and Irrawaddy plain, to the Andaman Sea [50]. Starting on the Tibetan Plateau, the Mekong river system moves south for 4800 km across six different countries. The entire basin is home to over 70 million people [51]. Across the Lower Mekong floodplains, the average elevation ranges from 0.5 to 1.2 m above sea level [52]. Both Cambodia and Myanmar have a tropical monsoon climate [53], where 75%–90% of each country’s annual rainfall occurs in the summer monsoon months from June to September [54,55]. While monsoonal rains are essential for agriculture, supplying water for irrigation and alluvial sediments, these events can lead to severe flooding, greatly impacting people, homes, and ecosystems [56].
This study specifically focused on recent 2019 conditions. For Cambodia, the study period was constrained to May–December 2019, observing both the wet and dry seasons. For Myanmar, the study period was limited to July–August 2019, capturing the summer wet season. These time frames were selected due to the availability of both Sentinel-1 imagery and high-quality, high-resolution Planet Scope imagery with limited cloud cover, which was used for validation. Figure 1 depicts the study area and the associated coverage of Planet Scope imagery used for this analysis.

2.2. Data Used

2.2.1. Sentinel-1 Data

The Sentinel-1 sensor is a C-band SAR that operates in multiple acquisition modes at different ground sampling distances (GSD). For this study we used the Sentinel-1 Interferometric Wide (IW) swath mode at 10 m GSD, which offers single and dual polarization options of vertical transmitting with vertical receiving (VV) and vertical transmitting with horizontal receiving (VH). The two polarizations interact differently with water: VV polarization responds to the roughness of the surface, which can change due to wind, whereas VH polarization responds to the presence of a canopy or vegetation. Twele et al. [47] performed an analysis of automated surface water mapping and found that the VV polarization performed best. Furthermore, we investigated the use of VV and VH for surface water mapping in the Lower Mekong region and found that using VH polarization produces a larger number of false positives. Thus, only the VV polarization data were used in this study. Two pre-processed versions of the ESA Copernicus Open Access Hub Sentinel-1 Level-1 IW Ground Range Detected (GRD) dataset were used for this study. The first version, provided through GEE, was the Sentinel-1 Level-1 GRD ARD derived from the ESA data on Copernicus. To generate the GEE GRD dataset, each tile is processed via the Sentinel-1 SNAP7 Toolbox (Sentinel Application Platform, http://step.esa.int/main/toolboxes/snap/) using the standard pre-processing steps to provide a radar backscatter dataset in dB units. The Shuttle Radar Topography Mission (SRTM) [57] is used for the geometric terrain correction. For comparing different pre-processing steps, the second version leveraged Sentinel-1 images accessed from Copernicus and processed locally with the SNAP7 toolbox using the same standard pre-processing steps as the GEE GRD dataset. However, in this case an additional step of RTC [58] was applied to reduce the topographic effects in the SAR imagery.
This dataset was uploaded to GEE as an ImageCollection asset to be used for this study. A Lee-sigma speckle filter [59] was then applied to both datasets before surface water mapping to eliminate the granular noise that can occur from the interference of waves reflected from many underlying scatterers. For this comparison we defined the two different pre-processed datasets as “GRD” for the default GEE data and “RTC” for the dataset processed locally, with an additional RTC step, and uploaded to GEE.

2.2.2. MERIT DEM

As surface water occurs at the lowest relative point of a local drainage system, it is common practice to use elevation information to constrain surface water detection algorithms to plausible areas [60]. We used the Multi-Error-Removed Improved-Terrain (MERIT) Digital Elevation Model (DEM) [61], a DEM derived from the SRTM and the Advanced Land Observing Satellite (ALOS) World 3D DEM (AW3D) [62], with absolute bias, stripe noise, speckle noise, and tree height bias removed from the input data to produce an improved representation of elevation, particularly in major floodplains and flooded forests. The MERIT DEM was used to derive a Height Above Nearest Drainage (HAND) model [63] and to focus the analysis on areas less than 30 m in height relative to the nearest drainage.
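As a minimal sketch of how such a HAND constraint can be applied (the function name and array-based masking are illustrative, not the paper’s GEE implementation):

```python
import numpy as np

def mask_by_hand(water_mask, hand, max_height=30.0):
    """Keep detected water only where the height above nearest drainage
    is below `max_height` metres, removing detections in terrain where
    surface water accumulation is implausible."""
    return np.logical_and(np.asarray(water_mask, dtype=bool),
                          np.asarray(hand, dtype=float) < max_height)
```

In practice this kind of mask is the post-processing step applied to the binary water maps described in Section 2.3.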

2.2.3. Planet Scope

Planet Scope is a constellation of more than 120 cubesats, named Doves, that acquire visible-near infrared optical data at high resolution (approximately 3 m GSD). The Planet Scope constellation can acquire imagery globally every day, providing valuable data for large-scale spatial analyses requiring high resolution in both space and time. A total of 500 Planet Scope images within Myanmar and Cambodia, covering an area of 50,140 km², were utilized in the analysis. Each Planet Scope image had a corresponding Sentinel-1 SAR acquisition for the same date and region. The Planet Scope data were used as a validation dataset through visual interpretation of a surface water/no surface water classification for a given sample within each scene. More information regarding the sampling and image interpretation for generating the validation dataset is provided in Section 2.4 below.

2.3. Surface Water Mapping

Two unsupervised surface water mapping algorithms using Otsu’s method to perform automatic thresholding were applied to the GRD and RTC SAR products. Otsu’s method is a histogram-based thresholding approach in which the inter-class variance between two classes, a foreground and a background class, is maximized [64]. Otsu’s method assumes bimodality in the histogram of pixel values (in this case dB); however, it can produce sub-optimal results if the image has more than two distinct classes. The two surface water mapping algorithms used in this study attempt to constrain the Otsu thresholding by sampling histogram values from areas that are more likely to exhibit a bimodal water/no-water histogram. The two algorithms, herein referred to as “Bmax Otsu” and “Edge Otsu”, differ in how they define the areas from which histogram values are sampled. More information on the algorithms is provided in Sections 2.3.1 and 2.3.2. To generate the surface water maps for evaluation, the GRD and RTC datasets were used as inputs into both algorithms. Post-processing of the water extent maps included elevation thresholding, where only observations below 30 m of the HAND model were kept, to remove erroneous results for both the Bmax Otsu and Edge Otsu approaches. The final flood map generated is a binary “water”/“non-water” image hosted as a GEE ImageCollection asset and subsequently used in the accuracy assessment within the GEE platform. An overview schematic of the workflow for generating the surface water maps is provided in Figure 2 and explained in more detail in the following sub-sections. Source code used and scripts illustrating the methods are available as Supplementary Material.
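Otsu’s method itself is straightforward to implement from a sampled histogram. The sketch below is a generic textbook implementation (not the authors’ GEE code); it returns the histogram bin center that maximizes the between-class variance:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold maximizing between-class variance (Otsu, 1979).

    `values` is a 1-D array of backscatter samples (e.g., in dB); the
    method assumes a roughly bimodal histogram, as discussed in the text.
    """
    hist, edges = np.histogram(values, bins=bins)
    prob = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])

    w0 = np.cumsum(prob)              # cumulative weight of the low class
    w1 = 1.0 - w0                     # weight of the high class
    mu0 = np.cumsum(prob * centers)   # cumulative (unnormalized) class mean
    mu_total = mu0[-1]
    # Class means; empty classes produce NaN/inf, suppressed and zeroed below.
    with np.errstate(divide="ignore", invalid="ignore"):
        mean0 = mu0 / w0
        mean1 = (mu_total - mu0) / w1
        bcv = w0 * w1 * (mean0 - mean1) ** 2   # between-class variance
    bcv = np.nan_to_num(bcv)
    return centers[np.argmax(bcv)]
```

For a histogram with water backscatter clustered near −20 dB and land near −8 dB, the returned threshold falls between the two modes, which is the behavior both algorithms in this study rely on.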

2.3.1. Bmax Otsu Algorithm

The Bmax Otsu algorithm is a method to extract a bimodal histogram from imagery as input into Otsu thresholding. This algorithm was originally developed by Cao et al. [65] for operational surface water mapping using Sentinel-1 and was implemented in GEE for this study. Specifically, the algorithm subsets the image into a grid using a chessboard segmentation with a user-defined spatial resolution (0.1° in this study), after which each subregion is checked for a bimodal histogram using a maximum normalized between-class variance (BCV), or Bmax, test [66]. To calculate Bmax, an initial estimate of water/no-water for the segment is needed to estimate the probabilities of the individual classes; in this study we set the initial threshold to −16 dB. The initial threshold was selected using a sensitivity analysis conducted by sampling histograms from images with differing terrain and hydrological conditions in Cambodia and Myanmar. Another threshold determines whether the Bmax value indicates bimodality; in this study we considered a Bmax value greater than 0.65 to be bimodal, based on a study comparing bimodality thresholds for images with a variety of distributions [66]. However, this bimodal Bmax threshold is variable, and values up to 0.75 have been used. For a detailed explanation of the Bmax algorithm, readers are referred to Cao et al. [65]. The selected bimodal regions are then used to sample a histogram of the dB values, which is used to calculate a segmentation threshold using Otsu’s method. The final surface water map is a binary image where dB values less than the threshold are classified as water and values greater than the threshold as non-water.
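The Bmax test for a single tile can be sketched as below. This is a simplified interpretation, assuming the BCV from the provisional −16 dB split is normalized by the tile’s total variance so that values approach 1 for strongly bimodal tiles; the exact normalization in Cao et al. [65] may differ:

```python
import numpy as np

def bmax(values, init_threshold=-16.0):
    """Normalized between-class variance (Bmax) for a candidate tile.

    A provisional split at `init_threshold` (dB) gives tentative
    water/non-water classes; their between-class variance is divided
    by the total variance, so values near 1 suggest a bimodal tile.
    """
    values = np.asarray(values, dtype=float)
    total_var = values.var()
    if total_var == 0:
        return 0.0
    low = values[values < init_threshold]
    high = values[values >= init_threshold]
    if low.size == 0 or high.size == 0:
        return 0.0  # one class is empty: no evidence of bimodality
    w0, w1 = low.size / values.size, high.size / values.size
    bcv = w0 * w1 * (low.mean() - high.mean()) ** 2
    return bcv / total_var

# A tile would be kept for histogram sampling when bmax(tile) > 0.65.
```

Under this formulation, a tile containing both open water (near −20 dB) and land (near −8 dB) scores well above the 0.65 cutoff, while a land-only tile scores near zero.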

2.3.2. Edge Otsu Algorithm

The concept of the Edge Otsu algorithm was initially pioneered by Donchyts et al. [20], in which an index highlighting water was used to extract feature edges in the imagery with a Canny edge filter [67] (under the assumption that the edges delineate water); the algorithm then buffers the detected edges and samples histogram values within the buffer as input for Otsu thresholding. This method was expanded upon here by providing an initial segmentation threshold to create a binary image, alleviating edges being detected from other classes present in SAR imagery (e.g., urban areas or forests). Similar to the Bmax Otsu algorithm, we set the initial threshold to −16 dB. The detected edges were then filtered by length, with only edge features over 200 m in length considered valid water edges; this was done to reduce misclassifications arising from the initial threshold. The extracted edges were then buffered by 3000 m (1500 m on either side), and the dB values within the buffered edges were used to construct the histogram. Finally, the histogram sampled from the buffered edges was used to calculate a threshold using Otsu’s method, which was applied to the entire image, where dB values less than the threshold are water and values greater than the threshold are non-water.
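The edge-buffer sampling step can be sketched with a simple binary dilation. This is an assumption-laden stand-in: a square pixel window approximates the paper’s metric buffer, and the edge detection and length filtering are taken as already done; the actual implementation uses GEE geometry operations:

```python
import numpy as np

def buffer_mask(edges, radius):
    """Dilate a binary edge mask by `radius` pixels using a square
    window, approximating a metric buffer around detected water edges."""
    edges = np.asarray(edges, dtype=bool)
    padded = np.pad(edges, radius, mode="constant")
    out = np.zeros_like(edges)
    # OR together all shifted copies of the mask within the window.
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out |= padded[dy:dy + edges.shape[0], dx:dx + edges.shape[1]]
    return out

def sample_buffered(image_db, edges, radius):
    """Return the dB values inside the buffered edges, ready to feed
    into histogram construction and Otsu thresholding."""
    return np.asarray(image_db, dtype=float)[buffer_mask(edges, radius)]
```

Because the sampled pixels straddle the water/land boundary, the resulting histogram is far more likely to be bimodal than one sampled over the full scene.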

2.4. Evaluation Design

Surface water maps were generated using both the Bmax Otsu and Edge Otsu methods with both the GRD and RTC products for 19 dates (Table 1). The resulting surface water maps were analyzed to evaluate their accuracy against a validation dataset. Specifically, the validation dataset was generated using a simple random sampling approach distributed over the Planet Scope imagery associated with the individual flood dates. The validation samples were classified using a digital ocular sampling approach [68,69]. An individual sampler performed a visual interpretation of the Planet Scope imagery, constraining the interpretation to a 3 × 3 pixel neighborhood equating to the approximate resolution of the surface water products generated (10 m GSD). This interpretation survey estimated the presence/absence of the following classes: cloud, water, and not water. The interpreter followed a decision tree approach for classifying the validation samples, shown in Figure 3. Points classified as clouds were removed, leaving 3787 sample points available for the validation.
The validation samples were then used to extract values of water/no-water from the generated surface water maps, and statistical metrics were calculated. Specifically, we used a stratified KFold [70] to extract sub-samples of the larger validation dataset for estimating a distribution of errors while retaining the distribution of water/non-water samples from the original dataset. From the 10 generated sub-samples we calculated the following metrics: overall accuracy, Cohen’s Kappa coefficient, and F1-score. These metrics were selected as they are widely used to evaluate classification methods [71,72]. We further evaluated the methods using the precision-recall ratio [73], which provides insight into the trade-off between precision (error of commission) and recall (error of omission) rates for each method, where a value of 1 indicates that precision and recall are equal and balanced. To assess the statistical difference between methods and datasets, we implemented McNemar’s test [74], a statistical test of the significance of differences in classifier errors. This method statistically compares error matrices by testing the differences in the false positives and false negatives made by either method [75]. The test statistic follows a χ2 distribution with one degree of freedom; when the statistic is significant at the 5% level, the two methods can be said to differ in their performance.
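As a sketch, the discordant-pair form of McNemar’s statistic can be computed as follows (the continuity correction used here is a common convention and an assumption on our part; the paper does not state which variant was used):

```python
import numpy as np

def mcnemar_statistic(ref, pred_a, pred_b):
    """McNemar's chi-squared statistic (with continuity correction)
    comparing the errors of two classifiers against reference labels.

    Only the discordant pairs matter: samples that one method gets
    right and the other gets wrong.
    """
    ref, pred_a, pred_b = map(np.asarray, (ref, pred_a, pred_b))
    a_right = pred_a == ref
    b_right = pred_b == ref
    n01 = np.sum(a_right & ~b_right)   # A correct, B wrong
    n10 = np.sum(~a_right & b_right)   # A wrong, B correct
    if n01 + n10 == 0:
        return 0.0                     # identical error patterns
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)
```

The resulting statistic is compared against the χ2 critical value with one degree of freedom (3.84 at the 5% level); values below it, as found for all method pairs in this study, indicate no statistically significant difference in errors.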

3. Results and Discussion

The two water mapping algorithms, Bmax Otsu and Edge Otsu, were applied to the two pre-processed Sentinel-1 datasets, GRD and RTC, across all of the scenes and dates where the validation dataset coincided. This produced four collections of surface water maps for validation. The validation points were sampled from the four surface water map collections and tabulated from all samples for interpretation (Table 2). To display the distribution of accuracy metrics from the KFold sampling for the collections, we use violin plots, in which a box plot and a kernel density estimate are combined, to illustrate the metrics in Figure 4. Overall, the Edge Otsu algorithm performed best, with an accuracy range of 92%–94% using the GRD dataset and 94%–95% using the RTC dataset. The two surface water maps using the Bmax Otsu method performed similarly, with accuracy distributions of roughly 92%–95.5% using the GRD dataset (only one sub-sample iteration had an accuracy near 96%) and 92%–93.5% using the RTC dataset. This highlights that the Edge Otsu algorithm can achieve high accuracies; however, the differences in accuracy ranges using different input data indicate the algorithm’s sensitivity to data inputs, whereas the Bmax Otsu algorithm performs similarly regardless of the dataset used as input. These findings are consistent across the overall accuracy, Cohen’s Kappa, and F1-score metrics. Furthermore, the precision-recall ratio is above 1 for the Edge Otsu method using both the GRD and RTC datasets, meaning there are higher errors of omission than errors of commission. The Bmax Otsu algorithm resulted in precision-recall ratios of 0.95 and 0.99 for the GRD and RTC datasets, respectively, suggesting a balance of errors of commission and omission, where using the RTC dataset provides slightly better results.
Lastly, across all metrics the collections using the RTC data product have a tighter distribution of the metrics when compared to the methods using the GRD dataset. For example, the standard deviation for the RTC products is 0.33% and 0.36% for the Bmax and Edge Otsu algorithms, respectively, as compared to 0.77% and 0.50% for the algorithms using the GRD dataset. This suggests that the algorithms using the RTC product can produce more stable results across space and time as compared to the GRD product. However, when the McNemar’s test was performed to assess statistically significant differences in errors between methods, no statistically significant differences were found (Table 3). When comparing the differences between the Bmax algorithm using different datasets, we see that there is no difference (McNemar’s test = 1) showing that the algorithm has the same number of false positives and false negatives using either dataset. Whereas when comparing the Edge Otsu algorithm, the RTC product differs the greatest compared to the other methods. This suggests that there is a slight difference in error between the Edge Otsu-RTC product and other methods but it is, however, not statistically significant.
While statistically there are no differences between the methods’ resulting errors, there are some considerations regarding the use of the Bmax Otsu and Edge Otsu algorithms with the GRD and RTC datasets. These considerations concern the sources of variation in the two algorithms, particularly how the two methods differ in sampling histograms for input into the Otsu thresholding function. The Bmax Otsu algorithm identifies chessboard tiles within the image that are bimodal and uses all the pixels within those tiles for the histogram. This approach captures broad-scale bimodality within an image and depends on parameters such as the chessboard tile size and the Bmax threshold value for determining bimodality. Choosing too large a chessboard tile resolution and too low a Bmax threshold can skew histograms towards high dB values; however, being too strict with the parameters without inspecting the data may result in few pixels being sampled. The Edge Otsu algorithm samples a histogram using a buffer around identified edges. This focuses the sampling on localized areas, depending on the length of edges considered valid and the buffer size. Choosing too small a buffer will produce a non-bimodal histogram, while choosing a very large buffer can skew the histogram towards higher dB values. Whereas the Edge Otsu algorithm is relatively more sensitive to data inputs owing to its more localized sampling (as seen in the accuracy assessment) and is influenced by the buffer size, the Bmax Otsu algorithm relies on definitions of parameters and thresholds that can impact the results. The Bmax Otsu algorithm has fewer parameters to define in total and will therefore be easier to quickly transfer to other regions with reasonable results, while the opposite might be true for the Edge Otsu algorithm. On the other hand, this also means that the Edge Otsu algorithm can be more finely tuned, so its performance could potentially be improved and better calibrated to regional conditions.
To illustrate the differences between the surface water extents generated using the two methods across the two datasets, we provide two examples illustrating the surface water map extents and agreement between the four collections for two different areas and times. The maps are compared using the Intersection over Union (IoU) ratio [77] to calculate the percent overlap (or agreement) of the maps relative to the total extent from both maps. Figure 5 highlights a case in Cambodia over the Mekong River near Phnom Penh for 5 October 2019, with the corresponding Planet Scope data for comparison. In this example, slight differences between the water extents estimated by the Bmax Otsu and Edge Otsu algorithms using the GRD product are observed. Specifically, the Bmax Otsu and Edge Otsu algorithms have an IoU of 92.3% in water area using the GRD dataset, showing good agreement; however, the Bmax Otsu algorithm shows a larger extent by 7.6% compared to the Edge Otsu extent. This overestimation by the Bmax Otsu method compared to the Edge Otsu method can be seen in the red pixels of the difference map (Figure 5c). The overestimation using the Bmax algorithm is also seen using the RTC dataset as the input data, however with better agreement, with an IoU of 95.8% (Figure 5f). When comparing the same method across the different input datasets, the Bmax Otsu method had an IoU agreement of 89.2%, and the water extent estimated using the GRD product was greater by 5.4% (Figure 5g). Similar results are seen using the Edge Otsu algorithm applied to both datasets, resulting in an IoU agreement of 89.6%, where the water extent generated using the GRD product had an area greater by 2% compared to the water extent with the RTC product (Figure 5h). This case in Cambodia shows that using the Bmax Otsu algorithm or the GRD dataset resulted in larger estimated surface water extents, although high spatial agreement can be achieved using the RTC data as input into both methods.
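The IoU agreement between two binary water maps reduces to a simple ratio of pixel counts; a minimal sketch (the empty-union convention of returning 1.0 is our assumption):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union between two binary water masks."""
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(mask_a, mask_b).sum()
    if union == 0:
        return 1.0  # two empty masks agree perfectly (assumed convention)
    return np.logical_and(mask_a, mask_b).sum() / union
```

An IoU of 1.0 means the two extents coincide exactly; the 87%–96% values reported for the map comparisons above correspond to substantial but imperfect overlap.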
Figure 6 further highlights the resulting surface water maps from both algorithms using both datasets for a case over the Irrawaddy River in Myanmar for 28 July 2019. For this case, the RTC data product had the higher agreement between the two algorithms (Figure 6f), with an agreement of 96.4%, compared to an agreement of 93.1% using the GRD dataset. The Myanmar case shows similarities with the Cambodia case in that the RTC dataset has greater spatial agreement in surface water mapping results and the Bmax Otsu method estimates a surface water area greater than the Edge Otsu algorithm. However, when comparing how the different input data compare within each method, the Edge Otsu algorithm had slightly better agreement between the two input datasets, with an agreement of 89.4% for the Myanmar case (Figure 6g), compared to the Bmax Otsu algorithm with an agreement of 87.2% (Figure 6h). While no statistical difference was found between the errors produced using the different methods and datasets, these cases provide additional perspective on how the different algorithms’ resulting surface water extents compare with each other, even though the spatial differences are subtle.
The differences between the methods and generated water maps are also apparent in the sampled histograms. Figure 7 displays the histograms sampled from the GRD data (green) and RTC data (blue), with the thresholds calculated by the two algorithms for the two cases presented above. The Bmax Otsu algorithm produced similar histograms and threshold values using both the GRD and RTC datasets (Figure 7a,c). The larger peak at higher dB values from the Bmax Otsu algorithm results from sampling over the entire chessboard segmentation area (here a 0.1° resolution grid) that is considered bimodal but can contain a large area of land. Even so, the resulting histogram's skew toward higher values can still yield adequate surface water results, as demonstrated by the accuracy metrics. Furthermore, the thresholds generated by the Bmax Otsu algorithm are similar for both input datasets, indicating that the RTC pre-processing step has little effect on the Bmax Otsu-derived water maps. Again, this is because the relatively large chessboard segments that are sampled may not capture the local-scale adjustments introduced by the RTC process. The histogram produced by the Edge Otsu algorithm with the GRD data did not show a distinctly bimodal shape; only one prominent peak, near −17 dB, is present in both cases (Figure 7b). A smaller local maximum near −8 dB appears for the Myanmar case, which explains the lower threshold for the Cambodia case, leading to less surface water area and higher errors of omission. In contrast, the histogram from the Edge Otsu algorithm with the RTC dataset has local maxima near −19 and −7 dB (Figure 7d), leading to a threshold closer to those calculated with the Bmax Otsu algorithm.
The difference in the histogram distribution for the Edge Otsu algorithm supports the finding that the Edge Otsu algorithm is more sensitive to the input data than the Bmax Otsu algorithm. Overall, these results suggest that the RTC product is superior for sampling bimodal histograms particularly in the case of the Edge Otsu algorithm.
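Both algorithms ultimately select a threshold from a backscatter histogram via Otsu's between-class variance criterion [64]. The following sketch illustrates that core step on synthetic bimodal dB values (the modes, widths, and bin settings are assumptions for illustration, not the study's sampled histograms):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic bimodal backscatter sample (dB): water mode near -18 dB, land near -8 dB
samples = np.concatenate([rng.normal(-18, 1.5, 4000),
                          rng.normal(-8, 2.0, 6000)])
counts, edges = np.histogram(samples, bins=100, range=(-25, 0))
centers = 0.5 * (edges[:-1] + edges[1:])

def otsu_threshold(counts, centers):
    """Return the bin center that maximizes between-class variance (Otsu, 1979)."""
    counts = counts.astype(float)
    w0 = np.cumsum(counts)                 # weight of class below each candidate split
    w1 = counts.sum() - w0                 # weight of class above
    cum = np.cumsum(counts * centers)
    mu0 = cum / np.where(w0 > 0, w0, 1)    # class means (guarded against empty classes)
    mu1 = (cum[-1] - cum) / np.where(w1 > 0, w1, 1)
    var_between = np.where((w0 > 0) & (w1 > 0), w0 * w1 * (mu0 - mu1) ** 2, 0.0)
    return centers[int(np.argmax(var_between))]

t = otsu_threshold(counts, centers)
print(f"Otsu threshold: {t:.1f} dB")  # falls between the two modes
```

The sensitivity discussed above enters through how the histogram is sampled (whole chessboard segments vs. buffered edges), not through this thresholding step itself.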
While this analysis of the two cases provides different snapshots of the two algorithms' performance using the differently processed SAR datasets, commonalities exist. First, using the RTC dataset as input to the algorithms yields more consistent results in terms of threshold calculation and agreement in surface water extent (as seen in the highlighted Cambodia and Myanmar cases). Furthermore, the Edge Otsu algorithm achieved slightly higher accuracies with the RTC product, whereas accuracy changed little when the RTC product was used with the Bmax Otsu algorithm. In terms of algorithm performance, the Bmax Otsu method produced similar accuracies across the validation dataset regardless of the data input, while the Edge Otsu method performed best with the RTC dataset, suggesting the Bmax Otsu method is less sensitive to the input data (as seen in Figure 7). Although the accuracies of the approaches differed slightly, no statistically significant difference was found between their errors.
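The significance of paired classification errors can be assessed with McNemar's test [74], which depends only on the validation points where the two methods disagree. A minimal sketch follows; the discordant counts are hypothetical numbers chosen for illustration, not values from this study:

```python
def mcnemar_chi2(b: int, c: int) -> float:
    """McNemar chi-square statistic with continuity correction.

    b and c are the discordant cells of the paired 2x2 table:
    b = validation points method A classified wrong but method B right,
    c = points method A classified right but method B wrong.
    """
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical discordant counts for two water maps scored against the
# same validation points (illustrative only)
chi2 = mcnemar_chi2(18, 24)
print(round(chi2, 3))  # 0.595, far below 3.84 (chi-square cutoff for p < 0.05, 1 d.o.f.)
```

A statistic below the 3.84 cutoff, as here, means the two methods' error rates cannot be distinguished at the 5% level, mirroring the finding above.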

3.1. Caveats and Limitations

While this study attempts to provide a robust evaluation of SAR data pre-processed with and without an RTC step, along with the results from different surface water mapping algorithms, there are some caveats to note. First, the analysis was limited in geographic and temporal coverage, largely due to the limited availability of cloud-free Planet Scope imagery coinciding with Sentinel-1 acquisitions during the monsoon from which to generate validation samples. While the study covered different environments (mountainous riverine vs. flood plain) and different timing (dry vs. wet season), the results could differ for other areas of the world. For example, Chapman et al. [78] suggested that surface water mapping algorithms may have higher errors of omission in flooded forested areas, where the C-band SAR signal cannot penetrate the canopy to sense the underlying water and L-band SAR data are preferred. Conversely, Martinis et al. [79] found higher errors of commission in arid regions, where flat, smooth surfaces can return a signal similar to water in SAR imagery. Furthermore, the SAR data were pre-processed with a Lee Sigma filter; different speckle filtering algorithms (such as a Refined Lee filter) could yield adjusted inputs to the surface water algorithms [80]. Lastly, additional parameters can be provided to the algorithms that were not discussed in this paper. While tuning parameters can provide highly customized results for a specific case, this study used the default parameters described in the Methods section to establish baseline accuracies applicable to other geographic regions. Exploring all possible parameter combinations would offer a thorough comparison of surface water mapping parameterization for different environments, but is outside the scope of this study.

3.2. Future Work

As stated in Section 3.1, there is additional work that can build upon this study. First, the Sentinel-1 surface water algorithms will be further evaluated across other areas in South Asia and other regions of the world that experience regular flooding (e.g., the Amazon or Niger River basins). The statistical analysis performed here depends on the ground validation samples used and may yield different results for different samples; therefore, additional work is needed to explore whether the approaches produce the same level of accuracy, and whether differences between them are statistically significant, in other regions. Expanding the validation work initiated in this study will be essential for exploring the various surface water mapping approaches and their parameterization, with additional reference data, to provide highly accurate surface water maps for a variety of environments. Additionally, these surface water mapping methods can be evaluated using additional datasets such as the Landsat series, Sentinel-2, and Sentinel-1 data processed with other pre-processing software suites (e.g., the Gamma software; https://www.gamma-rs.ch/software). Data fusion methods [23,81] can also be explored using the generated surface water maps to provide consistent surface water extent data at high temporal resolution when leveraging multi-sensor sources. Lastly, these methods, including the proposed future work, can be integrated into an automated surface water mapping system to deliver near real-time surface water maps for applications such as automated disaster response and flood risk mapping [56].
This work was conducted under the auspices of the SERVIR-Mekong project. SERVIR harnesses space and geospatial technologies to help decision makers integrate geospatial information into their decision-making processes. High-quality surface water maps will support efforts in regional land cover monitoring [82,83,84,85,86], food security [87], ecosystem monitoring [88,89], and water management applications [90]. The presented surface water maps, integrated with near real-time water information systems, have the potential to support effective decision-making processes for sustainable landscape management.

4. Conclusions

This study evaluated the performance of two pre-processed Sentinel-1 SAR datasets (GRD and RTC products) as inputs to two unsupervised histogram-based surface water mapping algorithms, referred to as the Bmax Otsu and Edge Otsu algorithms. The objectives were to understand how different SAR pre-processing steps (with and without terrain correction) influence surface water mapping results and how the different algorithms perform with each input. Surface water maps were generated for 19 days in 2019 over Myanmar and Cambodia and compared with user-interpreted validation points from Planet Scope imagery. The results highlighted that the Edge Otsu algorithm performed best with the RTC inputs, with an accuracy of around 94%–95%. Alternatively, the Bmax Otsu algorithm performed similarly with both input datasets, with accuracies from 92% to 94%. The difference in the Edge Otsu algorithm's performance across input datasets suggests it is more sensitive to the input data than the Bmax Otsu algorithm, particularly when sampling a bimodal histogram. Although the overall accuracies of the methods are not significantly different, the Edge Otsu algorithm showed higher errors of omission whereas the Bmax Otsu algorithm showed slightly higher errors of commission. The Bmax Otsu and Edge Otsu algorithms achieved higher spatial agreement when using the RTC data as inputs. While there are differences in accuracy, no statistically significant differences were found between the approaches' errors. The results from this study can help inform remote sensing users on how data inputs and algorithms affect results when producing operational surface water maps.
To expand on this study, additional software will be used to process RTC products, and the automated surface water algorithms will be evaluated in other regions of the globe with more regionally detailed parameterizations and sensitivity analyses.

Supplementary Materials

Source code for the processing of the raw Sentinel-1 data to RTC products is available at: https://github.com/Servir-Mekong/sentinel-1-pipeline. The source code for the Bmax and Edge Otsu algorithms implemented on GEE with the JavaScript API is available at https://code.earthengine.google.com/?accept_repo=users/kelmarkert/hydrafloods with the code used to export this study's surface water maps available at: https://code.earthengine.google.com/2a24d1887bc42a9617dffdfe64a92e11. An example for sampling the validation points from surface water maps and exporting the results is available at: https://code.earthengine.google.com/63216c599e76f6b018c222b4455c82b2

Author Contributions

Conceptualization: K.N.M., A.M.M., T.M., A.H., A.P., F.C., N.C.; methodology: K.N.M., A.M.M., T.M., C.N., A.H., A.P., F.C.; data: K.N.M., C.N., A.P., B.B., N.S.T., N.C.; software: K.N.M., A.H., A.P., B.B., T.K., M.K., N.C.; validation: K.N.M., T.M., C.N., A.H.; visualization: K.N.M., A.M.M., T.M., C.N.; supervision: P.T., D.S.; writing—original draft preparation: K.N.M., T.M., C.N., A.M.M.; writing—review and editing: K.N.M., A.M.M., T.M., C.N., A.H., A.P., B.B., N.S.T., T.K., F.C., M.K., K.P., N.C., P.T. and D.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the joint US Agency for International Development (USAID) and National Aeronautics and Space Administration (NASA) initiative SERVIR-Mekong, Cooperative Agreement Number: AID-486-A-14-00002. Individuals affiliated with the University of Alabama in Huntsville (UAH) are funded through the NASA Applied Sciences Capacity Building Program, NASA Cooperative Agreement: NNM11AA01A.

Acknowledgments

The authors would like to thank the data providers, NASA and the EU Copernicus program, for making data freely available. This analysis contains modified Copernicus Sentinel data (2019), processed by ESA. The high-resolution Planet data for this study was provided by the NASA Commercial Smallsat Data Acquisition Program Pilot. We extend our appreciation to the three anonymous reviewers for their comments that ultimately improved the quality of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest. The funding agents had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Ali, M.; Clausi, D. Using the Canny edge detector for feature extraction and enhancement of remote sensing images. In Proceedings of the IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No. 01CH37217), Sydney, NSW, Australia, 9–13 July 2001; Volume 5, pp. 2298–2300. [Google Scholar]
  2. Ahamed, A.; Bolten, J.D. A MODIS-based automated flood monitoring system for southeast asia. Int. J. Appl. Earth Obs. Geoinf. 2017, 61, 104–117. [Google Scholar] [CrossRef] [Green Version]
  3. Poortinga, A.; Bastiaanssen, W.; Simons, G.; Saah, D.; Senay, G.; Fenn, M.; Bean, B.; Kadyszewski, J. A self-calibrating runoff and streamflow remote sensing model for ungauged basins using open-access earth observation data. Remote Sens. 2017, 9, 86. [Google Scholar] [CrossRef] [Green Version]
  4. Tolentino, P.L.M.; Poortinga, A.; Kanamaru, H.; Keesstra, S.; Maroulis, J.; David, C.P.C.; Ritsema, C.J. Projected impact of climate change on hydrological regimes in the Philippines. PLoS ONE 2016, 11, e0163941. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Oddo, P.C.; Bolten, J.D. The Value of Near Real-Time Earth Observations for Improved Flood Disaster Response. Front. Environ. Sci. 2019, 7, 127. [Google Scholar] [CrossRef] [Green Version]
  6. Liu, C.C.; Shieh, M.C.; Ke, M.S.; Wang, K.H. Flood Prevention and Emergency Response System Powered by Google Earth Engine. Remote Sens. 2018, 10, 1283. [Google Scholar] [CrossRef] [Green Version]
  7. Du, Z.; Li, W.; Zhou, D.; Tian, L.; Ling, F.; Wang, H.; Gui, Y.; Sun, B. Analysis of Landsat-8 OLI imagery for land surface water mapping. Remote Sens. Lett. 2014, 5, 672–681. [Google Scholar] [CrossRef]
  8. Ji, L.; Geng, X.; Sun, K.; Zhao, Y.; Gong, P. Target detection method for water mapping using Landsat 8 OLI/TIRS imagery. Water 2015, 7, 794–817. [Google Scholar] [CrossRef] [Green Version]
  9. Yang, X.; Zhao, S.; Qin, X.; Zhao, N.; Liang, L. Mapping of urban surface water bodies from Sentinel-2 MSI imagery at 10 m resolution via NDWI-based image sharpening. Remote Sens. 2017, 9, 596. [Google Scholar] [CrossRef] [Green Version]
  10. Yilmaz, K.K.; Adler, R.F.; Tian, Y.; Hong, Y.; Pierce, H.F. Evaluation of a satellite-based global flood monitoring system. Int. J. Remote Sens. 2010, 31, 3763–3782. [Google Scholar] [CrossRef]
  11. Fayne, J.V.; Bolten, J.D.; Doyle, C.S.; Fuhrmann, S.; Rice, M.T.; Houser, P.R.; Lakshmi, V. Flood mapping in the lower Mekong River Basin using daily MODIS observations. Int. J. Remote Sens. 2017, 38, 1737–1757. [Google Scholar] [CrossRef]
  12. Huang, C.; Chen, Y.; Zhang, S.; Li, L.; Shi, K.; Liu, R. Surface water mapping from Suomi NPP-VIIRS imagery at 30 m resolution via blending with Landsat data. Remote Sens. 2016, 8, 631. [Google Scholar] [CrossRef] [Green Version]
  13. Li, S.; Sun, D.; Goldberg, M.D.; Sjoberg, B.; Santek, D.; Hoffman, J.P.; DeWeese, M.; Restrepo, P.; Lindsey, S.; Holloway, E. Automatic near real-time flood detection using Suomi-NPP/VIIRS data. Remote Sens. Environ. 2018, 204, 672–689. [Google Scholar] [CrossRef]
  14. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033. [Google Scholar] [CrossRef]
  15. Feyisa, G.L.; Meilby, H.; Fensholt, R.; Proud, S.R. Automated Water Extraction Index: A new technique for surface water mapping using Landsat imagery. Remote Sens. Environ. 2014, 140, 23–35. [Google Scholar] [CrossRef]
  16. Zhou, Y.; Dong, J.; Xiao, X.; Xiao, T.; Yang, Z.; Zhao, G.; Zou, Z.; Qin, Y. Open surface water mapping algorithms: A comparison of water-related spectral indices and sensors. Water 2017, 9, 256. [Google Scholar] [CrossRef]
  17. Jones, J.W. Efficient wetland surface water detection and monitoring via landsat: Comparison with in situ data from the everglades depth estimation network. Remote Sens. 2015, 7, 12503–12538. [Google Scholar] [CrossRef] [Green Version]
  18. Jones, J.W. Improved automated detection of subpixel-scale inundation—Revised dynamic surface water extent (dswe) partial surface water tests. Remote Sens. 2019, 11, 374. [Google Scholar] [CrossRef] [Green Version]
  19. Donchyts, G.; Baart, F.; Winsemius, H.; Gorelick, N.; Kwadijk, J.; Van De Giesen, N. Earth’s surface water change over the past 30 years. Nat. Clim. Chang. 2016, 6, 810–813. [Google Scholar] [CrossRef]
  20. Donchyts, G.; Schellekens, J.; Winsemius, H.; Eisemann, E.; Van de Giesen, N. A 30 m resolution surface water mask including estimation of positional and thematic differences using landsat 8, srtm and openstreetmap: A case study in the Murray-Darling Basin, Australia. Remote Sens. 2016, 8, 386. [Google Scholar] [CrossRef] [Green Version]
  21. Pekel, J.F.; Cottam, A.; Gorelick, N.; Belward, A.S. High-resolution mapping of global surface water and its long-term changes. Nature 2016, 540, 418–422. [Google Scholar] [CrossRef]
  22. Isikdogan, F.; Bovik, A.C.; Passalacqua, P. Surface water mapping by deep learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4909–4918. [Google Scholar] [CrossRef]
  23. Markert, K.N.; Chishtie, F.; Anderson, E.R.; Saah, D.; Griffin, R.E. On the merging of optical and SAR satellite imagery for surface water mapping applications. Results Phys. 2018, 9, 275–277. [Google Scholar] [CrossRef]
  24. Sanyal, J.; Lu, X. Application of remote sensing in flood management with special reference to monsoon Asia: A review. Nat. Hazards 2004, 33, 283–301. [Google Scholar] [CrossRef]
  25. Psomiadis, E. Flash flood area mapping utilising SENTINEL-1 radar data. In Earth Resources and Environmental Remote Sensing/GIS Applications VII. International Society for Optics and Photonics; SPIE Remote Sensing: Edinburgh, UK, 2016; Volume 10005, p. 100051G. [Google Scholar]
  26. Elkhrachy, I. Assessment and management flash flood in Najran Wady using GIS and remote sensing. J. Indian Soc. Remote Sens. 2018, 46, 297–308. [Google Scholar] [CrossRef]
  27. Clement, M.; Kilsby, C.; Moore, P. Multi-temporal synthetic aperture radar flood mapping using change detection. J. Flood Risk Manag. 2018, 11, 152–168. [Google Scholar] [CrossRef]
  28. Amitrano, D.; Di Martino, G.; Iodice, A.; Riccio, D.; Ruello, G. Unsupervised rapid flood mapping using Sentinel-1 GRD SAR images. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3290–3299. [Google Scholar] [CrossRef]
  29. Yan, K.; Di Baldassarre, G.; Solomatine, D.P.; Schumann, G.J.P. A review of low-cost space-borne data for flood modelling: Topography, flood extent and water level. Hydrol. Process. 2015, 29, 3368–3387. [Google Scholar] [CrossRef]
  30. Torres, R.; Snoeij, P.; Geudtner, D.; Bibby, D.; Davidson, M.; Attema, E.; Potin, P.; Rommen, B.; Floury, N.; Brown, M.; et al. GMES Sentinel-1 mission. Remote Sens. Environ. 2012, 120, 9–24. [Google Scholar] [CrossRef]
  31. Flores-Anderson, A.I.; Herndon, K.E.; Thapa, R.B.; Cherrington, E. The SAR Handbook: Comprehensive Methodologies for Forest Monitoring and Biomass Estimation; SERVIR Global: Huntsville, AL, USA, 2019. [Google Scholar]
  32. Tsyganskaya, V.; Martinis, S.; Marzahn, P.; Ludwig, R. SAR-based detection of flooded vegetation – a review of characteristics and approaches. Int. J. Remote Sens. 2018, 39, 2255–2293. [Google Scholar] [CrossRef]
  33. Schumann, G.; Di Baldassarre, G.; Bates, P.D. The utility of spaceborne radar to render flood inundation maps based on multialgorithm ensembles. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2801–2807. [Google Scholar] [CrossRef]
  34. Chini, M.; Hostache, R.; Giustarini, L.; Matgen, P. A hierarchical split-based approach for parametric thresholding of SAR images: Flood inundation as a test case. IEEE Trans. Geosci. Remote Sens. 2017, 55, 6975–6988. [Google Scholar] [CrossRef]
  35. Olthof, I.; Tolszczuk-Leclerc, S. Comparing Landsat and RADARSAT for current and historical dynamic flood mapping. Remote Sens. 2018, 10, 780. [Google Scholar] [CrossRef] [Green Version]
  36. Horritt, M. A statistical active contour model for SAR image segmentation. Image Vis. Comput. 1999, 17, 213–224. [Google Scholar] [CrossRef]
  37. DeVries, B.; Huang, C.; Armston, J.; Huang, W.; Jones, J.W.; Lang, M.W. Rapid and robust monitoring of flood events using Sentinel-1 and Landsat data on the Google Earth Engine. Remote Sens. Environ. 2020, 240, 111664. [Google Scholar] [CrossRef]
  38. Westerhoff, R.S.; Kleuskens, M.P.H.; Winsemius, H.C.; Huizinga, H.J.; Brakenridge, G.R.; Bishop, C. Automated global water mapping based on wide-swath orbital synthetic-aperture radar. Hydrol. Earth Syst. Sci. 2013, 17, 651–663. [Google Scholar] [CrossRef] [Green Version]
  39. Benoudjit, A.; Guida, R. A novel fully automated mapping of the flood extent on SAR images using a supervised classifier. Remote Sens. 2019, 11, 779. [Google Scholar] [CrossRef] [Green Version]
  40. Shen, X.; Anagnostou, E.N.; Allen, G.H.; Brakenridge, G.R.; Kettner, A.J. Near-real-time non-obstructed flood inundation mapping using synthetic aperture radar. Remote Sens. Environ. 2019, 221, 302–315. [Google Scholar] [CrossRef]
  41. Landuyt, L.; Van Wesemael, A.; Schumann, G.J.; Hostache, R.; Verhoest, N.E.C.; Van Coillie, F.M.B. Flood Mapping Based on Synthetic Aperture Radar: An Assessment of Established Approaches. IEEE Trans. Geosci. Remote Sens. 2019, 57, 722–739. [Google Scholar] [CrossRef]
  42. Huang, W.; DeVries, B.; Huang, C.; Lang, M.W.; Jones, J.W.; Creed, I.F.; Carroll, M.L. Automated extraction of surface water extent from Sentinel-1 data. Remote Sens. 2018, 10, 797. [Google Scholar] [CrossRef] [Green Version]
  43. Filipponi, F. Sentinel-1 GRD Preprocessing Workflow. Proceedings 2019, 18, 11. [Google Scholar] [CrossRef] [Green Version]
  44. Wicks, D.; Jones, T.; Rossi, C. Testing the Interoperability of Sentinel 1 Analysis Ready Data Over the United Kingdom. In Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 8655–8658. [Google Scholar] [CrossRef]
  45. Uddin, K.; Matin, M.A.; Meyer, F.J. Operational Flood Mapping Using Multi-Temporal Sentinel-1 SAR Images: A Case Study from Bangladesh. Remote Sens. 2019, 11, 1581. [Google Scholar] [CrossRef] [Green Version]
  46. Truckenbrodt, J.; Freemantle, T.; Williams, C.; Jones, T.; Small, D.; Dubois, C.; Thiel, C.; Rossi, C.; Syriou, A.; Giuliani, G. Towards Sentinel-1 SAR Analysis-Ready Data: A Best Practices Assessment on Preparing Backscatter Data for the Cube. Data 2019, 4, 93. [Google Scholar] [CrossRef] [Green Version]
  47. Twele, A.; Cao, W.; Plank, S.; Martinis, S. Sentinel-1-based flood mapping: A fully automated processing chain. Int. J. Remote Sens. 2016, 37, 2990–3004. [Google Scholar] [CrossRef]
  48. Bioresita, F.; Puissant, A.; Stumpf, A.; Malet, J.P. A Method for Automatic and Rapid Mapping of Water Surfaces from Sentinel-1 Imagery. Remote Sens. 2018, 10, 217. [Google Scholar] [CrossRef] [Green Version]
  49. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  50. Kravtsova, V.; Mikhailov, V.; Kidyaeva, V. Hydrological regime, morphological features and natural territorial complexes of the Irrawaddy River Delta (Myanmar). Water Resour. 2009, 36, 243. [Google Scholar] [CrossRef]
  51. Hoang, L.P.; Lauri, H.; Kummu, M.; Koponen, J.; Van Vliet, M.; Supit, I.; Leemans, R.; Kabat, P.; Ludwig, F. Mekong River flow and hydrological extremes under climate change. Hydrol. Earth Syst. Sci. 2016, 20, 3027–3041. [Google Scholar] [CrossRef] [Green Version]
  52. Renaud, F.G.; Kuenzer, C. The Mekong Delta System: Interdisciplinary Analyses of a River Delta; Springer Science & Business Media: Dordrecht, The Netherlands, 2012. [Google Scholar]
  53. Taft, L.; Evers, M. A review of current and possible future human-water dynamics in Myanmar’s river basins. Hydrol. Earth Syst. Sci. 2016, 20, 4913. [Google Scholar] [CrossRef] [Green Version]
  54. Sen Roy, N.; Kaur, S. Climatology of monsoon rains of Myanmar (Burma). Int. J. Climatol. J. R. Meteorol. Soc. 2000, 20, 913–928. [Google Scholar] [CrossRef]
  55. Sein, Z.M.M.; Ogwang, B.A.; Ongoma, V.; Ogou, F.K.; Batebana, K. Inter-annual variability of summer monsoon rainfall over Myanmar in relation to IOD and ENSO. J. Environ. Agric. Sci. 2015, 4, 28–36. [Google Scholar]
  56. Phongsapan, K.; Chishtie, F.; Poortinga, A.; Bhandari, B.; Meechaiya, C.; Kunlamai, T.; Aung, K.S.; Saah, D.; Anderson, E.; Markert, K.; et al. Operational flood risk index mapping for disaster risk reduction using Earth Observations and cloud computing technologies: A case study on Myanmar. Front. Environ. Sci. 2019, 7, 191. [Google Scholar] [CrossRef] [Green Version]
  57. Farr, T.G.; Rosen, P.A.; Caro, E.; Crippen, R.; Duren, R.; Hensley, S.; Kobrick, M.; Paller, M.; Rodriguez, E.; Roth, L.; et al. The shuttle radar topography mission. Rev. Geophys. 2007, 45, RG2004. [Google Scholar] [CrossRef] [Green Version]
  58. Small, D. Flattening gamma: Radiometric terrain correction for SAR imagery. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3081–3093. [Google Scholar] [CrossRef]
  59. Lee, J.-S.; Wen, J.-H.; Ainsworth, T.L.; Chen, K.-S.; Chen, A.J. Improved Sigma Filter for Speckle Filtering of SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2009, 47, 202–213. [Google Scholar] [CrossRef]
  60. Huang, C.; Chen, Y.; Zhang, S.; Wu, J. Detecting, Extracting, and Monitoring Surface Water From Space Using Optical Sensors: A Review. Rev. Geophys. 2018, 56, 333–360. [Google Scholar] [CrossRef]
  61. Yamazaki, D.; Ikeshima, D.; Tawatari, R.; Yamaguchi, T.; O’Loughlin, F.; Neal, J.C.; Sampson, C.C.; Kanae, S.; Bates, P.D. A high-accuracy map of global terrain elevations. Geophys. Res. Lett. 2017, 44, 5844–5853. [Google Scholar] [CrossRef] [Green Version]
  62. Tadono, T.; Takaku, J.; Tsutsui, K.; Oda, F.; Nagai, H. Status of “ALOS World 3D (AW3D)” global DSM generation. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 3822–3825. [Google Scholar]
  63. Nobre, A.; Cuartas, L.; Hodnett, M.; Rennó, C.; Rodrigues, G.; Silveira, A.; Waterloo, M.; Saleska, S. Height Above the Nearest Drainage – A hydrologically relevant new terrain model. J. Hydrol. 2011, 404, 13–29. [Google Scholar] [CrossRef] [Green Version]
  64. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  65. Cao, H.; Zhang, H.; Wang, C.; Zhang, B. Operational flood detection using Sentinel-1 SAR data over large areas. Water 2019, 11, 786. [Google Scholar] [CrossRef] [Green Version]
  66. Demirkaya, O.; Asyali, M.H. Determination of image bimodality thresholds for different intensity distributions. Signal Process. Image Commun. 2004, 19, 507–516. [Google Scholar] [CrossRef]
  67. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698. [Google Scholar]
  68. Lister, T.W.; Lister, A.J.; Alexander, E. Land use change monitoring in Maryland using a probabilistic sample and rapid photointerpretation. Appl. Geogr. 2014, 51, 1–7. [Google Scholar] [CrossRef]
  69. Woodward, B.D.; Evangelista, P.H.; Young, N.E.; Vorster, A.G.; West, A.M.; Carroll, S.L.; Girma, R.K.; Hatcher, E.Z.; Anderson, R.; Vahsen, M.L.; et al. CO-RIP: A riparian vegetation and corridor extent dataset for Colorado River Basin streams and rivers. ISPRS Int. J. Geo-Inf. 2018, 7, 397. [Google Scholar] [CrossRef] [Green Version]
  70. Kohavi, R. A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection. Ijcai 1995, 14, 1137–1143. [Google Scholar]
  71. Hughes, M.J.; Kennedy, R. High-Quality Cloud Masking of Landsat 8 Imagery Using Convolutional Neural Networks. Remote Sens. 2019, 11, 2591. [Google Scholar] [CrossRef] [Green Version]
  72. Ngo, T.; Mazet, V.; Collet, C.; De Fraipont, P. Shape-Based Building Detection in Visible Band Images Using Shadow Information. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 920–932. [Google Scholar] [CrossRef]
  73. Yang, H.L.; Yuan, J.; Lunga, D.; Laverdiere, M.; Rose, A.; Bhaduri, B. Building Extraction at Scale Using Convolutional Neural Network: Mapping of the United States. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2600–2614. [Google Scholar] [CrossRef] [Green Version]
  74. McNemar, Q. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 1947, 12, 153–157. [Google Scholar] [CrossRef]
  75. Abdi, A.M. Land cover and land use classification performance of machine learning algorithms in a boreal landscape using Sentinel-2 data. Gisci. Remote Sens. 2020, 57, 1–20. [Google Scholar] [CrossRef] [Green Version]
  76. Planet, T. Planet Application Program Interface: In Space for Life on Earth; Planet Labs Inc.: San Francisco, CA, USA, 2017; Volume 40, p. 2017. [Google Scholar]
  77. Yao, Y.; Jiang, Z.; Zhang, H.; Zhao, D.; Cai, B. Ship detection in optical remote sensing images based on deep convolutional neural networks. J. Appl. Remote Sens. 2017, 11, 1–12. [Google Scholar] [CrossRef]
  78. Chapman, B.; McDonald, K.; Shimada, M.; Rosenqvist, A.; Schroeder, R.; Hess, L. Mapping Regional Inundation with Spaceborne L-Band SAR. Remote Sens. 2015, 7, 5440–5470. [Google Scholar] [CrossRef] [Green Version]
  79. Martinis, S.; Plank, S.; Ćwik, K. The Use of Sentinel-1 Time-Series Data to Improve Flood Monitoring in Arid Areas. Remote Sens. 2018, 10, 583. [Google Scholar] [CrossRef] [Green Version]
  80. Kulkarni, S.; Kedar, M.; Rege, P.P. Comparison of Different Speckle Noise Reduction Filters for RISAT -1 SAR Imagery. In Proceedings of the 2018 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 14–16 July 2018; pp. 537–541. [Google Scholar]
  81. Bioresita, F.; Puissant, A.; Stumpf, A.; Malet, J.P. Fusion of Sentinel-1 and Sentinel-2 image time series for permanent and temporary surface water mapping. Int. J. Remote Sens. 2019, 40, 9026–9049. [Google Scholar] [CrossRef]
  82. Potapov, P.; Tyukavina, A.; Turubanova, S.; Talero, Y.; Hernandez-Serna, A.; Hansen, M.; Saah, D.; Tenneson, K.; Poortinga, A.; Aekakkararungroj, A.; et al. Annual continuous fields of woody vegetation structure in the Lower Mekong region from 2000–2017 Landsat time-series. Remote Sens. Environ. 2019, 232, 111278. [Google Scholar] [CrossRef]
  83. Poortinga, A.; Tenneson, K.; Shapiro, A.; Nquyen, Q.; San Aung, K.; Chishtie, F.; Saah, D. Mapping plantations in Myanmar by fusing landsat-8, sentinel-2 and sentinel-1 data along with systematic error quantification. Remote Sens. 2019, 11, 831. [Google Scholar] [CrossRef] [Green Version]
  84. Saah, D.; Tenneson, K.; Poortinga, A.; Nguyen, Q.; Chishtie, F.; San Aung, K.; Markert, K.N.; Clinton, N.; Anderson, E.R.; Cutter, P.; et al. Primitives as building blocks for constructing land cover maps. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 101979. [Google Scholar] [CrossRef]
  85. Saah, D.; Johnson, G.; Ashmall, B.; Tondapu, G.; Tenneson, K.; Patterson, M.; Poortinga, A.; Markert, K.; Quyen, N.H.; San Aung, K.; et al. Collect Earth: An online tool for systematic reference data collection in land cover and use applications. Environ. Model. Softw. 2019, 118, 166–171. [Google Scholar] [CrossRef]
  86. Saah, D.; Tenneson, K.; Matin, M.; Uddin, K.; Cutter, P.; Poortinga, A.; Ngyuen, Q.H.; Patterson, M.; Johnson, G.; Markert, K.; et al. Land cover mapping in data scarce environments: Challenges and opportunities. Front. Environ. Sci. 2019, 7, 150. [Google Scholar] [CrossRef] [Green Version]
  87. Poortinga, A.; Nguyen, Q.; Tenneson, K.; Troy, A.; Bhandari, B.; Ellenburg, W.L.; Aekakkararungroj, A.; Ha, L.T.; Pham, H.; Nguyen, G.V.; et al. Linking earth observations for assessing the food security situation in Vietnam: A landscape approach. Front. Environ. Sci. 2019, 7, 186. [Google Scholar] [CrossRef] [Green Version]
  88. Poortinga, A.; Clinton, N.; Saah, D.; Cutter, P.; Chishtie, F.; Markert, K.N.; Anderson, E.R.; Troy, A.; Fenn, M.; Tran, L.H.; et al. An operational before-after-control-impact (BACI) designed platform for vegetation monitoring at planetary scale. Remote Sens. 2018, 10, 760. [Google Scholar] [CrossRef] [Green Version]
  89. Simons, G.; Poortinga, A.; Bastiaanssen, W.G.; Saah, D.; Troy, D.; Hunink, J.; Klerk, M.D.; Rutten, M.; Cutter, P.; Rebelo, L.M.; et al. On Spatially Distributed Hydrological Ecosystem Services: Bridging the Quantitative Information Gap Using Remote Sensing and Hydrological Models; FutureWater: Wageningen, The Netherlands, 2017. [Google Scholar]
  90. Aekakkararungroj, A.; Chishtie, F.; Poortinga, A.; Mehmood, H.; Anderson, E.; Munroe, T.; Cutter, P.; Loketkawee, N.; Tondapu, G.; Towashiraporn, P.; et al. A publicly available GIS-based web platform for reservoir inundation mapping in the lower Mekong region. Environ. Model. Softw. 2020, 123, 104552. [Google Scholar] [CrossRef]
Figure 1. Study area in Southeast Asia focused on portions of the Upper Irrawaddy river system in northern Myanmar and the Tonlé Sap sub-watershed of Cambodia during the 2019 wet and dry seasons. Bounded areas indicate the spatial coverage of PlanetScope imagery in Myanmar (orange) and Cambodia (blue).
Figure 2. The surface water mapping effort utilized two processing workflows differentiated by the algorithm employed: Bmax Otsu (outlined in blue) and Edge Otsu (outlined in green). Each workflow employed the same two pre-processing data streams ("RTC" and "GRD") and the same post-processing steps.
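Both workflows described above ultimately rely on Otsu's histogram thresholding to separate water from non-water backscatter. The following is a minimal standalone NumPy sketch of Otsu's method, not the authors' Google Earth Engine implementation; the bimodal toy sample and its modes near -20 dB (water-like) and -8 dB (land-like) are purely illustrative.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold that maximizes between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()                 # bin probabilities
    w0 = np.cumsum(p)                     # weight of the below-threshold class
    w1 = 1.0 - w0                         # weight of the above-threshold class
    m = np.cumsum(p * centers)            # cumulative (unnormalized) class mean
    mT = m[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mT * w0 - m) ** 2 / (w0 * w1)  # between-class variance
    sigma_b = np.nan_to_num(sigma_b)      # guard empty-class endpoints
    return centers[np.argmax(sigma_b)]

# Toy bimodal sample: water-like backscatter near -20 dB, land-like near -8 dB
rng = np.random.default_rng(0)
vv = np.concatenate([rng.normal(-20, 1.5, 5000), rng.normal(-8, 2.0, 5000)])
t = otsu_threshold(vv)
water = vv < t  # values below the threshold are labeled water
```

The two algorithms compared in the paper differ mainly in how the histogram fed to this thresholding step is sampled (checkerboard tiles for Bmax Otsu versus buffered water edges for Edge Otsu), not in the thresholding itself.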
Figure 3. Interpreter decision tree utilized for sampling imagery. Circled numbers 1–4 (pink, blue, green, yellow) provide a visual representation of the decision points for sample classification [76].
Figure 4. Accuracy assessment for the two algorithms (Bmax Otsu and Edge Otsu) using the two datasets (GRD and RTC), showing the distribution of (a) overall accuracy, (b) precision/recall ratio, (c) Cohen's Kappa coefficient, and (d) F1 score. The violin plots show the distribution of values at different accuracy scores; the thicker the violin, the more values fall at that y-value.
Figure 5. Surface water maps generated over the Tonlé Sap River and Lake in Cambodia for 2019-10-05 using the Bmax Otsu algorithm with the GRD (a) and RTC (d) datasets, as well as the Edge Otsu algorithm with the GRD (b) and RTC (e) data. Difference maps between the two algorithms using either the GRD (c) or RTC (f) dataset are provided, as are difference maps between the two datasets using either the Bmax Otsu (g) or Edge Otsu (h) algorithm. Differences are shown in red where water is present only in Image 1 and in yellow where water is present only in Image 2; Image 1 is the first image used to compute the difference, reading from the top row or left column, and Image 2 is the second image across rows or down columns. Panel (i) shows the corresponding Planet imagery for the region and date.
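The spatial agreement summarized by these difference maps can be quantified with the intersection-over-union metric reported in the abstract (92.3%–96.4%). A minimal sketch, assuming two boolean water masks held as NumPy arrays; the 4x4 toy masks are purely illustrative, and the 0/1/2 category coding mirrors the agree/red/yellow scheme of the figure rather than any published data format.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union of two boolean water masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

def difference_map(mask_1, mask_2):
    """Categorical difference: 0 = agreement, 1 = water only in mask_1 (red),
    2 = water only in mask_2 (yellow)."""
    diff = np.zeros(mask_1.shape, dtype=np.uint8)
    diff[mask_1 & ~mask_2] = 1
    diff[~mask_1 & mask_2] = 2
    return diff

# Hypothetical 4x4 masks: mask m2 maps one extra water pixel at (0, 2)
m1 = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
m2 = np.array([[1,1,1,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
agreement = iou(m1, m2)        # intersection 4 / union 5 = 0.8
diff = difference_map(m1, m2)  # single yellow (category 2) pixel at (0, 2)
```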
Figure 6. Same as Figure 5 except highlighting surface water maps over the Irrawaddy River in Myanmar for 2019-07-28.
Figure 7. Histograms of the values sampled from the GRD (green) and RTC (blue) data using the Bmax Otsu (left) and Edge Otsu (right) algorithms for Cambodia (KH) and Myanmar (MM). Note the different scales on the y-axis due to the different number of pixels sampled by each method.
Table 1. Spatial and temporal extent of PlanetScope high-resolution imagery used for evaluation.
Country | Date | n Sample Points | n PlanetScope Scenes | Footprint Area (km²)
Cambodia | 2019-05-01 | 68 | 24 | 5138
Cambodia | 2019-05-02 | 80 | 25 | 5352
Cambodia | 2019-09-09 | 38 | 12 | 2253
Cambodia | 2019-09-11 | 87 | 48 | 7095
Cambodia | 2019-09-16 | 12 | 6 | 1177
Cambodia | 2019-09-21 | 22 | 7 | 1575
Cambodia | 2019-09-23 | 72 | 37 | 6421
Cambodia | 2019-10-03 | 134 | 64 | 11,230
Cambodia | 2019-10-04 | 39 | 23 | 4246
Cambodia | 2019-10-05 | 202 | 106 | 18,903
Cambodia | 2019-10-10 | 57 | 35 | 4825
Cambodia | 2019-10-15 | 112 | 47 | 9722
Cambodia | 2019-12-04 | 89 | 23 | 7760
Myanmar | 2019-07-16 | 51 | 41 | 2818
Myanmar | 2019-07-18 | 87 | 29 | 3859
Myanmar | 2019-07-21 | 15 | 56 | 2042
Myanmar | 2019-07-28 | 18 | 56 | 1689
Myanmar | 2019-08-02 | 7 | 95 | 1988
Myanmar | 2019-08-05 | 8 | 74 | 649
Table 2. Statistical evaluation of all input sample points for each algorithm (Bmax Otsu and Edge Otsu) and each dataset (GRD and RTC). Values are the mean of each metric across the different subsamples, with the standard deviation in parentheses.
Statistic | Bmax Otsu GRD | Bmax Otsu RTC | Edge Otsu GRD | Edge Otsu RTC
Overall Accuracy | 0.925 (0.007) | 0.925 (0.003) | 0.928 (0.005) | 0.943 (0.004)
Precision/Recall | 0.946 (0.028) | 0.996 (0.022) | 1.18 (0.033) | 1.17 (0.020)
Cohen Kappa | 0.804 (0.019) | 0.801 (0.009) | 0.800 (0.015) | 0.843 (0.011)
F1 Score | 0.855 (0.014) | 0.851 (0.007) | 0.846 (0.012) | 0.879 (0.009)
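All of the metrics reported in Table 2 can be derived from a binary confusion matrix over the validation samples. A pure-Python sketch follows; the eight-sample label lists in the usage comment are illustrative, not the study's validation data, and `binary_metrics` is a hypothetical helper name.

```python
def binary_metrics(y_true, y_pred):
    """Overall accuracy, precision/recall ratio, Cohen's kappa, and F1
    for binary water (1) / non-water (0) labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    n = tp + tn + fp + fn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's kappa: agreement beyond what chance alone would produce
    p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((tn + fn) / n) * ((tn + fp) / n)
    kappa = (accuracy - p_e) / (1 - p_e)
    return {"accuracy": accuracy, "precision_recall": precision / recall,
            "kappa": kappa, "f1": f1}

# Toy example: 3 true positives, 3 true negatives, 1 false positive, 1 false negative
scores = binary_metrics([1, 1, 1, 1, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 1])
```

A precision/recall ratio above 1 (as for Edge Otsu in Table 2) means precision exceeds recall, i.e., the water class is mapped conservatively.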
Table 3. Statistical significance testing using McNemar's test. The first row lists the method compared against the method in the first column. Values are the p-values of the test, where higher values indicate weaker evidence against the null hypothesis that there is no difference in accuracy between the methods. Values less than 0.05 indicate a statistically significant difference.
 | Bmax Otsu GRD | Bmax Otsu RTC | Edge Otsu GRD
Bmax Otsu RTC | 1.0 | |
Edge Otsu GRD | 0.790 | 0.790 |
Edge Otsu RTC | 0.159 | 0.159 | 0.253
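The p-values in Table 3 come from McNemar's test, which compares two classifiers using only the validation samples on which they disagree. A minimal sketch of the exact (binomial) form is given below; the discordant counts `b` and `c` in the usage line are hypothetical, and whether the authors used the exact or the chi-square variant is not stated in this excerpt.

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact (binomial) McNemar p-value.
    b: samples classified correctly only by method 1;
    c: samples classified correctly only by method 2."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of any difference
    k = min(b, c)
    # Probability under H0 (p = 0.5) of a split at least as extreme, doubled
    p = 2.0 * sum(comb(n, i) for i in range(k + 1)) / 2.0 ** n
    return min(p, 1.0)

# Hypothetical disagreement counts between two water-mapping methods
p_value = mcnemar_exact(12, 20)
```

Large p-values, as throughout Table 3, mean the paired error patterns of the methods are statistically indistinguishable at the 0.05 level.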

Share and Cite

MDPI and ACS Style

Markert, K.N.; Markert, A.M.; Mayer, T.; Nauman, C.; Haag, A.; Poortinga, A.; Bhandari, B.; Thwal, N.S.; Kunlamai, T.; Chishtie, F.; et al. Comparing Sentinel-1 Surface Water Mapping Algorithms and Radiometric Terrain Correction Processing in Southeast Asia Utilizing Google Earth Engine. Remote Sens. 2020, 12, 2469. https://0-doi-org.brum.beds.ac.uk/10.3390/rs12152469



