Article

Shoreline Extraction from WorldView2 Satellite Data in the Presence of Foam Pixels Using Multispectral Classification Method

by Audrey Minghelli, Jérôme Spagnoli, Manchun Lei, Malik Chami and Sabine Charmasson
1 Université de Toulon, SeaTech, CNRS, LIS Laboratory UMR 7020, 83041 Toulon, France
2 Université Paris-Est, LaSTIG, IGN, ENSG, 94160 Saint-Mandé, France
3 Sorbonne Université, CNRS-INSU, LATMOS, CEDEX, 06304 Nice, France
4 Institut de Radioprotection et de Sûreté Nucléaire (IRSN), Centre Ifremer, 83507 La Seyne sur Mer, France
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(16), 2664; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12162664
Submission received: 1 July 2020 / Revised: 10 August 2020 / Accepted: 16 August 2020 / Published: 18 August 2020

Abstract
Foam is often present in satellite images of coastal areas and can lead to serious errors in the detection of shorelines, especially when processing high spatial resolution images (<20 m). This study focuses on shoreline extraction and shoreline evolution using high spatial resolution satellite images in the presence of foam. A multispectral supervised classification technique, namely the Support Vector Machine (SVM), is selected and applied with three classes: land, foam and water. The merging of the water and foam classes, followed by a segmentation procedure, enables the separation of land and ocean pixels. The performance of the method is evaluated using a validation dataset acquired over two study areas (south and north of the bay of Sendai, Japan). On each site, WorldView-2 multispectral images (eight bands, 2 m resolution) were acquired before and after the tsunami generated by the Tohoku earthquake in 2011. The consideration of the foam class enables the false negative error to be reduced by a factor of three. The SVM method is also compared with four other classification methods, namely Euclidean Distance, Spectral Angle Mapper, Maximum Likelihood, and Neural Network. The SVM method appears to be the most efficient in determining the erosion and the accretion resulting from the tsunami, which are societal issues for littoral management purposes.

Graphical Abstract

1. Introduction

Shorelines mark the transition between land and sea. They are vulnerable to nearshore currents, human modification, winds and waves. It is estimated that there are more than 348,000 km of shorelines in the world and that 50% of the world’s population lives within 100 km of the coast [1]. The monitoring and management of shorelines are therefore of considerable social and economic importance. Furthermore, among the most serious consequences of climate change, sea-level rise threatens to significantly alter shorelines leading to erosion and coastal flooding [2].
Many satellite sensors currently enable the observation of land and ocean areas. High and medium spatial resolution satellite sensors are typically characterized by a medium spectral resolution and by a revisit period longer than 1 day. Examples of this type of sensor are the OLI (Operational Land Imager)/Landsat instrument (30 m, 7 bands, 16 days, NASA/USGS (United States Geological Survey), [3]), the multispectral SPOT (Satellite Pour l'Observation de la Terre) instrument (6 m, 4 bands, 26 days, CNES (Centre National d'Études Spatiales), [4]), the Pleiades instrument (2.8 m, 4 bands, 1 day, CNES, [5]), the MSI (MultiSpectral Instrument)/Sentinel-2 sensor (10 m, 13 bands, 5 days, ESA (European Space Agency), [6]) and the WorldView-2 sensor (2 m, 8 bands, 11 days, DigitalGlobe, [7]). Note that the Sentinel-2 MSI bands do not all have a 10 m resolution: bands 2, 3, 4 and 8 are at 10 m, bands 5, 6, 7, 8a, 11 and 12 at 20 m, and bands 1, 9 and 10 at 60 m. Low spatial resolution sensors are typically designed with more spectral bands than high and medium spatial resolution sensors, and they also show a higher radiometric sensitivity and a shorter revisit period [8]. Examples include the MODIS (Moderate-resolution Imaging Spectroradiometer)/Terra and Aqua instrument (1000 m, 9 bands, 1 day, NASA), the VIIRS (Visible Infrared Imaging Radiometer Suite) sensor (750 m, 8 bands, 1 day, NASA/NOAA (National Oceanic and Atmospheric Administration)), the OLCI (Ocean and Land Colour Instrument)/Sentinel-3 sensor (300 m, 21 bands, 1 day, ESA) and the GOCI (Geostationary Ocean Color Imager) instrument (500 m, 8 bands, 1 h, COMS (Communication, Ocean, and Meteorological Satellite)). Nevertheless, high spatial resolution is often preferred to a high revisit frequency because shoreline dynamics do not require a daily revisit period: a two-satellite constellation combined with sensor agility (the capability to adapt the viewing angle to the area of interest) is sufficient to monitor shoreline dynamics [3]. Regarding sensitivity, the signal-to-noise ratio (SNR) of MODIS (>700) is much higher than that of Sentinel-2 (<174) because the MODIS resolution is coarser. Since Sentinel-2 was designed to study land areas, whose reflectance is generally high, a high spatial resolution was preferred over a strong SNR.
Previous studies have investigated the potential of passive optical satellite data to study shoreline changes [9,10,11,12,13]. Shoreline detection methodologies can be divided into two approaches, namely the spatial and the spectral approaches. The spatial approach is based on the use of a single spectral band and a threshold value, possibly combined with morphological operations, segmentation and vectorization of optical or SAR (Synthetic Aperture Radar) images [14,15,16,17]. The spectral approach is based on the use of the spectral dimension of the pixels to organize them into classes through various procedures, such as unsupervised classification [18], the Normalized Difference Vegetation Index (NDVI) [7] or the Modified Normalized Difference Water Index (MNDWI) [19]. However, the optical signatures of the targets dealt with in this study, namely foam, water and land, vary strongly with wavelength. The performance of their detection is therefore improved when using highly spectrally resolved sensors such as hyperspectral instruments, and the use of only a few multispectral bands, as when exploiting spectral indices, is not an optimal technique for distinguishing between the various targets observed. The spectral approach can also be, more simply, a manual method [20]. All these methods have been applied to medium resolution images (~30 m) where foam is either not visible in the images or not present in the ocean pixels (e.g., at low wind speed). Several studies have focused on the automatic extraction of shorelines from medium resolution images (Landsat 8/OLI and Sentinel-2/MSI) using subpixel detection techniques [6,13,21,22,23,24,25]. However, none of them discusses the presence of foam. Pardo-Pascual et al. [21] indicate that “Shorelines obtained from the NIR (Near Infrared band) band have usually been accurate, but have shown to be more affected by whitewater and foam”. However, it remains unknown whether pixels containing foam are classified as water or as land. In this study, the contribution of the foam class was quantified by comparison with the results obtained when ignoring it. Furthermore, although the classification was performed at the pixel level, the 2 m spatial resolution means that this study can also help to understand what happens inside a 30 m resolution Landsat pixel, for example.
One of the most promising approaches for distinguishing foam pixels from water and land pixels is the multispectral classification technique, which exploits the differences in the spectral features of these three components (i.e., foam, land and water). Many classification methods have been developed for land classification. Yu et al. [26] compared the Euclidean Distance (ED), Maximum Likelihood (ML), Spectral Angle Mapper (SAM), and Support Vector Machine (SVM) classification methods to map land cover types using Tiangong-2 multispectral satellite data (CNSA (China National Space Administration), [27]). For the classification of land cover types in the Qinghai Lake area (China), the overall classification accuracy of the SVM technique was found to be the highest, followed by SAM, ED, and ML. For the land cover classification of the Taihu Lake area (China), the best performance was also obtained with the SVM method, followed by ED, SAM and ML. SVM classification has already been applied to multispectral images to automatically detect the shoreline [28,29], but none of these studies took foam into account, which is the originality of the presented method.
The Pacific coast of Tohoku in Japan, near the Fukushima prefecture, is an area of major interest since it was recently affected by an undersea megathrust earthquake of magnitude 9.0–9.1 (Mw) on 11 March 2011. The epicenter was approximately 70 km east of the Oshika Peninsula of Tohoku and the hypocenter was at an underwater depth of approximately 29 km [30]. It was the most powerful earthquake ever recorded in Japan, and the fourth most powerful earthquake in the world since modern record-keeping began in 1900 [31]. The earthquake triggered powerful tsunami waves that reached heights of up to 40.5 m in Miyako, in Tōhoku’s Iwate Prefecture, and which traveled up to 10 km inland in the Sendai area. Within the Fukushima prefecture, the average height of the waves, which reached up to 5 km inland, was 15 m. According to the Geospatial Information Authority of Japan, a surface area of approximately 560 km2 was flooded by the tsunami [32]. The tsunami destroyed many dykes [33] and hit four nuclear plants on the east coast of Japan, namely Onagawa, Fukushima Daini, Fukushima Daiichi and Tokai Daini [34]. The most significant damage concerned the Fukushima Daiichi Nuclear Power Plant (FDNPP), where the loss of external power and of the emergency cooling system led to the meltdown of nuclear fuel in the reactor cores and caused the release of a large quantity of radioactive material into the environment.
After the cataclysm, the scientific community was mobilized to evaluate the impact of the tsunami and of the nuclear accident on the environment and on the health of the population. The AMORAD (Amélioration des modèles de prévision de la dispersion et d’évaluation de l’impact des radionucléides au sein de l’environnement) project, led by the French institute IRSN (Institut de Radioprotection et de Sûreté Nucléaire), aims at improving radionuclide dispersion modeling and the assessment of its impact on both the marine and terrestrial environments [35]. Another goal of the AMORAD project is to monitor the evolution of the shoreline after the tsunami. In the case of Japan, surveys and measurements were difficult to carry out due to the high level of contamination and restricted access. However, remote sensing satellite data can be used to analyze the influence of the tsunami on the shoreline without in situ measurements. Remote sensing techniques are thus the most suitable for studying contaminated or inaccessible areas.
The objective of this study is to determine an effective classification method that enables the detection of the shoreline in very high resolution satellite images such as those from the WorldView-2 satellite. The method is based on the discrimination between land and ocean pixels in the presence of foam, which is often observed in coastal waters and clearly visible in high resolution images. The proposed method allows an assessment of the erosion and accretion processes induced not only by the tsunami but also by other environmental phenomena. Note that erosion and accretion are societal issues for coastal management purposes. To achieve our objective, the SVM classification method was selected and applied to four multispectral high resolution images acquired by the WorldView-2 sensor. Three pixel classes, namely land, foam and water, were considered, the water and foam classes being subsequently merged to obtain two final classes (land and ocean). The SVM method was then compared with four other multispectral supervised classification techniques: the Euclidean Distance (ED), the Spectral Angle Mapper (SAM), the Maximum Likelihood (ML), and the Neural Network (NN). It should be highlighted that the originality of this study lies in the consideration of foam pixels, thus allowing the exploitation of high resolution satellite data.
The paper is organized as follows: the data and the proposed methodology are presented in Section 2. The SVM classification method is applied to four satellite images and compared with validation data in Section 3. The SVM classification technique is compared with four other classification methods and discussed in Section 4.

2. Materials and Methods

2.1. Study Area

The study area is located around and south of the bay of Sendai (North-East Japan) (Figure 1). The coastal shore is composed of agricultural lands, natural landscapes and small towns. Several rivers such as the Abukuma, Takase, Maeda or Kuma rivers flow into the ocean in this area. Several harbors are located on the coast, which is made up of beaches and cliffs. The beaches are often protected from waves by dikes. The Fukushima Daiichi Nuclear Power Plant (FDNPP) is located in the south of the area.

2.2. Satellite Data

To analyze the evolution of the shoreline, satellite databases consisting of high spatial resolution images of the study area acquired before and after the tsunami were considered. Most of the satellite images of the surroundings of the FDNPP site were acquired by the WorldView-2 optical imaging sensor after the tsunami, because the Fukushima site was of no special interest before this event. The WorldView-2 data are provided by European Space Imaging. The WorldView-2 sensor provides high spatial resolution data of 2 m for 8 spectral bands ranging from 400 nm to 1040 nm. The red boxes shown in Figure 1 indicate the areas covered by the WorldView-2 sensor.
The benthic composition of the study site is principally sandy bottom [36]. In the southern area, the images were acquired on 8 November 2010 (before the tsunami) and on 10 February 2012 (after the tsunami). In the northern area, the images were acquired on 4 August 2010 (before the tsunami) and on 10 April 2012 (after the tsunami). Since WorldView-2 is a sun-synchronous satellite, all the images were acquired around 10:30 local time. Image registration was performed for these four images. Figure 2a shows the images acquired over the southern area, covering 3 × 26 km (the coastal area lies between 37°16′–37°30′N latitude and 141°01′–141°03′E longitude). Figure 2b shows the images acquired over the northern area, covering 5 × 19 km (the coastal area lies between 38°11′–38°01′N latitude and 140°54′–140°58′E longitude).
The satellite data were corrected for atmospheric effects by subtracting the Rayleigh reflectance and the aerosol reflectance (black pixel method [37]) and dividing by the atmospheric transmittance. The extent of the shoreline evolution can be seen in Figure 2a,b, which focus on the Ukedo and Yuriage harbors. For the sake of convenience, some of the figures in this paper show results for the Ukedo harbor site, which is located at 37.48°N and 141.04°E, but it should be highlighted that the analysis was carried out over the entire southern and northern areas.
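As an illustration, the per-band correction described above can be written as a small function; this is a minimal sketch assuming that the Rayleigh reflectance, the aerosol reflectance and the transmittance have already been estimated for each band (the array names are hypothetical):

```python
import numpy as np

def correct_atmosphere(rho_toa, rho_rayleigh, rho_aerosol, transmittance):
    """Convert top-of-atmosphere reflectance to surface reflectance.

    rho_toa       : (rows, cols, bands) TOA reflectance cube
    rho_rayleigh  : (bands,) Rayleigh reflectance per band
    rho_aerosol   : (bands,) aerosol reflectance per band (black-pixel estimate)
    transmittance : (bands,) total atmospheric transmittance per band
    """
    corrected = (rho_toa - rho_rayleigh - rho_aerosol) / transmittance
    return np.clip(corrected, 0.0, None)  # negative reflectances are not physical
```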

2.3. Methodology

A spectral-based method was used here to extract the shoreline even in the presence of foam pixels. The classification can be supervised or not, depending on the a priori knowledge of the user. Since the classes were known in our study, a supervised classification was preferred. Three classes were defined to represent our landscape, namely the water, foam and land classes. Three Regions of Interest (ROIs) were selected on the images and the mean spectral profile of each ROI was derived. Only one ROI per class was used for the classification, and sand was the only land type considered. Because no maximum distance was set, all the pixels were classified into one of these three classes. For urban areas, the spectral profile is often similar to the sand spectrum (i.e., reflectance increasing with wavelength for minerals, concrete, tar, etc.). Vegetation was also classified as sand because its reflectance is closer to the sand reflectance than to the foam or water reflectance. Here, the ROIs were selected by photo-interpretation. Nevertheless, the mean spectral reflectance of each class can also be taken from a spectral database, provided that the images have been corrected for atmospheric effects (so that they are normalized for the acquisition conditions and can be compared with normalized reflectances).
Figure 3 shows the mean spectral profile of each of the 3 classes. As an example, the land class is represented by a ROI spectral profile that is based on sand properties.
The spectral profile of the water class is consistent with oceanic spectral reflectance when chlorophyll and a moderate concentration of suspended particulate matter (SPM) are present in the water column [38]. An inversion of Lee’s model [39] gives concentrations of 1 mg m−3 of chlorophyll, 8 g m−3 of SPM, and a CDOM (Colored Dissolved Organic Matter) absorption at 440 nm of 0.07 m−1. Lee’s model is a direct semi-analytical radiative transfer model providing the remote sensing reflectance (denoted Rrs) as a function of the water composition (chlorophyll concentration, SPM concentration, and CDOM absorption coefficient at 440 nm). The inversion of this model is achieved by minimizing the Euclidean distance between the modeled and the measured reflectance through optimization. The minimization is performed by nonlinear least-squares curve fitting with bounds on each parameter. The outputs of the inversion are the optimized values of chlorophyll, SPM and CDOM. The spectral profile of the land class is consistent with a sand-like spectral reflectance, which increases with wavelength [40]. The spectral profile of the foam class is consistent with the profile given by [41]. The standard deviation is highest for the foam and land classes and lowest for the water class. This can be explained by the spatial heterogeneity of the foam and land reflectances compared to the spatial homogeneity of the water reflectance. The spectral profiles of the foam and land classes show similarities in the visible domain but differ in the near-infrared domain. The water spectral profile is easily distinguishable from the others. If pixels of turbid water are present in the image, they will not be misclassified as sand or foam: turbid waters consist mainly of water and show strong absorption in the blue, red and near-infrared domains. The spectral differences between turbid water and sand or foam reflectances are large enough that a turbid-water pixel will always be assigned to the water class and not to the sand or foam class.
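To illustrate the inversion principle only, the sketch below fits the three parameters (chlorophyll, SPM, CDOM absorption at 440 nm) by bounded nonlinear least squares. The forward model used here is a deliberately simplified stand-in, not the actual semi-analytical model of Lee et al.; all coefficients, bounds and the initial guess are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def rrs_forward(params, wl):
    """Toy forward model Rrs ~ 0.05 * bb / (a + bb). Crude constituent shapes,
    used only to make the inversion sketch self-contained; replace with the
    semi-analytical model of Lee et al. (1998) in practice."""
    chl, spm, a_cdom440 = params
    a_w = 0.005 + 0.0008 * (wl - 400)                        # crude water absorption trend
    a_phy = 0.06 * chl * np.exp(-((wl - 440) / 60.0) ** 2)   # phytoplankton peak near 440 nm
    a_cdom = a_cdom440 * np.exp(-0.017 * (wl - 440))         # CDOM exponential decay
    bb = 0.0005 + 0.01 * spm * (550.0 / wl)                  # particle backscattering
    return 0.05 * bb / (a_w + a_phy + a_cdom + bb)

def invert_rrs(rrs_meas, wl):
    """Retrieve (chlorophyll, SPM, a_CDOM(440)) by bounded least-squares fitting."""
    res = least_squares(
        lambda p: rrs_forward(p, wl) - rrs_meas,
        x0=[1.0, 5.0, 0.05],                               # initial guess (assumed)
        bounds=([0.01, 0.01, 0.001], [30.0, 100.0, 2.0]),  # assumed physical bounds
    )
    return res.x
```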
Based on Figure 3, it should be highlighted that the consideration of all the spectral bands is required to properly distinguish between the 3 classes using the classification technique. The use of only a single band associated with a threshold value would not be sufficient to obtain satisfactory results due to possible spectral similarities between classes, especially for the case where the selected band is inappropriate.
Support Vector Machines (SVM) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis [42]. For a given set of training examples, each pixel is marked as belonging to one of two categories. An SVM training algorithm builds a model that assigns new pixels to one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the pixels as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
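A minimal sketch of such a three-class pixel classification using the scikit-learn SVC implementation is given below; the ROI array names are hypothetical and the RBF kernel is an assumption, since the paper does not specify the kernel used.

```python
import numpy as np
from sklearn.svm import SVC

def train_and_classify(roi_water, roi_foam, roi_land, image):
    """roi_* are (n_pixels, 8) reflectance arrays taken from the ROIs;
    image is a (rows, cols, 8) reflectance cube."""
    X = np.vstack([roi_water, roi_foam, roi_land])
    y = np.concatenate([
        np.zeros(len(roi_water)),    # 0 = water
        np.ones(len(roi_foam)),      # 1 = foam
        2 * np.ones(len(roi_land)),  # 2 = land
    ])
    clf = SVC(kernel="rbf")          # multi-class handled internally (one-vs-one)
    clf.fit(X, y)

    rows, cols, bands = image.shape
    labels = clf.predict(image.reshape(-1, bands))
    return labels.reshape(rows, cols)
```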
Once the supervised classification is obtained, the foam and water pixels can be merged to provide the ocean class, and an ocean/land map can then be derived. At this stage, pixels located inland may be classified into the water class if they belong to a lake or a flooded area. Since the inland water pixels are not relevant for the purpose of shoreline detection, a segmentation process is then applied. This process consists of assigning adjacent pixels belonging to the same class to the same region. Each region is then numbered and the region corresponding to the sea (the largest one) is the only region kept. All the pixels not belonging to this region are considered to be land, which removes the inland waters and facilitates the extraction of the shoreline from the land/ocean map only. The flowchart of the overall methodology is presented in Figure 4.
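The merging and segmentation steps can be sketched with a connected-component labelling, for instance with SciPy; this is a sketch under the assumption that the class map uses integer labels for water, foam and land:

```python
import numpy as np
from scipy import ndimage

def extract_land_ocean(class_map, water_label=0, foam_label=1):
    """Merge water and foam into 'ocean', then keep only the largest connected
    ocean region (the open sea); inland water is reassigned to land."""
    ocean = np.isin(class_map, [water_label, foam_label])

    labelled, n_regions = ndimage.label(ocean)       # connected-component labelling
    if n_regions == 0:
        return np.zeros_like(class_map, dtype=np.uint8)

    sizes = ndimage.sum(ocean, labelled, index=range(1, n_regions + 1))
    sea_label = 1 + int(np.argmax(sizes))             # largest region = open sea
    return (labelled == sea_label).astype(np.uint8)   # 1 = ocean, 0 = land
```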
To evaluate the erosion and the accretion between the two dates, the land/ocean maps must be corrected for the normal tidal effects and for the subsidence induced by the tsunami. t1 refers to the date before the tsunami and t2 to the date after the tsunami. Based on the satellite acquisition times, the tidal effect was corrected over the image acquired at time t2 by taking into account the difference in sea level between the two dates (t2 and t1). If we denote the water level h1 (m) at t1 and h2 (m) at t2, the difference in sea level due to the normal tidal effect is h2 − h1. Considering that the tsunami induced a subsidence of 37 cm in this area [43], the difference in sea level is finally δh = h2 − h1 + 0.37 m. The slope at the shoreline was derived from a bathymetric survey [44] with a resolution of 2–3 m and an accuracy of 2%. The bathymetric survey was acquired after the tsunami, in 2013, but because this map was only used to calculate the slope at the shoreline, no subsidence correction of the map was needed. The shift (positive or negative depending on the sign of δh) of the shoreline is then given by Equation (1):
$$\mathrm{shift} = \frac{\delta h}{\tan(\mathrm{slope})} \qquad (1)$$
where tan(slope) is the tangent of the slope angle. The number of pixels to be removed or added at the shoreline location in the land/ocean map (depending on whether the shift is positive or negative) is the rounded value of the shift divided by the spatial resolution of the WorldView-2 sensor (i.e., 2 m).
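A minimal sketch of this correction, assuming the water levels are given in metres and the shoreline slope in radians (parameter names are illustrative):

```python
import numpy as np

def shoreline_shift_pixels(h1, h2, slope_rad, subsidence=0.37, pixel_size=2.0):
    """Horizontal shoreline shift (in pixels) between t1 and t2 due to the
    tide difference and the co-seismic subsidence (Equation (1))."""
    delta_h = h2 - h1 + subsidence           # sea-level difference (m)
    shift_m = delta_h / np.tan(slope_rad)    # horizontal shift (m)
    return int(round(shift_m / pixel_size))  # pixels to add/remove at the shoreline
```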
Since the performance of the SVM classification method will be compared in Section 4 (Discussion) with four other methods, namely the Euclidean Distance (ED), the Spectral Angle Mapper (SAM), the Maximum Likelihood (ML) and the Neural Network (NN), these methods are defined here.
To enhance the performance of the method, the wave run-up should also be taken into consideration, using a series of images acquired over a short period to obtain an average shoreline. Nevertheless, because the error associated with the wave run-up is not of the same order of magnitude as the erosion and accretion processes, this error was ignored in this study.
The Euclidean Distance (ED) is defined as $d(x, \bar{r}_k)$, the distance between the spectral reflectance of pixel $x$ and the mean spectral reflectance $\bar{r}_k$ of class $k$ (Equation (2)):
$$d(x, \bar{r}_k) = \sqrt{\sum_{i=1}^{N} \left( x_i - \bar{r}_{k,i} \right)^2} \qquad (2)$$
where $N$ is the number of spectral bands, $k$ is the class index, $i$ is the band index, $x$ is the spectral profile of one pixel and $\bar{r}_k$ is the mean spectral profile of class $k$. Richards et al. [45] determined the classification criterion for each pixel: if $d(x, \bar{r}_k) < d(x, \bar{r}_j)$ for all $j \neq k$, the pixel defined by vector $x$ belongs to class $k$.
The Spectral Angle Mapper (SAM) method is based on the calculation of the angle between the spectral profile of pixel $x$ and $\bar{r}_k$, the mean spectral profile of class $k$. The spectral angle $\alpha$ is defined by Richards et al. [45] as follows (Equation (3)):
$$\alpha(x, \bar{r}_k) = \cos^{-1}\!\left( \frac{\sum_{i=1}^{N} x_i \, \bar{r}_{k,i}}{\left( \sum_{i=1}^{N} x_i^2 \right)^{1/2} \left( \sum_{i=1}^{N} \bar{r}_{k,i}^{\,2} \right)^{1/2}} \right) \qquad (3)$$
If $\alpha(x, \bar{r}_k) < \alpha(x, \bar{r}_j)$ for all $j \neq k$, the pixel defined by vector $x$ belongs to class $k$.
The Maximum Likelihood (ML) classification method is based on the assumption that the statistics of each class in each band are normally distributed; the probability that a given pixel belongs to a specific class is then calculated. Unless a probability threshold is set, all pixels are classified. Each pixel is assigned to the class showing the highest probability, which is in fact the maximum likelihood [45]. Following [45], $\bar{r}_k$ and $R_k$ are, respectively, the mean profile and the covariance matrix of class $k$. The discriminant function $g(x, \bar{r}_k)$ for each pixel is defined as follows (Equation (4)):
$$g(x, \bar{r}_k) = -\ln\left| R_k \right| - \left( x - \bar{r}_k \right)^{T} R_k^{-1} \left( x - \bar{r}_k \right) \qquad (4)$$
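For illustration, the three per-pixel decision rules defined by Equations (2)–(4) can be written compactly with NumPy; the array shapes and names are assumptions, and the class statistics are supposed to have been estimated from the ROIs beforehand.

```python
import numpy as np

def classify_min_distance(pixels, class_means, metric="ed"):
    """Assign each pixel to the class minimising the ED or SAM criterion.
    pixels: (n, N) reflectances; class_means: (K, N) mean class spectra."""
    if metric == "ed":                                       # Equation (2)
        d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    else:                                                    # SAM, Equation (3)
        num = pixels @ class_means.T
        den = (np.linalg.norm(pixels, axis=1)[:, None]
               * np.linalg.norm(class_means, axis=1)[None, :])
        d = np.arccos(np.clip(num / den, -1.0, 1.0))
    return np.argmin(d, axis=1)

def classify_ml(pixels, class_means, class_covs):
    """Maximum likelihood: pick the class maximising g(x, r_k) (Equation (4))."""
    scores = []
    for mean, cov in zip(class_means, class_covs):
        diff = pixels - mean
        inv = np.linalg.inv(cov)
        g = -np.log(np.linalg.det(cov)) - np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores.append(g)
    return np.argmax(np.stack(scores, axis=1), axis=1)
```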
The Neural Network (NN) is composed of a large number of simple, interconnected neurons working in parallel within a network. The NN has the capability to develop an internal representation of the spectral profile that is presented as input to the network. The learning phase is accomplished through the dynamic adjustment of the network interconnection strengths (adaptive weights) associated with each neuron. Such a process, termed back-propagation, uses the desired outcome (class) and a defined input (training set) to provide feedback to the neural network. In this study, the inputs were the spectral reflectances of the image pixels and the outputs were the classes. The training set was composed of the reflectances of the pixels contained in each ROI associated with each output class. The network cycles through the training set until the synapse weights are such that the network correctly relates the defined input to the desired output. When presented with new data, the internal synapse weights excite or inhibit the firing of specific processing units (neurons). The pattern of these neuron firings segregates the input signals into the output classes [46]. In this study, a sigmoid (logistic) activation function was used.
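A back-propagation network with a logistic (sigmoid) activation of this kind could be sketched, for example, with scikit-learn; the hidden-layer size below is an arbitrary assumption since the paper does not report the network architecture, and the training arrays are the same hypothetical ROI data as in the SVM sketch.

```python
from sklearn.neural_network import MLPClassifier

def train_nn(X_train, y_train):
    """Back-propagation NN with a sigmoid activation (sketch)."""
    nn = MLPClassifier(hidden_layer_sizes=(16,),   # assumed size; not given in the paper
                       activation="logistic",      # sigmoid activation, as in the text
                       max_iter=2000)
    return nn.fit(X_train, y_train)
```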

2.4. Validation

For each classification method, the land/ocean maps were compared to the reference shoreline as a validation. The term “ocean” includes both foam and water. The reference shorelines were manually obtained using the visual inspection of the satellite image (Figure 5a). This validation was considered to be reliable because the human brain takes into account many contextual parameters such as the color, the shape and the texture of the shore that the computer ignores. Other studies used human expertise for the shoreline reference [47,48]. A land/ocean reference map is then generated from the reference shoreline. Ocean pixels are assigned the value of 1 (white color) while land pixels are assigned the value of 0 (black color) (Figure 5b).
Two types of comparisons are carried out to evaluate the performances of the classification methods:
-
the comparison between land/ocean maps for each classification and the reference map; such a comparison provides estimates of false positive and false negative errors.
-
the comparison between the erosion and accretion surface areas estimated for each classification and the reference values.

2.4.1. False Positive and False Negative Errors

The first comparison consists of a simple subtraction, pixel by pixel, between the land/ocean maps obtained for each classification method and the corresponding land/ocean reference maps. Such a subtraction enables the determination of whether the estimated shoreline is placed in the land or in the ocean area. The false positive error occurs when the estimated shoreline is located in the land area. The false negative error occurs when the estimated shoreline is located in the ocean area.
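Following these definitions (and the normalization by the number of reference ocean pixels used in Section 3), the two relative errors can be computed as in the sketch below, where the land/ocean maps are assumed to be binary arrays with 1 for ocean and 0 for land:

```python
import numpy as np

def fp_fn_errors(estimated_map, reference_map):
    """Relative false positive / false negative errors (in %) between an
    estimated land/ocean map and the reference (1 = ocean, 0 = land)."""
    n_ocean_ref = np.count_nonzero(reference_map == 1)
    false_positive = np.count_nonzero((estimated_map == 1) & (reference_map == 0))  # ocean on land
    false_negative = np.count_nonzero((estimated_map == 0) & (reference_map == 1))  # land in ocean
    return (100.0 * false_positive / n_ocean_ref,
            100.0 * false_negative / n_ocean_ref)
```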

2.4.2. Erosion and Accretion

The estimation of the erosion and accretion surface areas was obtained by subtraction as follows: the pre-tsunami land/ocean map was subtracted from the post-tsunami land/ocean map. A resulting value of 1 means an emergence of ocean (erosion), while a resulting value of −1 means an emergence of land (accretion). To estimate the erosion and accretion surface areas, the number of corresponding pixels was multiplied by the surface of one pixel (2 × 2 = 4 m2 for the WorldView-2 sensor).
The derived erosion and accretion surface areas can be compared with reference values using the Estimated Surface Relative Error (ESRE) as follows (Equation (5)):
$$\mathrm{ESRE}\,(\%) = \frac{\hat{S} - S_{ref}}{S_{ref}} \times 100 \qquad (5)$$
where $\hat{S}$ is the estimated surface area and $S_{ref}$ is the reference surface area obtained from the difference between the reference shorelines before and after the tsunami.
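A minimal sketch of the erosion/accretion estimation and of the ESRE computation, assuming binary land/ocean maps (1 = ocean) and the 2 m WorldView-2 pixel size:

```python
import numpy as np

PIXEL_AREA_M2 = 2.0 * 2.0   # WorldView-2 pixel footprint (4 m2)

def erosion_accretion(map_before, map_after):
    """Erosion and accretion areas (m2) from two land/ocean maps (1 = ocean)."""
    diff = map_after.astype(int) - map_before.astype(int)
    erosion = np.count_nonzero(diff == 1) * PIXEL_AREA_M2     # land became ocean
    accretion = np.count_nonzero(diff == -1) * PIXEL_AREA_M2  # ocean became land
    return erosion, accretion

def esre(surface_estimated, surface_reference):
    """Estimated Surface Relative Error (Equation (5)), in %."""
    return 100.0 * (surface_estimated - surface_reference) / surface_reference
```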

3. Results

Importance of the Consideration of the Foam Class

The SVM classification was applied to the four WorldView-2 images, first using three classes (land, foam and water) and then using only two classes (land and water), in order to evaluate the importance of adding the foam class to the classification. Figure 6 shows the difference between the two classifications for a sample of the image acquired on 8 November 2010 (southern area) and of the image acquired on 10 August 2010 (northern area). Yellow areas correspond to land pixels, dark blue areas to water pixels and light blue areas to foam pixels. The foam pixels were correctly assigned to the foam class when three classes were used (Figure 6a,e). When only two classes were used, the foam pixels were classified as land (Figure 6c,g). This is due to the fact that the spectral profile of foam is fairly similar to the spectral profile of land (Figure 3). The land/ocean maps obtained after merging the water and foam pixels (in the three-class case) and applying the segmentation process are reported in Figure 6b,d,f,h. In these maps, the inland water pixels have disappeared.
The relative false positive (resp. false negative) error was obtained by dividing the sum of the false positive (resp. false negative) pixels by the number of ocean pixels in the reference; the values are given for each image in Table 1. For the four dates, the relative errors were lower than 1% with three classes and lower than 2% with two classes, which is low considering the size of the ocean mask. Nevertheless, the introduction of the new class significantly decreased the false negative relative error, by a factor of up to 4.9 for 4 August 2010, because the foam pixels were no longer classified as land. Note that the false positive errors remained stable between two and three classes.
Table 2 presents the reference value of erosion and accretion in the two sites and the derived values when the SVM classification method was used for both two and three classes. The results show that taking into account the new class of foam substantially improves the estimation of erosion and the accretion. This is because the area covered by foam is of the same order as the erosion and accretion area. Thus, ignoring this class may result in significantly overestimating or underestimating the erosion or the accretion, typically by a factor of two or three.
Table 2 also shows that the erosion process caused by the tsunami was higher than the accretion process for the two sites; differences of 9.6 m2/km and 1.8 m2/km were observed for the southern and northern site, respectively. This means that the tsunami removed more material from the coast than it deposited. A similar phenomenon was observed after the tsunami induced by the Sumatra earthquake in 2004 [49]. Kench et al. [49] indicated that erosional and depositional impacts were observed on all islands. In general, changes were of a minor nature with a maximum reduction in the island area of 9% and an average of 3.75%. The tsunami accentuated predictable seasonal oscillations in shoreline change, including localized erosion reflected in fresh scarps and seepage gullies [49].
The results do not seem to be sensitive to the choice of ROIs because the reflectance shapes of sand, water and foam are stable whatever the site, once the images have been corrected for atmospheric effects. Even if the shore is composed of rocks rather than sand, the spectral profile will be similar (increasing with wavelength). In the southern area, the shore was composed of both rocks and sand, and all the pixels were correctly classified as land (Figure 6).
In the current study, it is noteworthy that the northern area was more affected by the tsunami than the southern area (Table 2). This is due to the composition of the shore, which is more compact and massive in the south than in the north, where it mainly consists of sandy beaches. The areas of the coastline most dramatically altered by the tsunami were beaches and river mouths, where the silt that had accumulated over a long period of time was swept to other areas. In the southern area, only river mouths were affected by the tsunami; a long stretch of the coastline is composed of rocks and dikes which protect it from erosion and accretion.

4. Discussion

4.1. Comparison of the SVM Method with Other Classification Methods

The results obtained by the SVM classification method described in Section 2 were compared with those of four other classification methods (ED, SAM, ML and NN). The spectral profiles used for these different methods were the same as those used for the SVM method. Figure 7 shows a sample of the image acquired on 8 November 2010 in the southern area and Figure 8 shows a sample of the image acquired on 10 August 2010 in the northern area. It can be observed that, for both images, the SAM and ML classification methods wrongly retrieved a significant number of foam pixels inland, which were not removed by the segmentation, whereas the ED (Figure 7 and Figure 8), the NN (Figure 7 and Figure 8) and the SVM (see Figure 6a) wrongly retrieved a significant number of water pixels inland, which were removed by the segmentation process. The SVM method thus remains more robust than the ED and NN.

4.2. Comparison of False Positive and False Negative Errors

For validation purposes, relative false positive and false negative errors were evaluated by comparing the retrieved pixels with the reference for each classification; the results are shown in Figure 9 (left) for the southern area and Figure 9 (right) for the northern area.
On 8 November 2010, the ED, SVM and NN methods provide the lowest false positive and false negative errors, whereas the SAM and ML classification techniques lead to errors of more than 4% (Figure 9). This is explained by the fact that, with SAM and ML, foam is largely detected inland. On 10 February 2012, 4 August 2010 and 10 April 2012, the SVM and NN methods remain the most effective techniques, showing errors of less than 2%. The ML method systematically exhibits the highest false positive and false negative errors, greater than 5%. False positive errors are often higher than false negative errors because water pixels erroneously appear inland, whereas land pixels are more rarely retrieved in water areas thanks to the new foam class.

4.3. Estimation of Erosion and Accretion

As in Table 2 for the SVM classification method, Table 3 shows the area of erosion and accretion per km of coast and the Estimated Surface Relative Error (ESRE, Equation (5)), which is the relative error on surface estimation compared to reference surfaces, for all the methods on both sites.
Table 3 clearly shows that the SVM method is highly efficient in estimating both the erosion due to the tsunami, with relative errors of 19% and −2% for the southern and northern sites respectively, and the accretion, with relative errors of 24% and 13%, respectively, compared to the reference values. The erosion process was higher than the accretion process for both sites.
For the ED method, which is the oldest and simplest classification method, each class is represented by a single spectrum and each pixel spectrum is compared to the class spectrum in absolute values using the Euclidean distance. This method can be sensitive to the presence of shadows. The most inaccurate result of the ED classification method occurred for the image acquired on 10 February 2012. This image is the one whose illumination is the most variable, owing to the presence of haze, which explains why the error was higher than for the other images.
The SAM distance is often used with multispectral images. The SAM technique compares the shapes of the spectra rather than their absolute values. This is especially interesting when the irradiance changes in the presence of clouds and shadows. The error obtained with the SAM distance was higher than 4% on 8 November 2010 and on 4 August 2010. For these two dates, foam pixels were retrieved inland (Figure 7 for the first date). As shown in Figure 3, the land and foam reflectances have almost the same spectral shape. Therefore, significant errors can be made when land pixels are classified as foam. The SAM distance is therefore not suitable to discriminate between land and foam pixels.
Contrary to the ED and SAM methods, the ML method takes into account the intra-class variability. Indeed, the ML method considers both the variances and covariances of the class signatures when assigning each pixel to one of the classes represented by the signatures. Assuming that the distribution of a class is normal, the class can be characterized by its mean vector and covariance matrix. Given these two characteristics, the statistical probability of membership is computed for each class and each pixel is assigned to the most probable class. Compared to the other methods, the ML error was always higher than 6%, except on 4 August 2010. This can be explained by the small sample of each class, which exhibits a weak variability.
The SVM and NN methods were efficient for all the dates because these two methods have a higher generalization capability, in particular with regard to small training sample sizes. These results are also confirmed by [50], who concluded that the SVM and NN algorithms provided better performance than SAM for classification using LISS (Linear Imaging Self Scanning System)-IV satellite sensor data. In the context of supervised crop type classification, [51] also concluded that the classification results were strongly influenced by the type of classifier: SVM classifiers outperformed random forest and NN in most cases, and the poorest results by far were obtained with ML classification. This conclusion was also confirmed by Yu et al. [26]. Our results show that the best methods to obtain the land/ocean maps, and then the shoreline extraction, are the SVM (19% and −2% on the erosion, 24% and 13% on the accretion) and the NN classification methods (45% and 43% on the erosion, −35% and 16% on the accretion).

5. Conclusions

The purpose of this study was to propose a method for monitoring shorelines using high spatial resolution images containing foam pixels, contrary to most previous studies, which ignore them. The method was first tested on WorldView-2 satellite images (2 m resolution and eight spectral bands) acquired over the east coast of Japan close to the FDNPP, which was severely damaged by the Tohoku tsunami. The SVM classification technique was first applied, and the new foam class led to a decrease in the false negative error by a factor of up to 4.9 over the four images. The SVM method was then compared with four other classification methods (ED, SAM, ML and NN). The SVM and NN methods were the most efficient, with false positive and false negative errors of less than 2%. The results also showed that the erosion and accretion processes caused by the tsunami were higher in the northern area (29.1 m2/km and 27.3 m2/km, respectively) than in the southern area (16.5 m2/km and 6.9 m2/km), due to the presence of beaches in the north. It was also shown that the erosion (29.1 m2/km and 16.5 m2/km, respectively) was higher than the accretion (27.3 m2/km and 6.9 m2/km) at both sites, as was observed after the tsunami induced by the Sumatra earthquake in 2004.

Author Contributions

Conceptualization, A.M.; Data curation, J.S. and M.L.; Formal analysis, A.M.; Funding acquisition, S.C.; Methodology, A.M. and M.C.; Project administration, S.C.; Software, J.S. and M.L.; Supervision, A.M.; Validation, J.S.; Writing—original draft, A.M. and J.S.; Writing—review and editing, M.C. and S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the French program “Investissement d’Avenir” run by the National Research Agency (AMORAD project, grant ANR-11-RSNR-0002).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Small, C.; Nicholls, R.J. A global analysis of human settlement in coastal zones. J. Coast. Res. 2003, 19, 584–599. [Google Scholar]
  2. Wong, P.P.; Losada, I.J.; Gattuso, J.-P.; Hinkel, J.; Khattabi, A.; McInnes, K.L.; Saito, Y.; Sallenger, A. Coastal systems and low-lying areas. Clim. Chang. 2014, 2104, 361–409. [Google Scholar]
  3. Blodget, H.; Taylor, P.; Roark, J. Shoreline changes along the Rosetta-Nile Promontory: Monitoring with satellite observations. Mar. Geol. 1991, 99, 67–77. [Google Scholar] [CrossRef]
  4. Ruiz-Beltran, A.P.; Astorga-Moar, A.; Salles, P.; Appendini, C.M. Short-term shoreline trend detection patterns using SPOT-5 image fusion in the northwest of Yucatan, Mexico. Estuar. Coasts 2019, 42, 1761–1773. [Google Scholar] [CrossRef]
  5. Collin, A.; Duvat, V.; Pillet, V.; Salvat, B.; James, D. Understanding Interactions between Shoreline Changes and Reef Outer Slope Morphometry on Takapoto Atoll (French Polynesia). J. Coast. Res. 2018, 85, 496–500. [Google Scholar] [CrossRef]
  6. Hagenaars, G.; de Vries, S.; Luijendijk, A.P.; de Boer, W.P.; Reniers, A.J. On the accuracy of automated shoreline detection derived from satellite imagery: A case study of the sand motor mega-scale nourishment. Coast. Eng. 2018, 133, 113–125. [Google Scholar] [CrossRef]
  7. Maglione, P.; Parente, C.; Vallario, A. Coastline extraction using high resolution WorldView-2 satellite imagery. Eur. J. Remote Sens. 2014, 47, 685–699. [Google Scholar] [CrossRef]
  8. Sylla, D.; Minghelli-Roman, A.; Blanc, P.; Mangin, A.; d’Andon, O.H.F. Fusion of multispectral images by extension of the pan-sharpening ARSIS method. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1781–1791. [Google Scholar] [CrossRef]
  9. Chen, W.-W.; Chang, H.-K. Estimation of shoreline position and change from satellite images considering tidal variation. Estuar. Coast. Shelf Sci. 2009, 84, 54–60. [Google Scholar] [CrossRef]
  10. Wang, H.; Bi, N.; Saito, Y.; Wang, Y.; Sun, X.; Zhang, J.; Yang, Z. Recent changes in sediment delivery by the Huanghe (Yellow River) to the sea: Causes and environmental implications in its estuary. J. Hydrol. 2010, 391, 302–313. [Google Scholar] [CrossRef]
  11. Kuleli, T.; Guneroglu, A.; Karsli, F.; Dihkan, M. Automatic detection of shoreline change on coastal Ramsar wetlands of Turkey. Ocean Eng. 2011, 38, 1141–1149. [Google Scholar] [CrossRef]
  12. Pardo-Pascual, J.E.; Almonacid-Caballer, J.; Ruiz, L.A.; Palomar-Vázquez, J. Automatic extraction of shorelines from Landsat TM and ETM+ multi-temporal images with subpixel precision. Remote Sens. Environ. 2012, 123, 1–11. [Google Scholar] [CrossRef] [Green Version]
  13. Toure, S.; Diop, O.; Kpalma, K.; Maiga, A.S. Shoreline Detection using Optical Remote Sensing: A Review. ISPRS Int. J. Geo Inf. 2019, 8, 75. [Google Scholar] [CrossRef] [Green Version]
  14. Ghoneim, E.; Mashaly, J.; Gamble, D.; Halls, J.; AbuBakr, M. Nile Delta exhibited a spatial reversal in the rates of shoreline retreat on the Rosetta promontory comparing pre-and post-beach protection. Geomorphology 2015, 228, 1–14. [Google Scholar] [CrossRef]
  15. Erteza, I.A. An Automatic Coastline Detector for Use with SAR Images; Sandia National Laboratories (SNL-NM): Albuquerque, NM, USA, 1998. [Google Scholar]
  16. Aedla, R.; Dwarakish, G.; Reddy, D.V. Automatic shoreline detection and change detection analysis of netravati-gurpurrivermouth using histogram equalization and adaptive thresholding techniques. Aquat. Procedia 2015, 4, 563–570. [Google Scholar] [CrossRef]
  17. Kale, S.; Acarli, D. Shoreline Change Monitoring in Atikhisar Reservoir by Using Remote Sensing and Geographic Information System (GIS). Fresenius Environ. Bull. 2019, 28, 4329. [Google Scholar]
  18. Mukhopadhyay, A.; Mukherjee, S.; Mukherjee, S.; Ghosh, S.; Hazra, S.; Mitra, D. Automatic shoreline detection and future prediction: A case study on Puri Coast, Bay of Bengal, India. Eur. J. Remote Sens. 2012, 45, 201–213. [Google Scholar] [CrossRef]
  19. Cao, W.; Zhou, Y.; Li, R.; Li, X. Mapping changes in coastlines and tidal flats in developing islands using the full time series of Landsat images. Remote Sens. Environ. 2020, 239, 111665. [Google Scholar] [CrossRef]
  20. Vivek, G.; Goswami, S.; Samal, R.; Choudhury, S. Monitoring of Chilika Lake mouth dynamics and quantifying rate of shoreline change using 30 m multi-temporal Landsat data. Data Brief 2019, 22, 595–600. [Google Scholar]
  21. Pardo-Pascual, J.; Sánchez-García, E.; Almonacid-Caballer, J.; Palomar-Vázquez, J.; de los Santos, E.P.; Fernández-Sarría, A.; Balaguer-Beser, Á. Assessing the accuracy of automatically extracted shorelines on microtidal beaches from Landsat 7, Landsat 8 and Sentinel-2 Imagery. Remote Sens. 2018, 10, 326. [Google Scholar] [CrossRef] [Green Version]
  22. Liu, Q.; Trinder, J.C.; Turner, I.L. Automatic super-resolution shoreline change monitoring using Landsat archival data: A case study at Narrabeen–Collaroy Beach, Australia. J. Appl. Remote Sens. 2017, 11, 016036. [Google Scholar] [CrossRef]
  23. Do, A.T.; de Vries, S.; Stive, M.J. The estimation and evaluation of shoreline locations, shoreline-change rates, and coastal volume changes derived from Landsat images. J. Coast. Res. 2019, 35, 56–71. [Google Scholar]
  24. Sánchez-García, E.; Palomar-Vázquez, J.; Pardo-Pascual, J.; Almonacid-Caballer, J.; Cabezas-Rabadán, C.; Gómez-Pujol, L. An efficient protocol for accurate and massive shoreline definition from mid-resolution satellite imagery. Coast. Eng. 2020, 160, 103732. [Google Scholar] [CrossRef]
  25. Cabezas-Rabadán, C.; Pardo-Pascual, J.E.; Palomar-Vázquez, J.; Ferreira, Ó.; Costas, S. Satellite Derived Shorelines at an Exposed Meso-tidal Beach. J. Coast. Res. 2020, 95, 1027–1031. [Google Scholar] [CrossRef]
  26. Yu, L.; Lan, J.; Zeng, Y.; Zou, J. Comparison of Land Cover Types Classification Methods Using Tiangong-2 Multispectral Image. In Proceedings of the Tiangong-2 Remote Sensing Application Conference, Beijing, China, 18 December 2018; Springer: Singapore, 2019; pp. 241–253. [Google Scholar]
  27. Qin, B.; Li, L.; Li, S. Data Quality Evaluation and Application Potential Analysis of TIANGONG-2 Wide-Band Imaging Spectrometer. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 3. [Google Scholar] [CrossRef] [Green Version]
  28. Kalkan, K.; Bayram, B.; Maktav, D.; Sunar, F. Comparison of support vector machine and object based classification methods for coastline detection. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 7, W2. [Google Scholar] [CrossRef] [Green Version]
  29. Zhang, H.; Jiang, Q.; Xu, J. Coastline Extraction Using Support Vector Machine from Remote Sensing Image. J. Multimed. 2013, 8, 175–182. [Google Scholar]
  30. Dunbar, P.; McCullough, H.; Mungov, G.; Varner, J.; Stroker, K. Tohoku earthquake and tsunami data available from the national oceanic and atmospheric administration/national geophysical data center. Geomat. Nat. Hazards Risk 2011, 2, 305–323. [Google Scholar] [CrossRef] [Green Version]
  31. Benz, H.; Ransom, C. USGS Updates Magnitude of Japan’s 2011 Tohoku Earthquake to 9.0; US Geological Survey Website; US Geological Survey: Reston, VA, USA, 2011. [Google Scholar]
  32. Liu, W.; Yamazaki, F.; Gokon, H.; Koshimura, S. Damage Detection of the 2011 Tohoku, Japan Earthquake from High-resolution SAR Intensity Images. In Proceedings of the 15th World Conference on Earthquake Engineering, Lisbon, Portugal, 24–28 September 2012. [Google Scholar]
  33. Raby, A.; Macabuag, J.; Pomonis, A.; Wilkinson, S.; Rossetto, T. Implications of the 2011 Great East Japan Tsunami on sea defence design. Int. Disaster Risk Reduct. 2015, 14, 332–346. [Google Scholar] [CrossRef] [Green Version]
  34. Baba, M. Fukushima accident: What happened? Radiat. Meas. 2013, 55, 17–21. [Google Scholar] [CrossRef]
  35. Minghelli, A.; Lei, M.; Charmasson, S.; Rey, V.; Chami, M. Monitoring suspended particle matter using GOCI satellite data after the tohoku (Japan) tsunami in 2011. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 567–576. [Google Scholar] [CrossRef]
  36. Ambe, D.; Kaeriyama, H.; Shigenobu, Y.; Fujimoto, K.; Ono, T.; Sawada, H.; Saito, H.; Miki, S.; Setou, T.; Morita, T.; et al. Five-minute resolved spatial distribution of radiocesium in sea sediment derived from the Fukushima Dai-ichi Nuclear Power Plant. J. Environ. Radioact. 2014, 138, 264–275. [Google Scholar] [CrossRef] [Green Version]
  37. Siegel, D.A.; Wang, M.; Maritorena, S.; Robinson, W. Atmospheric Correction of Satellite Ocean Color Imagery: The Black Pixel Assumption. Appl. Opt. 2000, 39, 3582–3591. [Google Scholar] [CrossRef]
  38. Lee, Z.; Carder, K.L.; Mobley, C.D.; Steward, R.G.; Patch, J.S. Hyperspectral remote sensing for shallow Waters. I. A semianalytical model. Appl. Opt. 1998, 37, 6329–6338. [Google Scholar] [CrossRef]
  39. Lee, Z.; Lubac, B.; Werdell, J.; Armone, R. An update of the quasi-analytical algorithm (QAA_v5). Int. Ocean Color Group Softw. Rep. 2009, 1–9. [Google Scholar]
  40. Gerbermann, A.; Neher, D. Reflectance of varying mixtures of a clay soil and sand. Photogramm. Eng. Remote Sens. 1979, 45, 1145–1151. [Google Scholar]
  41. Kokhanovsky, A. Spectral reflectance of whitecaps. J. Geophys. Res. Ocean. 2004, 109, C05021. [Google Scholar] [CrossRef]
  42. Foody, G.M.; Mathur, A. Toward intelligent training of supervised image classifications: Directing training data acquisition for SVM classification. Remote Sens. Environ. 2004, 93, 107–117. [Google Scholar] [CrossRef]
  43. Imakiire, T.; Koarai, M. Wide-area land subsidence caused by “the 2011 off the Pacific Coast of Tohoku Earthquake”. Soils Found. 2012, 52, 842–855. [Google Scholar] [CrossRef] [Green Version]
  44. Tsuruta, T.; Harada, H.; Misonou, T.; Matsuoka, T.; Hodotsuka, Y. Horizontal and vertical distributions of 137 Cs in seabed sediments around the river mouth near Fukushima Daiichi Nuclear Power Plant. J. Oceanogr. 2017, 73, 547–558. [Google Scholar] [CrossRef] [Green Version]
  45. Richards, J.; Jia, X. Remote Sensing Digital Image Analysis-Hardback; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  46. Wang, L.; Silván-Cárdenas, J.L.; Sousa, W.P. Neural network classification of mangrove species from multi-seasonal Ikonos imagery. Photogramm. Eng. Remote Sens. 2008, 74, 921–927. [Google Scholar] [CrossRef]
  47. Gomez, C.; Wulder, M.A.; Dawson, A.G.; Ritchie, W.; Green, D.R. Shoreline change and coastal vulnerability characterization with Landsat imagery: A case study in the Outer Hebrides, Scotland. Scott. Geogr. J. 2014, 130, 279–299. [Google Scholar] [CrossRef]
  48. Dingerson, L.M. Predicting Future Shoreline Condition Based on Land Use Trends, Logistic Regression, and Fuzzy Logic. Master’s Thesis, College of William and Mary, Williamsburg, VA, USA, 2005. [Google Scholar]
  49. Kench, P.; Nichol, S.; Smithers, S.; McLean, R.; Brander, R. Tsunami as agents of geomorphic change in mid-ocean reef islands. Geomorphology 2008, 95, 361–383. [Google Scholar] [CrossRef]
  50. Kumar, P.; Gupta, D.K.; Mishra, V.N.; Prasad, R. Comparison of support vector machine, artificial neural network, and spectral angle mapper algorithms for crop classification using LISS IV data. Int. J. Remote Sens. 2015, 36, 1604–1617. [Google Scholar] [CrossRef]
  51. Nitze, I.; Schulthess, U.; Asche, H. Comparison of machine learning algorithms random forest, artificial neural network and support vector machine to maximum likelihood for supervised crop type classification. In Proceedings of the 4th International Conference on GEographic Object Based Image Analysis, Rio de Janeiro, Brazil, 7–9 May 2012; p. 35. [Google Scholar]
Figure 1. Regional (a) and local (b) study area in Japan. The specific areas that were covered by the satellite images of this study are indicated by a red box. The Fukushima Daiichi Nuclear Power Plant is marked by the nuclear symbol (south of the area).
Figure 2. The southern and northern areas shown as two red boxes in Figure 1: (a) WorldView-2 images of the southern area acquired before (8 November 2010) and after the tsunami (10 February 2012), with a visual focus on the Ukedo harbor. (b) WorldView-2 images of the northern area acquired before (4 August 2010) and after the tsunami (10 April 2012), with a visual focus on the Yuriage harbor.
Figure 3. Mean spectral profile of the 3 classes defined in this study: water (blue), land (orange) and foam (grey) and their standard deviation.
Figure 4. Flowchart of the classification methodology. ROI stands for “Region Of Interest”.
Figure 5. (a) Reference shoreline on the Ukedo harbor and (b) reference land/ocean map on 8 November 2010.
Figure 6. First line (ad): Support Vector Machine (SVM) classification on the southern area on 8 November 2010 when 3 classes (land, foam and water) were used (a) and when 2 classes (land and water) were used (c). (b,d): land/ocean maps obtained after merging water and foam pixels through the segmentation process. Second line (eh): SVM classification on the northern area on 10 August when 3 classes (land, foam and water) were used (e) and when 2 classes (land and water) were used (g). (f,h): land/ocean maps obtained after merging water and foam pixels through the segmentation process.
Figure 7. (Top panels) Pixel classification retrieved by the 4 methods described in Section 2 (Euclidean Distance (ED), Spectral Angle Mapper (SAM), Maximum Likelihood (ML), Neural Network (NN)) on 8 November 2010 for the southern area (Figure 1); 3 classes were used: land (yellow), foam (light blue) and water (blue). (Bottom panels) Land/ocean maps obtained after merging foam and water pixels using a segmentation procedure.
Figure 8. (Top panels) Pixel classification retrieved by the 4 methods described in Section 2 (ED, SAM, ML, NN) on 10 August 2010 for the northern area (Figure 1); 3 classes were used: land (yellow), foam (light blue) and water (blue). (Bottom panels) Land/ocean maps obtained after merging foam and water pixels using a segmentation procedure.
Figure 9. (Left) Relative false negative (blue) and positive (orange) errors (in %) in the retrieved pixel classification for the southern area for the dates before and after the tsunami. (Right) Relative false negative (blue) and positive (orange) errors (in %) in the retrieved pixels classification for the northern area for the dates before and after the tsunami.
Table 1. Relative false negative and false positive errors (in %) for the four WorldView-2 images, for the classification with 2 classes and with 3 classes.

Site            Date of Acquisition of WV-2 Images   False Positive Error (%)   False Negative Error (%)
                                                     2 Classes / 3 Classes      2 Classes / 3 Classes
Southern site   8 Nov. 2010                          0.88 / 0.83                1.54 / 0.89
Southern site   10 Feb. 2012                         0.69 / 0.68                0.30 / 0.19
Northern site   4 Aug. 2010                          0.92 / 0.59                1.67 / 0.34
Northern site   10 Apr. 2012                         0.86 / 0.78                0.78 / 0.22
Table 2. Erosion and accretion per km of coast estimated by the SVM classification for the two sites, together with the Estimated Surface Relative Error (ESRE). SVM2 stands for the SVM classification with 2 classes and SVM3 for the SVM classification with 3 classes; the reference values of erosion and accretion are also given for each site.

Southern site (25 km of coast)
                   Erosion (m2/km of coast)   Accretion (m2/km of coast)   ESRE on Erosion   ESRE on Accretion
Reference values   16.5                       6.9
SVM2               48.3                       1.10                         192%              −84%
SVM3               19.7                       8.6                          19%               24%

Northern site (19 km of coast)
                   Erosion (m2/km of coast)   Accretion (m2/km of coast)   ESRE on Erosion   ESRE on Accretion
Reference values   29.1                       27.3
SVM2               58.2                       69.1                         100%              153%
SVM3               28.6                       30.7                         −2%               13%
Table 3. Erosion and accretion per km of coast obtained from the reference and retrieved by the 5 methods for the two sites, together with the Estimated Surface Relative Error (ESRE).

Southern site (25 km of coast)
                   Erosion (m2/km)   Accretion (m2/km)   ESRE on Erosion   ESRE on Accretion
Reference values   16.5              6.9
ED                 104.1             8.5                 529%              22%
SAM                9.4               64.7                −43%              834%
ML                 1119.5            35.8                6671%             416%
SVM                19.7              8.6                 19%               24%
NN                 23.9              4.5                 45%               −35%

Northern site (19 km of coast)
                   Erosion (m2/km)   Accretion (m2/km)   ESRE on Erosion   ESRE on Accretion
Reference values   29.1              27.3
ED                 25.1              62.4                −14%              129%
SAM                39.0              22.7                34%               −17%
ML                 64.7              1644.5              123%              5926%
SVM                28.6              30.7                −2%               13%
NN                 41.7              31.8                43%               16%

