
Multi-Source Data Fusion Based on Ensemble Learning for Rapid Building Damage Mapping during the 2018 Sulawesi Earthquake and Tsunami in Palu, Indonesia

Bruno Adriano, Junshi Xia, Gerald Baier, Naoto Yokoya and Shunichi Koshimura
1 Geoinformatics Unit, RIKEN Center for Advanced Intelligence Project, Tokyo 103-0027, Japan
2 International Research Institute of Disaster Science, Tohoku University, Aoba-Ku, Sendai 980-8752, Japan
* Author to whom correspondence should be addressed.
Submission received: 8 February 2019 / Revised: 5 April 2019 / Accepted: 9 April 2019 / Published: 11 April 2019

Abstract

This work presents a detailed analysis of building damage recognition, employing multi-source data fusion and ensemble learning algorithms for rapid damage mapping tasks. A damage classification framework is introduced and tested to categorize the building damage following the 2018 Sulawesi earthquake and tsunami. Three robust ensemble learning classifiers were investigated for recognizing building damage from Synthetic Aperture Radar (SAR) and optical remote sensing datasets and their derived features. The contribution of each feature dataset was also explored, considering different combinations of sensors as well as their temporal information. SAR scenes acquired by the ALOS-2 PALSAR-2 and Sentinel-1 sensors were used. The optical Sentinel-2 and PlanetScope sensors were also included in this study. A non-local filter was applied in the preprocessing phase to enhance the SAR features. Our results demonstrate that the canonical correlation forests classifier performs better than the other classifiers. In the data fusion analysis, Digital Elevation Model (DEM)- and SAR-derived features contributed the most to the overall damage classification. The proposed mapping framework successfully classifies four levels of building damage (overall accuracy >90%, average accuracy >67%). It learns the damage patterns from a limited amount of human-interpreted building damage annotation and extends this information to map a larger affected area. The entire process, including the pre- and post-processing phases, was completed within about three hours of acquiring all the raw datasets.

1. Introduction

On 28 September 2018, a massive earthquake (Mw 7.5) struck the Sulawesi region of Indonesia. The epicenter was located approximately 80 km north of Palu city (Figure 1). The subsequent tsunami, with a water height of up to 8 m [1], inundated and destroyed several houses along the coast of Palu Bay. The ground shaking generated soil liquefaction in some areas, causing a large number of casualties and destroying many houses. As of late October 2018, 2081 casualties had been reported. The most affected urban area was that surrounding Palu Bay, where over 68,451 damaged houses were reported [2].
Soon after the event, several international agencies and research institutes started rapid mapping efforts to grasp the overall damage situation and provide crucial information for rescuers, thus reducing the number of casualties. For instance, the Copernicus Programme published an initial building damage mapping estimated through visual interpretation of very high-resolution optical imagery [3]. The Geoinformatics Unit, RIKEN Center for Advanced Intelligence Project (AIP), also conducted a preliminary damage mapping using advanced machine learning technologies for building damage recognition from multi-sensor and multi-temporal remote sensing datasets [4]. These results were published online soon after the event. These mapping efforts based on earth observation technologies emphasize the key role of remote sensing imagery in disaster management, especially in the case of rapid damage assessment [5,6,7].
Techniques for building damage assessment using remote sensing data are traditionally based on change detection approaches that compute relevant features from a pair of multi-temporal images collected before and after a disaster. These methods analyze the relationship between texture, spatial, or intensity changes and the degree of damage observed after the event. Damage mapping is generally performed by setting appropriate thresholds, often chosen based on an expert’s experience or on already known ground truth information. These approaches show acceptable success for damage assessment [8,9,10]. However, their transferability is generally restricted due to their site-specific thresholds and the need for an appropriate set of pre- and post-event imagery. Considering that a suitable pair of pre- and post-event data might not be available, methodologies using only post-event information have been proposed. These frameworks explore several features derived from SAR and optical imagery using machine learning classifiers to recognize damaged structures. For instance, Shi et al. [11] applied a random forest classifier to 191 features of polarimetric, texture, and interferometric information derived from post-event very high-resolution (VHR) airborne SAR data. Their findings suggest that texture information performs better for classifying collapsed buildings. Similar works use high-resolution SAR data, such as ALOS-2 PALSAR-2 in the case of the 2015 Nepal earthquake and TerraSAR-X datasets for the 2008 Wenchuan earthquake, to derive geometric and texture features, and evaluate several classic machine learning algorithms to classify damaged buildings from post-event remote sensing data [12,13]. On the other hand, frameworks that integrate pre-event vector information and post-event VHR remote sensing data together with advanced machine learning technologies have been proposed [14,15]. Furthermore, evaluations of several algorithms for extracting building damage from earth observation data are presented in [16,17].
The conventional method for rapid building damage mapping is to analyze or visually interpret high-resolution optical images to determine the degree of damage in the affected areas. Although this methodology can provide reliable information on severely affected structures, such as collapsed buildings in the case of earthquakes and washed-away buildings in the case of tsunami events [18,19], it is still challenging to distinguish other levels of structural damage, since satellite optical imagery only observes an aerial view of the affected area. In addition, methodologies relying on optical imagery are always limited by the availability of cloud-free images [20,21]. Synthetic aperture radar (SAR) data have an advantage over optical imagery, and also complement it, due to their almost all-weather observation and side-view acquisition capabilities [22]. Layover scattering segments in SAR intensity images provide information about side-wall conditions that can be linked to the damage level in disaster scenarios [23]. Furthermore, the different back-scattering mechanisms from multi-polarization SAR data provide information essential to characterizing building damage [7,24,25]. Integration of both optical and SAR data has also shown excellent results by gathering geometric building properties from pre-event optical images and analyzing them using post-event high-resolution SAR data [26]. These techniques, however, require prior knowledge about the structures for their validation, and are thus primarily applicable when a post-event field survey has been conducted in the affected areas.
The advances in machine learning classifiers, together with satellite remote sensing data, have recently brought much attention to their applicability to damage recognition [27,28]. For instance, using a supervised classification approach, Wieland et al. [29] evaluated the performance of the Support Vector Machine (SVM) for detecting damaged buildings from high-resolution multi-temporal TerraSAR-X intensity images in the case of the 2011 Tohoku tsunami. They found that, with appropriate SVM parameter tuning and feature selection, the SVM classifier can categorize some levels of building damage. A follow-up study, using the same imagery and ground truth dataset, demonstrated that the SVM could adequately distinguish three levels of building damage from high-resolution X-band SAR data [30]. Furthermore, with the same dataset, a semi-unsupervised approach utilized the known hazard distribution of the target area to build a training dataset and a logistic regression model [31]. On the other hand, the authors of [17,32] evaluated several deep-neural-network frameworks using satellite optical imagery from the 2010 Haiti earthquake and the 2011 Tohoku tsunami, obtaining generally good overall accuracies (>60%) for detecting collapsed buildings, and demonstrating the potential of combining machine learning technologies and remote sensing information for future scenarios. However, it is important to note that none of the methodologies mentioned above has been tested on recent events for damage mapping, which indicates that some research challenges remain:
  • The appropriate selection of remote sensing data, as well as of the derived features that are to be fed into a machine learning classifier. Optical and SAR imagery have their own advantages with respect to damage recognition tasks. However, the question of which data contribute more to the classification remains open. For instance, in the case of tsunami-induced damage, where the incoming waves may affect only the building’s side-walls, SAR features are suitable for recognizing such damage patterns.
  • Most of the previous methodologies are based on supervised or semi-unsupervised learning algorithms that require a large number of high-quality training samples. This aspect limits their applicability for responding to future disasters, considering that such labeled data are not available soon after the disaster and are generally only collected several days after the event.
  • Setting the parameters of machine learning classifiers. Several algorithms have proven to be robust for categorizing several degrees of damage across different disasters [14,15,16,17]. Nonetheless, previous works tuned parameters that work properly only for their specific problem settings. These conditions narrow the potential for their implementation in future disasters. Thus, with respect to applicability for rapid damage mapping, there are no adequate guidelines on which algorithm performs best.
In this work, we evaluate the performance of three robust ensemble-learning-based classifiers that have shown great efficiency for image classification tasks in previous work, for recognizing building damage due to earthquakes and tsunamis. The main objective is to address the first and third remaining challenges in damage recognition through the fusion of remote sensing and machine learning technologies. To this end, we consider different sets of temporal remote sensing information and evaluate their ability for damage recognition. Moreover, an analysis of combinations of multi-sensor (SAR and optical imagery) datasets is also presented. We use reference data that are freely available for the training and testing steps. We also introduce a damage classification framework based on a systematic pre- and post-processing chain. We conduct our analysis using the dataset acquired in response to the 2018 Sulawesi earthquake and tsunami. This work is a follow-up to the rapid building damage mapping published a few days after the disaster occurred [4].

2. Materials

Multiple Earth observation datasets were available in the case of the 2018 Sulawesi earthquake and tsunami (Figure 1). Most of them are freely accessible, such as Sentinel-1, Sentinel-2, OpenStreetMap, and the SRTM DEM. The Sentinel Asia Program provided the ALOS-2 PALSAR-2 (Advanced Land Observing Satellite 2, Phased Array type L-band Synthetic Aperture Radar 2) data. The PlanetScope imagery was available through the disaster data program of Planet Labs, Inc. (San Francisco, CA, USA).

2.1. ALOS-2 PALSAR-2

The L-band ALOS-2 PALSAR-2 SAR satellite is administered by the Japan Aerospace Exploration Agency (JAXA). Following the event, JAXA acquired several SAR scenes covering the affected area. Here, we use two sets of post-event high-resolution SAR data captured on 1 and 3 October. Additionally, two pre-event acquisitions, from 2 May and 8 August, were available (Table 1). These SAR scenes have the same acquisition parameters as the 3 October data. All data were acquired in StripMap observation mode (SM2) with a ground sampling distance (GSD) of approximately 5 m after preprocessing. The single post-event SAR image (1 October) was provided as a Level 1.5 product, a SAR amplitude image in GeoTIFF format. The two pre-event SAR datasets and the post-event dataset of 3 October were provided as Level 1.1 products, Single Look Complex (SLC) images with amplitude and phase information preserved as two-channel complex-number data (https://www.eorc.jaxa.jp/ALOS-2/en/doc/format.htm).

2.2. Sentinel-1

As part of the European Union’s Earth observation programme, Copernicus, two C-band SAR satellites were launched in 2014 and 2016. These satellites continuously provide new medium-resolution acquisitions of the entire globe. The default acquisition mode over land is the interferometric wide (IW) swath mode, with a resolution of 5 m by 20 m, whereas maritime regions are acquired in the extra wide (EW) swath mode, with a resolution of 20 m by 40 m. In general, new data are acquired every six days over Europe and every twelve days over the rest of the Earth. Due to storage limitations on the satellites and limited downlink capacity, the twelve-day interval is not always kept in practice, as is evident from the latest pre-event acquisition, which occurred on 7 June 2018, almost three months before the disaster. Table 1 lists all Sentinel-1 acquisitions used in this study. All data are dual-polarized interferometric wide swath acquisitions, processed to single look complex images. This way, in addition to a simple amplitude-based analysis, the interferometric coherence of two acquisitions can be computed and studied. In total, four acquisitions were used; the two most recent prior to the event were acquired on 26 May and 7 June. In reaction to the disaster, Sentinel-1 acquisitions over this area resumed on 5 and 17 October.

2.3. Sentinel-2

Sentinel-2 is a satellite multispectral Earth observation mission operated by the European Space Agency (ESA) as part of the EU Copernicus Programme. The Sentinel-2 imagery consists of 13 bands in the visible, near-infrared, and shortwave-infrared (VNIR-SWIR) range with a field of view of 290 km and multiple ground sampling distances (GSDs) of 10 m, 20 m, and 60 m. A revisit cycle of five days is achieved by a constellation of two twin satellites. We use pre-event and post-event Sentinel-2 datasets acquired on 17 September and 2 October, respectively. The blue, green, red, and near-infrared bands, which have a 10 m GSD, were used in our analysis to keep the spatial resolution consistent with the other datasets.

2.4. PlanetScope

PlanetScope is a constellation of over 130 small satellites operated by Planet Labs. The PlanetScope imagery consists of four bands (i.e., blue, green, red, and near-infrared) at a GSD of 3 m, characterized by a daily revisit cycle. We use post-event PlanetScope imagery acquired on 1 October. The dataset was provided through the disaster data program (https://www.planet.com/disasterdata/). The image was resampled to 10 m for consistency with the other data sources.
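The resampling step can be carried out with standard open-source tooling. The following is a minimal sketch using rasterio, assuming the PlanetScope scene is available as a local GeoTIFF; the file names are hypothetical, and average resampling is one reasonable choice for aggregating 3 m pixels to a 10 m grid.

```python
import rasterio
from rasterio.enums import Resampling

TARGET_GSD = 10.0  # metres, to match the Sentinel-2 grid

with rasterio.open("planetscope_20181001.tif") as src:  # hypothetical file name
    scale = src.res[0] / TARGET_GSD  # e.g., 3 m / 10 m = 0.3
    out_height, out_width = int(src.height * scale), int(src.width * scale)
    # Average resampling aggregates the finer PlanetScope pixels into 10 m cells
    data = src.read(
        out_shape=(src.count, out_height, out_width),
        resampling=Resampling.average,
    )
    transform = src.transform * src.transform.scale(
        src.width / out_width, src.height / out_height
    )
    profile = src.profile
    profile.update(height=out_height, width=out_width, transform=transform)

with rasterio.open("planetscope_10m.tif", "w", **profile) as dst:
    dst.write(data)
```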

2.5. The Shuttle Radar Topography Mission (SRTM)

During the SRTM [33,34], two antennas, one mounted directly on the Space Shuttle and the other on a 60 m long mast attached to it, acquired bistatic X- and C-band SAR interferograms. The acquisition took place over an 11-day window in the year 2000, producing an almost globally available (between 58°S and 60°N latitude) digital elevation model (DEM). The resulting DEM has 16 m absolute and 6 m relative vertical accuracy and was initially made publicly available with a 90 m pixel spacing. In 2015, the data at the full 30 m resolution were released to the public.

2.6. OpenStreetMap

The OpenStreetMap (OSM) initiative creates and provides free map information of the world, collected through the collaborative contribution of volunteers. In this work, to assess the building damage, we constructed a mask of the built-up area using the OSM buildings layer [35]. This layer comprises all human-made structures, such as houses, schools, and commercial buildings. We implemented an application to automatically download the OSM layer and convert it to rasterized GeoTIFF images. For our initial rapid damage mapping, we utilized the OSM data created until 29 September, one day after the earthquake. The OSM database, however, was continually updated for the affected areas. For instance, on 29 September, there were about 250,878 buildings in the area covered by the Sentinel-2 image, and by 3 October this number had increased to 261,490. One week after the earthquake, there were about 267,735 buildings. Figure 2 shows the building layer (west of the Palu river) in the days following the event. For our damage mapping analysis, we use the built-up area layer from 15 October (270,258 buildings). The building polygons were rasterized into 308,649 pixels using the Sentinel-2 images as a reference. It is important to mention that, in the absence of a building layer from OSM, there are techniques for extracting built-up areas from earth observation data [36,37]. This process, however, is beyond the scope of this work.
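The conversion from OSM building polygons to a raster mask on the Sentinel-2 grid can be sketched as follows, assuming the footprints have already been exported to a GeoJSON file; the file names are hypothetical, and this is an illustration rather than the authors’ exact implementation.

```python
import geopandas as gpd
import rasterio
from rasterio import features

# Building footprints exported from OSM (e.g., via the Overpass API)
buildings = gpd.read_file("osm_buildings_20181015.geojson")  # hypothetical file

# Use a Sentinel-2 scene as the 10 m reference grid
with rasterio.open("sentinel2_20181002.tif") as ref:
    buildings = buildings.to_crs(ref.crs)
    mask = features.rasterize(
        ((geom, 1) for geom in buildings.geometry),  # burn value 1 = built-up
        out_shape=(ref.height, ref.width),
        transform=ref.transform,
        fill=0,
        dtype="uint8",
    )
    profile = ref.profile

profile.update(count=1, dtype="uint8", nodata=0)
with rasterio.open("builtup_mask.tif", "w", **profile) as dst:
    dst.write(mask, 1)
```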

2.7. Copernicus Emergency Management Services

Copernicus is the European Union’s (EU) Earth observation programme, managed by the European Commission with contributions from the European Space Agency (ESA), the EU Member States, and EU agencies. The Emergency Management Service (EMS) is one of the Copernicus services. The objective of the EMS is to provide rapid mapping, risk, and recovery services for natural disasters (e.g., floods, earthquakes, and tsunamis).
In this work, we utilize the preliminary report of building damage labels (Destroyed, Damaged, Possibly damaged, and No damage) published on 2 October [3]. Table 2 lists the number of buildings from each damage class used in this study. The original building damage inventory, provided in point vector format, was rasterized to a 10 m GSD using the Sentinel-2 images as a reference. This raster image serves as the reference for training and testing in the pixel-based classification analysis.

3. Methods

We propose a building damage recognition framework using multi-sensor, multi-temporal Earth observation data. Our workflow is divided into three main phases (Figure 3). In the preprocessing step, the raw remote sensing data are calibrated and converted to geocoded images. In the classification phase, we perform pixel-based damage recognition with three ensemble classifiers using the input remote sensing data and derived features. In the last step, postprocessing, the building damage map is produced from the outputs of the previous phase.

3.1. Preprocessing and Feature Extraction

3.1.1. SAR Datasets

The processing procedure for the Sentinel-1 and ALOS-2 PALSAR-2 datasets was almost identical and was performed using ESA’s SNAP software [38]. Slave images were coregistered to a single master image, with the additional steps of debursting and merging of subswaths for Sentinel-1. Each dual-pol acquisition was then despeckled individually using NL-SAR [39]. In addition, pairs of subsequent acquisitions were used to compute coherence estimates, again using NL-SAR for the estimation. The benefit of nonlocal methods is twofold: despeckling improves the robustness for subsequent classification, and they provide a less biased coherence estimate [40] compared to estimators with a smaller window. Finally, all products were geocoded using the 3 s SRTM DEM.
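NL-SAR itself is applied within the SNAP-based chain; for intuition only, the sketch below shows a plain boxcar coherence estimator on two coregistered SLC images. This is the quantity that NL-SAR estimates with adaptive nonlocal weights instead of a fixed window, the fixed window being the source of the estimation bias mentioned above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def boxcar_coherence(slc1: np.ndarray, slc2: np.ndarray, win: int = 5) -> np.ndarray:
    """Sample coherence of two coregistered complex SLC images.

    A fixed boxcar window is used here for simplicity; NL-SAR replaces it
    with adaptive nonlocal weights, yielding a less biased estimate.
    """
    cross = slc1 * np.conj(slc2)
    # uniform_filter operates on real arrays, so filter real/imaginary parts
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(slc1) ** 2, win) *
                  uniform_filter(np.abs(slc2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)
```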

3.1.2. Optical Datasets

We performed atmospheric and terrain correction on the two Sentinel-2 Level 1C datasets using Sen2Cor, a freely available processor for generating Sentinel-2 Level 2A products (http://step.esa.int/main/third-party-plugins-2/sen2cor/). Image-based atmospheric normalization [41] was applied to the PlanetScope image, using the pre-event Sentinel-2 Level 2A data as the reference, to reduce atmospheric effects in the PlanetScope image and align the manifolds of the multi-sensor datasets. The spectral angle distance (SAD) was calculated over the study area using the pre-event Sentinel-2 and the post-event PlanetScope images.
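The SAD is a simple per-pixel change feature. A minimal sketch follows, assuming the two co-registered four-band (blue, green, red, NIR) images are available as NumPy arrays of shape (bands, rows, cols):

```python
import numpy as np

def spectral_angle_distance(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Per-pixel spectral angle (radians) between two (bands, rows, cols) stacks."""
    dot = np.sum(pre.astype(float) * post.astype(float), axis=0)
    norm = np.linalg.norm(pre, axis=0) * np.linalg.norm(post, axis=0)
    cos = np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0)
    return np.arccos(cos)  # larger angles indicate stronger spectral change
```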

3.2. Classification

For building damage mapping, the following issues should be considered before selecting a machine learning classifier:
  • low computational complexity;
  • few tuning parameters;
  • high classification capability.
Among machine learning classifiers, decision forests were considered the best choice based on the following advantages: fast out-of-sample prediction, minimal parameter tuning, native handling of missing data, and the ability to rank feature importance. In particular, three decision forest classifiers were adopted: random forests (RFs) [42], rotation forests (RoFs) [43], and canonical correlation forests (CCFs) [44].
RFs, RoFs, and CCFs are ensembles of decision trees (DTs), where each tree contributes a vote and the most frequent class gives the final label. In RFs, each DT is constructed on a bootstrapped training set, and randomly selected features are used to split each leaf. When bagging is performed during training, about one-third of the training samples, the so-called Out-of-Bag (OOB) data, are left out and can be used to measure the importance of features [42]. Over all trees $t$ in the forest, the importance ($VI$) of an input feature ($X_i$) is calculated using Equation (1):

$$VI(X_i) = \frac{1}{n_{tree}} \sum_{t=1}^{n_{tree}} \left( \mathrm{err}OOB_t^{i} - \mathrm{err}OOB_t \right), \qquad (1)$$

where $\mathrm{err}OOB_t^{i}$ is the error computed on $OOB_t$ after randomly permuting the values of $X_i$, and $\mathrm{err}OOB_t$ is the error computed on the original $OOB_t$ samples.
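Equation (1) is the classic permutation-based importance measure of random forests. A closely related measure is available in scikit-learn, which permutes each feature on a held-out set rather than on the per-tree OOB samples; the sketch below uses synthetic data as a stand-in for the 37-feature damage dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the stacked 37-feature, four-class damage dataset
X, y = make_classification(n_samples=2000, n_features=37, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7,
                                          stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=40, n_jobs=-1, random_state=0)
rf.fit(X_tr, y_tr)

# Accuracy drop after shuffling one feature, analogous to
# errOOB_t^i - errOOB_t in Equation (1)
result = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]  # most important first
```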
RoFs first randomly divide the features into several subsets and then apply a data transformation (e.g., principal component analysis, PCA) to each subset to rotate the feature axes and create diverse DTs [43]. The combination of random selection and data transformation increases the diversity and accuracy of the individual DTs, which is beneficial to the ensemble.
CCFs perform a supervised canonical correlation analysis (CCA) on the features and labels of the bootstrapped training set to find projections, which are used to create the new features of each DT [44]. Finally, the results produced by the individual DTs are combined to generate the final result. The diversity and accuracy of the individual DTs are promoted by the CCA and bagging techniques. For all three decision forest classifiers, two parameters need to be set empirically: the number of DTs ($n_{tree}$) and the number of features in a subset ($m_{try}$).
These ensemble classifiers have been proven effective with various applications using hyperspectral [45], high-spatial resolution [46], LiDAR [47], and multi-source datasets [48]. However, far too little attention has been paid to building damage mapping using multi-source datasets and decision forests (especially for RoFs and CCFs). Thus, we examine the performance of three decision forest classifiers in the application of building damage mapping.
The raster reference data were divided into training and testing sets. For each damage class, 70% of the reference data were randomly selected for the training phase, and the remaining 30% were used for the testing phase. The overall accuracies (OAs), average accuracies (AAs), producer’s accuracies (PAs), and user’s accuracies (UAs) were used to evaluate classification performance. The OA is the percentage of correctly classified points. The AA is the mean of the individual class accuracies. The PAs and UAs capture the omission and commission errors of the different classes, respectively. The accuracies averaged over 20 random trials are reported to avoid a biased evaluation.
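Of the three classifiers, only RFs ship with scikit-learn (RoFs and CCFs require separate implementations), so the following sketch illustrates the evaluation protocol with RFs only: a stratified 70/30 split, 40 trees, $m_{try}$ equal to the square root of the number of features, and OA/AA/PA/UA averaged over random trials.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

def evaluate(X, y, n_trials=20, n_trees=40):
    """Stratified 70/30 evaluation averaged over random trials."""
    oas, aas, pas, uas = [], [], [], []
    for seed in range(n_trials):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=0.7, stratify=y, random_state=seed)
        clf = RandomForestClassifier(n_estimators=n_trees, max_features="sqrt",
                                     n_jobs=-1, random_state=seed).fit(X_tr, y_tr)
        cm = confusion_matrix(y_te, clf.predict(X_te))  # rows: true, cols: predicted
        pa = np.diag(cm) / cm.sum(axis=1)  # producer's accuracy (1 - omission)
        ua = np.diag(cm) / cm.sum(axis=0)  # user's accuracy (1 - commission)
        pas.append(pa)
        uas.append(ua)
        oas.append(np.diag(cm).sum() / cm.sum())  # overall accuracy
        aas.append(pa.mean())                     # average accuracy
    return (np.mean(oas), np.std(oas), np.mean(aas), np.std(aas),
            np.mean(pas, axis=0), np.mean(uas, axis=0))
```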

3.3. Postprocessing

In this step, we transferred the pixel-based classification to the building footprint dataset. Using the OSM building polygons, we extracted the pixels inside each building footprint; a class label was then assigned to each footprint by majority voting. The point-based building damage inventory, downloaded from the Copernicus EMS, was also converted to a building footprint dataset.
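The per-footprint majority vote can be sketched with the rasterstats package, assuming the pixel-based classification has been written to a GeoTIFF of integer class labels (the file names are hypothetical):

```python
import geopandas as gpd
from rasterstats import zonal_stats

buildings = gpd.read_file("osm_buildings.geojson")  # hypothetical footprints file

# "majority" returns the most frequent class-map value inside each polygon
stats = zonal_stats(buildings, "damage_classmap.tif", stats=["majority"], nodata=0)
buildings["damage_class"] = [s["majority"] for s in stats]
buildings.to_file("building_damage_map.geojson", driver="GeoJSON")
```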

4. Experimental Results

Since different source datasets contribute to the classification result in different ways, we present several feature combination schemes to better understand the benefit of the multi-source datasets and to see what level of classification performance can be achieved under different scenarios (see Table 3).
Post-event datasets and a combination of pre- and post-event datasets were used in Scenarios 1–4 and Scenarios 5–8, respectively. From Scenario 1 to Scenario 4, we adopted SAR-only, optical-only, SAR+optical, and SAR+optical+DEM features obtained from post-event data, respectively. From Scenario 5 to Scenario 8, the SAR, optical, SAR+optical, and SAR+optical+DEM features of the pre- and post-event data were combined, respectively. More details can be found in Table 3. It should be noted that, in the scenarios using SAR and optical datasets, we also included the coherence information computed from the pre- and post-event PALSAR-2 and Sentinel-1 datasets and the SAD between the post-event PlanetScope and pre-event Sentinel-2 images, respectively. The number of DTs ($n_{tree}$) and the number of features in a subset ($m_{try}$) are the key parameters of the three classifiers. These two parameters were thoroughly investigated in [49]: a moderate ensemble size achieves very high accuracies, and a larger ensemble does not improve the classification results but increases the computation time. Moreover, the three ensemble classifiers are not sensitive to $m_{try}$ in this work. Thus, $n_{tree}$ was empirically set to 40, and $m_{try}$ to the square root of the number of input features.

4.1. Classification Using the Post-Event Dataset

First, we present the classification results using only post-event datasets. Figure 4 shows the OAs and AAs achieved by the different feature combinations. Using only SAR-derived (Scenario 1) or optical-derived (Scenario 2) features led to lower OAs and AAs. The combination of SAR and optical features (Scenario 3) increased the OAs and AAs above those of Scenarios 1 and 2. The inclusion of the DEM (Scenario 4) significantly enhanced the classification performance.
Table 4 lists the class-specific accuracies using the post-event datasets. It is apparent that using all the features derived from optical, SAR, and DEM data (Scenario 4) generated better or comparable PAs and UAs compared to Scenarios 1–3. The class No damage has more than ten times as many training samples as the other classes, leading to very high accuracies (>99%) in all scenarios. The SAR (Scenario 1) and optical (Scenario 2) features produced better results for the classes Destroyed and Possibly damaged than for the other classes. By integrating all the features (Scenario 4), we obtain comparable results for the class Damaged and much better results for the classes Destroyed (about +20%) and Possibly damaged (about +22%).
As for the comparison of the three ensemble classifiers, CCFs outperformed RoFs and RFs in Scenarios 1–4.

4.2. Classification Using Post- and Pre-Event Datasets

The potential of features derived from pre- and post-event imagery for classifying damage levels was explored. We defined Scenarios 5–8 to evaluate the impact of combining features derived from multi-temporal data. As can be seen from Figure 4, with the help of the pre-event datasets, Scenarios 5–8 provided better results than Scenarios 1–4. In particular, when the multi-temporal SAR and optical features and the DEM information (Scenario 8) are considered, the RFs, RoFs, and CCFs classifiers obtained OAs of 92.12%, 92.39%, and 91.83%, respectively. The corresponding AAs are 70.12%, 71.97%, and 67.92%. These results are significantly better than those obtained in Scenario 4.
Table 5 shows the averaged PAs and UAs achieved by the combination of pre- and post-event datasets. When the features derived from the pre-event data were introduced, the PAs and UAs improved significantly for all damage types, except for the well-classified class No damage and for the class Damaged when using CCFs. Concerning the PAs achieved by RoFs, the class Possibly damaged had the most noticeable improvement (about +23%) in Scenario 8 compared with Scenario 4. The most significant observation is that using all the features derived from the pre- and post-event datasets offered the maximum discriminative ability to separate the damage types, leading to the highest PAs and UAs.
CCFs yielded the best classification results in Scenarios 5–7, and RoFs achieved the highest accuracies in Scenario 8. These results indicate that the CCF algorithm generally performs better than the other models for damage classification tasks.

4.3. Feature Importance Analysis

To determine the contribution of each feature to the accuracy, the mean decrease in accuracy is calculated during the out-of-bag error computation in the decision forest scheme. Features with higher values of this coefficient are more important for the classification. We investigated the importance of all 37 input features (Figure 5). We conclude that the DEM data play the most crucial role in damage classification. SAR-derived features were found to be slightly more important than optical-derived features. For instance, the multi-temporal coherence information computed from SAR data acquired before (pre-coherence) and after the event (co-coherence) showed the second highest contribution to the classification.
Furthermore, we analyzed the influence of each feature on the performance for the different damage types. The importance of all input features for each class is illustrated in Figure 6. The DEM appears to be the most discriminative feature for all classes. Regarding the class Destroyed (Figure 6a), the SAD and the post-event coherence information also show great importance in addition to the DEM. In the case of the class Possibly damaged, the coherence information computed from pre-event SAR data and the post-event SAR intensity share the second highest position of importance. Considering that optical features, owing to their multispectral characteristics, generally provide more information than SAR features, our results indicate that the NL-SAR processing enhanced the ability of the SAR data to yield features correlated with building damage.

4.4. Building Damage Mapping

Figure 7 shows the comparison between the reference map obtained from the Copernicus EMS [3] and the two most detailed building damage maps, after post-processing, obtained from the CCFs classifier. As mentioned above, the CCF algorithm showed the best performance in most classification tasks across the multi-temporal and multi-source data scenarios. In general, both scenarios (4 and 8) are consistent with the reference map (left panel in Figure 7) in areas along the coastline, where most houses were washed away by the tsunami (class Destroyed). The center panels in Figure 7, however, show some misclassifications, especially for buildings with larger footprints. These errors might arise because only the pixels corresponding to the part of a building destroyed by the incoming tsunami wave were classified as Destroyed; the remaining pixels, the majority of them, were categorized as Damaged. The bottom panels show the color-coded distribution of correctly classified and misclassified buildings. Buildings in the Possibly damaged category are also frequently misclassified. Taking into account that EMS defined this damage class, using very high resolution optical imagery (Pleiades sensor), as small changes to a building’s rooftop, these errors stem from the limitation of adequately capturing small structures with remote sensing data at a 10 m GSD.
On the other hand, both of our best scenarios overestimate the building damage along the Palu river, particularly Scenario 4 (multi-source post-event data). These results are explained by the feature importance analysis, which showed that, among all input features, the DEM contributes most to the classification because of the damage pattern observed in tsunami-affected areas. In such events, structures located closer to the coastline, at low land elevation, generally experience more significant damage than buildings located at higher elevations [50]. However, it is important to note that both scenarios successfully detected the destroyed buildings in the liquefaction areas (top panel in Figure 7). Considering that the land elevation in these areas shows no significant variation, our results demonstrate that the CCF classifier, together with multi-source and multi-sensor datasets, can accurately detect multiple levels of damage caused by different disaster mechanisms.

5. Conclusions

In this paper, a systematic analysis of ensemble learning classifiers for building damage recognition using multi-temporal and multi-sensor data was presented. We utilized multiple features derived from SAR and optical datasets and evaluated three ensemble learning classifiers that have shown excellent performance for image classification in previous work. The remote sensing data comprised four ALOS-2 PALSAR-2 scenes, four Sentinel-1 scenes, two Sentinel-2 scenes, and one PlanetScope image. A quantitative analysis was carried out to determine the best classifier. Furthermore, the combination of multi-temporal and multi-sensor features for damage recognition was analyzed, and a systematic processing chain to create a building damage map was presented. We applied our classification framework in a real-case scenario to categorize the damage observed in Palu, Indonesia, following the 2018 Sulawesi earthquake and tsunami. Our results demonstrated that the CCF classifier outperformed the other ensemble learning models for detecting different levels of structural damage. The results also showed that building damage classification particularly benefits from SAR-derived features with NL-SAR processing, such as coherence and intensity information. Furthermore, our results indicated that damage mapping using only post-event data gives acceptable accuracy, while more reliable damage mapping can be achieved using multi-temporal remote sensing data.
Our proposed framework learns the damage patterns from a limited amount of human-interpreted building damage annotation (Copernicus EMS) and extends this information to map four damage classes over the larger affected area. Future work will involve an ensemble of under- and over-sampling schemes to achieve higher accuracies for the minority classes (Destroyed, Damaged, and Possibly damaged) without compromising the overall accuracy. Finally, a pre-defined database of building damage is required to address the remaining challenge of gathering initial reference data soon after disasters.

Author Contributions

B.A., J.X. and N.Y. were responsible for the overall design of the study. B.A., G.B. and N.Y. performed the data preprocessing. J.X. conducted classification experiments. S.K. supported the revision of the manuscript. All authors drafted and approved the final manuscript.

Funding

This research was funded by the Japan Society for the Promotion of Science (KAKENHI 18K18067, 17H06108, and 19K20309), the Japan Science and Technology Agency (JST)—Japan International Cooperation Agency (JICA), Science and Technology Research Partnership for Sustainable Development (SATREPS JPMJSA1406), and the JST J-Rapid Program.

Acknowledgments

The authors would like to thank JAXA for providing the ALOS-2 PALSAR-2 dataset through the Sentinel Asia Program, the SENTINEL Missions for providing the Sentinel-1 and Sentinel-2 imagery, and Planet Labs, Inc. for providing the PlanetScope imagery through the disaster data program.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Muhari, A.; Imamura, F.; Arikawa, T.; Hakim, A.R.; Afriyanto, B. Solving the Puzzle of the September 2018 Palu, Indonesia, Tsunami Mystery: Clues from the Tsunami Waveform and the Initial Field Survey Data. J. Disaster Res. 2018, 13, 1–3.
2. ASEAN Coordinating Centre for Humanitarian Assistance on Disaster Management. Situation Update No. 15 (Final): 7.4 Earthquake and Tsunami. 2018. Available online: https://ahacentre.org/situation-update/situation-update-no-15-sulawesi-earthquake-26-october-2018/ (accessed on 30 January 2019).
3. Copernicus Emergency Management Service (© European Union), EMSR317. 2018. Available online: https://emergency.copernicus.eu/mapping/list-of-components/EMSR317 (accessed on 10 December 2018).
4. Geoinformatics Unit, RIKEN AIP. Preliminary Damage Mapping Following the M7.5 Earthquake in Indonesia on 28 September 2018. Available online: https://www.geoinformatics2018.com/post/16/ (accessed on 5 October 2018).
5. Sun, W.; Shi, L.; Yang, J.; Li, P. Building Collapse Assessment in Urban Areas Using Texture Information from Postevent SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3792–3808.
6. Masi, A.; Chiauzzi, L.; Santarsiero, G.; Liuzzi, M.; Tramutoli, V. Seismic damage recognition based on field survey and remote sensing: General remarks and examples from the 2016 Central Italy earthquake. Nat. Hazards 2017, 86, 193–195.
7. Ji, Y.; Sumantyo, J.T.S.; Chua, M.Y.; Waqar, M.M. Earthquake/Tsunami Damage Level Mapping of Urban Areas Using Full Polarimetric SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2296–2309.
8. Liu, W.; Yamazaki, F.; Gokon, H.; Koshimura, S. Extraction of Tsunami-Flooded Areas and Damaged Buildings in the 2011 Tohoku-Oki Earthquake from TerraSAR-X Intensity Images. Earthq. Spectra 2013, 29, S183–S200.
9. Gokon, H.; Post, J.; Stein, E.; Martinis, S.; Twele, A.; Muck, M.; Geiss, C.; Koshimura, S.; Matsuoka, M. A Method for Detecting Buildings Destroyed by the 2011 Tohoku Earthquake and Tsunami Using Multitemporal TerraSAR-X Data. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1277–1281.
10. Karimzadeh, S.; Matsuoka, M. Building Damage Characterization for the 2016 Amatrice Earthquake Using Ascending–Descending COSMO-SkyMed Data and Topographic Position Index. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2668–2682.
11. Shi, L.; Sun, W.; Yang, J.; Li, P.; Lu, L. Building Collapse Assessment by the Use of Postearthquake Chinese VHR Airborne SAR. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2021–2025.
12. Bai, Y.; Adriano, B.; Mas, E.; Gokon, H.; Koshimura, S. Object-Based Building Damage Assessment Methodology Using Only Post Event ALOS-2/PALSAR-2 Dual Polarimetric SAR Intensity Images. J. Disaster Res. 2017, 12, 259–271.
13. Gong, L.; Wang, C.; Wu, F.; Zhang, J.; Zhang, H.; Li, Q. Earthquake-Induced Building Damage Detection with Post-Event Sub-Meter VHR TerraSAR-X Staring Spotlight Imagery. Remote Sens. 2016, 8, 887.
14. Janalipour, M.; Mohammadzadeh, A. Building Damage Detection Using Object-Based Image Analysis and ANFIS from High-Resolution Image (Case Study: BAM Earthquake, Iran). IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1937–1945.
15. Janalipour, M.; Mohammadzadeh, A. A Fuzzy-GA Based Decision Making System for Detecting Damaged Buildings from High-Spatial Resolution Optical Images. Remote Sens. 2017, 9, 349.
16. Janalipour, M.; Mohammadzadeh, A. Evaluation of effectiveness of three fuzzy systems and three texture extraction methods for building damage detection from post-event LiDAR data. Int. J. Digit. Earth 2018, 11, 1241–1268.
17. Cooner, A.; Shao, Y.; Campbell, J. Detection of Urban Damage Using Remote Sensing and Machine Learning Algorithms: Revisiting the 2010 Haiti Earthquake. Remote Sens. 2016, 8, 868.
18. Gokon, H.; Koshimura, S. Mapping of Building Damage of the 2011 Tohoku Earthquake Tsunami in Miyagi Prefecture. Coast. Eng. J. 2012, 54, 1250006.
19. Mas, E.; Bricker, J.; Kure, S.; Adriano, B.; Yi, C.; Suppasri, A.; Koshimura, S. Survey and satellite damage interpretation of the 2013 Super Typhoon Haiyan in the Philippines. Nat. Hazards Earth Syst. Sci. 2015, 15, 805–816.
20. Miura, H.; Midorikawa, S.; Matsuoka, M. Building Damage Assessment Using High-Resolution Satellite SAR Images of the 2010 Haiti Earthquake. Earthq. Spectra 2016, 32, 591–610.
21. Freire, S.; Santos, T.; Navarro, A.; Soares, F.; Silva, J.; Afonso, N.; Fonseca, A.; Tenedório, J. Introducing mapping standards in the quality assessment of buildings extracted from very high resolution satellite imagery. ISPRS J. Photogramm. Remote Sens. 2014, 90, 1–9.
22. Plank, S. Rapid Damage Assessment by Means of Multi-Temporal SAR: A Comprehensive Review and Outlook to Sentinel-1. Remote Sens. 2014, 6, 4870–4906.
23. Matsuoka, M.; Nojima, N. Building Damage Estimation by Integration of Seismic Intensity Information and Satellite L-band SAR Imagery. Remote Sens. 2010, 2, 2111–2126.
24. Yamaguchi, Y. Disaster monitoring by fully polarimetric SAR data acquired with ALOS-PALSAR. Proc. IEEE 2012, 100, 2851–2860.
25. Bai, Y.; Adriano, B.; Mas, E.; Koshimura, S. Building Damage Assessment in the 2015 Gorkha, Nepal, Earthquake Using Only Post-Event Dual Polarization Synthetic Aperture Radar Imagery. Earthq. Spectra 2017, 33, S185–S195.
26. Brunner, D.; Lemoine, G.; Bruzzone, L. Earthquake Damage Assessment of Buildings Using VHR Optical and SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2403–2420.
27. Huang, X.; Zhang, L.; Zhu, T. Building Change Detection from Multitemporal High-Resolution Remotely Sensed Images Based on a Morphological Building Index. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 105–115.
28. Bai, Y.; Adriano, B.; Mas, E.; Koshimura, S. Machine learning based building damage mapping from the ALOS-2/PALSAR-2 SAR imagery: Case study of 2016 Kumamoto earthquake. J. Disaster Res. 2017, 12.
29. Wieland, M.; Liu, W.; Yamazaki, F. Learning Change from Synthetic Aperture Radar Images: Performance Evaluation of a Support Vector Machine to Detect Earthquake and Tsunami-Induced Changes. Remote Sens. 2016, 8, 792.
30. Endo, Y.; Adriano, B.; Mas, E.; Koshimura, S. New Insights into Multiclass Damage Classification of Tsunami-Induced Building Damage from SAR Images. Remote Sens. 2018, 10, 2059.
31. Moya, L.; Marval Perez, L.; Mas, E.; Adriano, B.; Koshimura, S.; Yamazaki, F. Novel Unsupervised Classification of Collapsed Buildings Using Satellite Imagery, Hazard Scenarios and Fragility Functions. Remote Sens. 2018, 10, 296.
32. Ji, M.; Liu, L.; Buchroithner, M. Identifying Collapsed Buildings Using Post-Earthquake Satellite Imagery and Convolutional Neural Networks: A Case Study of the 2010 Haiti Earthquake. Remote Sens. 2018, 10, 1689.
33. Bamler, R. The SRTM Mission: A World-Wide 30 m Resolution DEM from SAR Interferometry in 11 Days. In Photogrammetric Week, Proceedings of the 47. Photogrammetrische Woche, Universitaet Stuttgart, Stuttgart, Germany, 20–24 September 1999; Wichmann Verlag: Heidelberg, Germany, 1999; pp. 145–154.
34. Rabus, B.; Eineder, M.; Roth, A.; Bamler, R. The shuttle radar topography mission: A new class of digital elevation models acquired by spaceborne radar. ISPRS J. Photogramm. Remote Sens. 2003, 57, 241–262.
35. OpenStreetMap Contributors. 2017. Available online: https://www.openstreetmap.org (accessed on 10 December 2018).
36. Esch, T.; Thiel, M.; Schenk, A.; Roth, A.; Muller, A.; Dech, S. Delineation of Urban Footprints from TerraSAR-X Data by Analyzing Speckle Characteristics and Intensity Information. IEEE Trans. Geosci. Remote Sens. 2010, 48, 905–916.
37. Esch, T.; Schenk, A.; Ullmann, T.; Thiel, M.; Roth, A.; Dech, S. Characterization of Land Cover Types in TerraSAR-X Images by Combined Analysis of Speckle Statistics and Intensity Information. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1911–1925.
38. European Space Agency. SNAP: ESA Sentinel Application Platform. 2018. Available online: http://step.esa.int/ (accessed on 10 December 2018).
39. Deledalle, C.; Denis, L.; Tupin, F.; Reigber, A.; Jäger, M. NL-SAR: A Unified Nonlocal Framework for Resolution-Preserving (Pol)(In)SAR Denoising. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2021–2038.
40. Schmitt, M.; Baier, G.; Zhu, X.X. Potential of Nonlocally Filtered Pursuit Monostatic TanDEM-X Data for Coastline Detection. ISPRS J. Photogramm. Remote Sens. 2019, 148, 130–141.
41. Yokoya, N.; Zhu, X.X.; Plaza, A. Multisensor coupled spectral unmixing for time-series analysis. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2842–2857.
42. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
43. Rodriguez, J.J.; Kuncheva, L.I.; Alonso, C.J. Rotation forest: A new classifier ensemble method. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1619–1630.
44. Rainforth, T.; Wood, F. Canonical Correlation Forests. arXiv 2015, arXiv:1507.05444.
45. Clark, M.L.; Buck-Diaz, J.; Evens, J. Mapping of forest alliances with simulated multi-seasonal hyperspectral satellite imagery. Remote Sens. Environ. 2018, 210, 490–507.
46. De Castro, A.I.; Torres-Sanchez, J.; Pena, J.M.; Jimenez-Brenes, F.M.; Csillik, O.; Lopez-Granados, F. An Automatic Random Forest-OBIA Algorithm for Early Weed Mapping between and within Crop Rows Using UAV Imagery. Remote Sens. 2018, 10, 285.
47. Fedrigo, M.; Newnham, G.J.; Coops, N.C.; Culvenor, D.S.; Bolton, D.K.; Nitschke, C.R. Predicting temperate forest stand types using only structural profiles from discrete return airborne lidar. ISPRS J. Photogramm. Remote Sens. 2018, 136, 106–119.
48. Yokoya, N.; Ghamisi, P.; Xia, J.; Sukhanov, S.; Heremans, R.; Tankoyeu, I.; Bechtel, B.; Saux, B.L.; Moser, G.; Tuia, D. Open Data for Global Multimodal Land Use Classification: Outcome of the 2017 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1363–1377.
49. Xia, J.; Yokoya, N.; Iwasaki, A. Hyperspectral Image Classification with Canonical Correlation Forests. IEEE Trans. Geosci. Remote Sens. 2017, 55, 421–431.
50. Adriano, B.; Hayashi, S.; Gokon, H.; Mas, E.; Koshimura, S. Understanding the Extreme Tsunami Inundation in Onagawa Town by the 2011 Tohoku Earthquake, Its Effects in Urban Structures and Coastal Facilities. Coast. Eng. J. 2016, 58, 1640013.
Figure 1. Data used in this study. Top left panel shows the location of the coverage of each remote-sensing image and the epicenter. Second vertical panels show the post-event PlanetScope image (top) and the pre-event Sentinel-2 image (bottom). The third and fourth vertical panels show the RGB color-composited SAR data (top: post-event and bottom: pre-event) from the Sentinel-1 and ALOS-2 PALSAR-2 sensors, respectively.
Figure 2. Examples of the OSM building layer in the days after the event. In the reference map, lower-left corner, the background corresponds to the pre-event Sentinel-2 image.
Figure 3. Research workflow for building damage mapping using multi-source and multi-temporal remote sensing data.
Figure 4. (a) OAs and (b) AAs with different scenarios using RFs, RoFs and CCFs.
Figure 5. Importance of the 37 input features provided by different sources.
Figure 6. Feature importance of each damage type by using mean decrease in accuracy.
Figure 7. Comparison of building damage mapping from Scenario 4 (only multi-source post-event dataset) and Scenario 8 (multi-source pre- and post-event dataset) obtained from the CCF classifier. Left panels show the rasterized building damage inventory after post-processing using point-data from the Copernicus project and building polygons from OSM. Bottom panels show color-coded maps of the correctly classified and mis-classified.
Table 1. Description of the remote sensing data used in this study.
| Dataset | Acquisition Date | Sensor | Image Bands |
|---|---|---|---|
| Pre-event | 2018-05-02 | ALOS-2 PALSAR-2 | HH and HV |
| Pre-event | 2018-05-26 | Sentinel-1 | VV and VH |
| Pre-event | 2018-06-07 | Sentinel-1 | VV and VH |
| Pre-event | 2018-08-08 | ALOS-2 PALSAR-2 | HH and HV |
| Pre-event | 2018-09-17 | Sentinel-2 | R, G, B, and NIR |
| Post-event | 2018-10-01 | PlanetScope | R, G, B, and NIR |
| Post-event | 2018-10-01 | ALOS-2 PALSAR-2 | HH and HV |
| Post-event | 2018-10-02 | Sentinel-2 | R, G, B, and NIR |
| Post-event | 2018-10-03 | ALOS-2 PALSAR-2 | HH and HV |
| Post-event | 2018-10-05 | Sentinel-1 | VV and VH |
| Post-event | 2018-10-17 | Sentinel-1 | VV and VH |
Table 2. Number of train and test samples.
| Class | Train | Test |
|---|---|---|
| Destroyed | 2996 | 1284 |
| Damaged | 3147 | 1348 |
| Possibly damaged | 3625 | 1553 |
| No damage | 43,056 | 18,453 |
Table 3. Different combinations of datasets used in this work.
| Scenario | Pre-event S1 | Pre-event S2 | Pre-event ALOS-2 | Post-event S1 | Post-event S2 | Post-event ALOS-2 | Post-event Planet | DEM |
|---|---|---|---|---|---|---|---|---|
| Scenario 1 | | | | ✓ | | ✓ | | |
| Scenario 2 | | | | | ✓ | | ✓ | |
| Scenario 3 | | | | ✓ | ✓ | ✓ | ✓ | |
| Scenario 4 | | | | ✓ | ✓ | ✓ | ✓ | ✓ |
| Scenario 5 | ✓ | | ✓ | ✓ | | ✓ | | |
| Scenario 6 | | ✓ | | | ✓ | | ✓ | |
| Scenario 7 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| Scenario 8 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Note: S1: Sentinel-1 datasets; S2: Sentinel-2 datasets.
Table 4. Classification accuracies achieved from 20 random trials by the RFs, RoFs, and CCFs classifiers. The input feature combinations derived from the post-event datasets cover Scenarios 1–4.
| Scenario | Class | RFs PA% | RFs UA% | RoFs PA% | RoFs UA% | CCFs PA% | CCFs UA% |
|---|---|---|---|---|---|---|---|
| Scenario 1 | DE | 35.77 | 70.82 | 35.70 | 68.02 | 40.18 | 71.44 |
| | DA | 4.71 | 37.65 | 7.08 | 31.24 | 5.55 | 41.18 |
| | PD | 21.61 | 47.25 | 22.97 | 45.25 | 20.25 | 48.01 |
| | ND | 98.36 | 85.98 | 97.68 | 86.37 | 98.35 | 86.10 |
| | OA | 83.97 ± 0.14 | | 83.65 ± 0.10 | | 84.17 ± 0.13 | |
| | AA | 40.11 ± 0.52 | | 40.86 ± 0.41 | | 41.08 ± 0.53 | |
| Scenario 2 | DE | 30.80 | 77.98 | 35.95 | 77.70 | 40.93 | 78.00 |
| | DA | 4.09 | 32.19 | 6.28 | 32.02 | 4.94 | 34.94 |
| | PD | 0.89 | 19.85 | 1.04 | 22.37 | 1.29 | 24.15 |
| | ND | 99.11 | 83.55 | 98.84 | 84.02 | 98.92 | 84.16 |
| | OA | 82.84 ± 0.11 | | 83.05 ± 0.09 | | 83.34 ± 0.09 | |
| | AA | 33.72 ± 0.37 | | 35.52 ± 0.34 | | 36.52 ± 0.33 | |
| Scenario 3 | DE | 45.20 | 83.62 | 44.85 | 81.89 | 59.00 | 85.91 |
| | DA | 8.86 | 46.99 | 14.37 | 41.00 | 8.98 | 49.07 |
| | PD | 22.24 | 50.57 | 25.39 | 47.46 | 19.58 | 51.53 |
| | ND | 98.66 | 86.67 | 97.89 | 87.55 | 98.78 | 87.13 |
| | OA | 85.04 ± 0.14 | | 84.93 ± 0.16 | | 85.74 ± 0.14 | |
| | AA | 43.74 ± 0.49 | | 45.62 ± 0.53 | | 46.58 ± 0.49 | |
| Scenario 4 | DE | 55.78 | 84.69 | 55.25 | 85.66 | 66.02 | 86.74 |
| | DA | 65.08 | 60.54 | 64.44 | 60.33 | 60.88 | 62.82 |
| | PD | 43.21 | 62.04 | 43.07 | 61.43 | 41.15 | 62.12 |
| | ND | 98.92 | 94.77 | 98.91 | 94.66 | 99.17 | 94.70 |
| | OA | 90.64 ± 0.15 | | 90.55 ± 0.18 | | 91.03 ± 0.14 | |
| | AA | 65.75 ± 0.55 | | 65.42 ± 0.88 | | 66.81 ± 0.55 | |
Note: DE = Destroyed; DA = Damaged; PD = Possibly Damaged; ND = No Damage.
Table 5. Classification accuracies achieved from 20 random trials by the RFs, RoFs, and CCFs classifiers. The input feature combinations derived from the pre- and post-event datasets cover Scenarios 5–8.
| Scenario | Class | RFs PA% | RFs UA% | RoFs PA% | RoFs UA% | CCFs PA% | CCFs UA% |
|---|---|---|---|---|---|---|---|
| Scenario 5 | DE | 39.51 | 82.70 | 40.08 | 78.69 | 47.62 | 83.43 |
| | DA | 5.41 | 53.23 | 9.01 | 46.17 | 6.66 | 57.23 |
| | PD | 37.02 | 60.72 | 32.06 | 58.00 | 36.70 | 62.82 |
| | ND | 99.28 | 87.49 | 98.78 | 87.37 | 99.28 | 87.91 |
| | OA | 86.03 ± 0.12 | | 85.53 ± 0.19 | | 86.54 ± 0.14 | |
| | AA | 45.30 ± 0.50 | | 44.98 ± 0.67 | | 47.57 ± 0.46 | |
| Scenario 6 | DE | 50.28 | 83.11 | 56.45 | 85.36 | 62.34 | 85.25 |
| | DA | 8.40 | 43.02 | 12.40 | 43.62 | 10.88 | 47.20 |
| | PD | 2.29 | 33.01 | 2.78 | 32.96 | 3.26 | 38.54 |
| | ND | 99.17 | 85.16 | 98.95 | 85.83 | 99.11 | 86.04 |
| | OA | 84.35 ± 0.11 | | 84.79 ± 0.13 | | 85.20 ± 0.11 | |
| | AA | 40.03 ± 0.44 | | 42.64 ± 0.49 | | 43.90 ± 0.41 | |
| Scenario 7 | DE | 53.64 | 86.53 | 54.80 | 86.64 | 64.40 | 89.21 |
| | DA | 11.54 | 62.90 | 19.88 | 53.47 | 12.53 | 66.29 |
| | PD | 38.62 | 62.37 | 32.35 | 56.94 | 36.80 | 64.60 |
| | ND | 99.25 | 88.77 | 98.61 | 89.02 | 99.46 | 89.22 |
| | OA | 87.28 ± 0.19 | | 86.90 ± 0.16 | | 88.00 ± 0.16 | |
| | AA | 50.76 ± 0.74 | | 51.41 ± 0.61 | | 53.30 ± 0.59 | |
| Scenario 8 | DE | 60.74 | 87.18 | 63.47 | 86.03 | 67.97 | 89.25 |
| | DA | 66.29 | 63.11 | 68.86 | 62.49 | 53.06 | 66.32 |
| | PD | 54.06 | 70.66 | 56.42 | 71.09 | 50.89 | 69.61 |
| | ND | 99.40 | 95.84 | 99.14 | 96.43 | 99.77 | 94.68 |
| | OA | 92.12 ± 0.16 | | 92.39 ± 0.16 | | 91.83 ± 0.23 | |
| | AA | 70.12 ± 0.68 | | 71.97 ± 0.68 | | 67.92 ± 0.99 | |
Note: DE = Destroyed; DA = Damaged; PD = Possibly Damaged; ND = No Damage.
