Recent Advances in Land Cover Classification and Change Detection in 2D and 3D

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 69723

Special Issue Editor


Dr. Chiman Kwan
Guest Editor
Signal Processing, Inc., Rockville, MD 20850-3563, USA
Interests: electronic nose; image demosaicing; speech processing; image processing; remote sensing; deep learning; fault-tolerant control; fault diagnostics and prognostics

Special Issue Information

Dear Colleagues,

An accurate digital surface model (DSM) is important for many applications, including urban planning, land surveying before construction, and urban change monitoring. Lidar and radar have been widely used to obtain DSM information. Moreover, advances in stereo imaging with airborne and satellite imagers have also enabled the creation of DSMs from optical images. At the same time, high-resolution color, multispectral (MS), and hyperspectral (HS) images are available for land cover classification and change detection applications. Some airborne imagers achieve centimeter-level resolution, and satellite images now reach sub-meter resolution. In this special issue, we aim to present the current state of the art and the most recent advances in land cover classification and change detection in 2D and 3D. Some practical applications are also included. Potential topics include:

  • DSM generation using airborne and satellite stereo imagers
  • New technologies in Lidar and radar for DSM generation
  • Land cover classification using optical, multispectral, and hyperspectral images
  • Land cover classification by fusing MS or HS images with DSM
  • Change detection using optical, multispectral, and hyperspectral images
  • Change detection by fusing optical, MS, and HS images with DSM
  • Digital terrain model (DTM) extraction by removing vegetation and man-made structures

Dr. Chiman Kwan
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • DSM
  • DTM
  • Land cover classification
  • Change detection
  • Lidar
  • Stereo imaging
  • Optical
  • Multispectral
  • Hyperspectral

Published Papers (14 papers)


Research


22 pages, 9514 KiB  
Article
Volumetric Analysis of the Landslide in Abe Barek, Afghanistan Based on Nonlinear Mapping of Stereo Satellite Imagery-Derived DEMs
by Mujeeb Rahman Atefi and Hiroyuki Miura
Remote Sens. 2021, 13(3), 446; https://doi.org/10.3390/rs13030446 - 27 Jan 2021
Cited by 11 | Viewed by 3200
Abstract
On 2 May 2014, a large-scale landslide in Abe Barek, Badakhshan, Afghanistan, caused extensive damage to buildings and killed hundreds of people. Evaluating the extent and the volume of the displaced materials is vital for post-disaster management activities. In this study, we present the applicability of a nonlinear geometric correction technique for decreasing undesired registration errors between pre- and post-event digital elevation models (DEMs) generated from high-resolution stereo-pair satellite imagery, identifying landslide-affected areas, and quantifying the landslide volume through DEM of Difference (DoD) analysis. The nonlinear mapping method consists of shifting-vector generation in subareas of the DEMs, consensus operations, and interpolation of the shifting vectors. The quality assessment confirmed that the method outperformed the simple DoD technique by eliminating a large portion of the geometric errors in an unaffected area. We estimated the volume of the landslide as 1.05 × 10⁶ m³ from the DoD corrected by the nonlinear method, and discussed the relationship between the area and the volume compared with those reported in previous studies.
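The DoD volume step described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function name is hypothetical, and the nonlinear co-registration is assumed to have already been applied to the two DEMs.

```python
import numpy as np

def dod_volume(dem_pre, dem_post, cell_size):
    """Estimate displaced volume from a DEM of Difference (DoD).

    dem_pre, dem_post : 2-D arrays of elevations (m) on the same grid,
                        assumed already co-registered.
    cell_size         : ground sampling distance of one cell (m).
    Returns (erosion_volume, deposition_volume) in cubic metres.
    """
    dod = dem_post - dem_pre            # elevation change per cell
    cell_area = cell_size ** 2          # m^2 covered by one cell
    erosion = -dod[dod < 0].sum() * cell_area
    deposition = dod[dod > 0].sum() * cell_area
    return erosion, deposition
```

In practice a minimum level of detection threshold is usually applied to the DoD before summing, so that residual registration noise does not count toward the volume.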

18 pages, 8314 KiB  
Article
Spatial Temporal Analysis of Traffic Patterns during the COVID-19 Epidemic by Vehicle Detection Using Planet Remote-Sensing Satellite Images
by Yulu Chen, Rongjun Qin, Guixiang Zhang and Hessah Albanwan
Remote Sens. 2021, 13(2), 208; https://doi.org/10.3390/rs13020208 - 8 Jan 2021
Cited by 41 | Viewed by 5808
Abstract
The spread of COVID-19 since the end of 2019 has reached an epidemic level and has quickly become a global public health crisis. During this period, the responses to COVID-19 were highly diverse and decentralized across countries and regions. Understanding the dynamics of human mobility change at high spatiotemporal resolution is critical for assessing the impacts of non-pharmaceutical interventions (such as stay-at-home orders, regional lockdowns, and travel restrictions) during the pandemic. However, this requires collecting traffic data at scale, which is time-consuming, cost-prohibitive, and often unavailable (e.g., in underdeveloped countries). Spatiotemporal analysis of periodically acquired remote-sensing images is therefore very beneficial for efficient monitoring at the global scale. In this paper, we present a novel study that utilizes high-temporal-frequency Planet multispectral images (from November 2019 to September 2020, with an average revisit interval of 7.1 days) to detect traffic density in multiple cities through a proposed morphology-based vehicle detection method, and we evaluate how traffic data collected in this manner reflect mobility pattern changes in response to COVID-19. Our city-scale experiments demonstrate that the proposed vehicle detection method achieves an accuracy of 68.26% on this 3 m resolution data in most of the images, and that the observed trends coincide with existing public data where available (lockdown duration, traffic volume, etc.). This further suggests that such high-temporal-frequency Planet data with global coverage (although not at the best resolution), combined with well-devised detection algorithms, can provide sufficient traffic detail for trend analysis to better support informed decision making for extreme events at the global level.
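A morphology-based vehicle detector of the kind mentioned above can be loosely sketched as a top-hat filter plus thresholding. This is not the authors' algorithm; the function name, parameter values, and the single-band/road-mask setup are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_vehicle_candidates(gray, road_mask, tophat_size=5, thresh=0.1):
    """Count bright, vehicle-sized blobs on roads in a single band.

    A white top-hat (image minus its grey opening) isolates small bright
    objects of roughly vehicle scale; these are thresholded, restricted
    to a road mask, and counted as connected components.
    """
    small_bright = ndimage.white_tophat(gray, size=(tophat_size, tophat_size))
    candidates = (small_bright > thresh) & road_mask
    _, count = ndimage.label(candidates)   # connected-component count
    return count
```

At 3 m resolution a car spans only one or two pixels, so real pipelines would add size filtering and per-date radiometric normalization before counting.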

21 pages, 25409 KiB  
Article
Mapping Land Use from High Resolution Satellite Images by Exploiting the Spatial Arrangement of Land Cover Objects
by Mengmeng Li and Alfred Stein
Remote Sens. 2020, 12(24), 4158; https://doi.org/10.3390/rs12244158 - 18 Dec 2020
Cited by 26 | Viewed by 7278
Abstract
Spatial information regarding the arrangement of land cover objects plays an important role in distinguishing land use types at the land parcel or local neighborhood level. This study investigates the use of graph convolutional networks (GCNs) to characterize spatial arrangement features for land use classification from high resolution remote sensing images, with particular interest in comparing land use classifications between different graph-based methods and between different remote sensing images. We examine three kinds of graph-based methods: feature engineering, graph kernels, and GCNs. Based on the extracted arrangement features and features describing the spatial composition of land cover objects, we formulated ten land use classifications. We tested these on two remote sensing images acquired over Fuzhou City, China, in 2020 by the GaoFen-2 (with a spatial resolution of 0.8 m) and ZiYuan-3 (2.5 m) satellites. Our results showed that, for both images, land use classifications based on the arrangement features derived from GCNs achieved higher classification accuracy than those using graph kernels or handcrafted graph features. We also found that the contribution of arrangement features to separating land use types differs between the GaoFen-2 and ZiYuan-3 images, due to the difference in spatial resolution. This study offers a set of approaches for effectively mapping land use types from (very) high resolution satellite images.

29 pages, 7904 KiB  
Article
An Accurate Vegetation and Non-Vegetation Differentiation Approach Based on Land Cover Classification
by Chiman Kwan, David Gribben, Bulent Ayhan, Jiang Li, Sergio Bernabe and Antonio Plaza
Remote Sens. 2020, 12(23), 3880; https://doi.org/10.3390/rs12233880 - 26 Nov 2020
Cited by 34 | Viewed by 3096
Abstract
Accurate vegetation detection is important for many applications, such as crop yield estimation, land cover and land use monitoring, urban growth monitoring, drought monitoring, etc. Popular conventional approaches to vegetation detection incorporate the normalized difference vegetation index (NDVI), which uses the red and near infrared (NIR) bands, and the enhanced vegetation index (EVI), which uses the red, NIR, and blue bands. Although NDVI and EVI are efficient, their accuracies still have room for improvement. In this paper, we propose a new approach to vegetation detection based on land cover classification. That is, we first perform an accurate classification of 15 or more land cover types. Land covers such as grass, shrubs, and trees are then grouped into vegetation, and other land cover types such as roads, buildings, etc. are grouped into non-vegetation. As with NDVI and EVI, only the RGB and NIR bands are needed in our proposed approach. If Light Detection and Ranging (LiDAR) data are available, our approach can also incorporate LiDAR in the detection process. Results using a well-known dataset demonstrated that the proposed approach is feasible and achieves more accurate vegetation detection than both NDVI and EVI. In particular, a Support Vector Machine (SVM) approach performed 6% better than NDVI and 50% better than EVI in terms of overall accuracy (OA).
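For reference, the two baseline indices this paper compares against are computed with the standard formulas below (an array-based sketch; the MODIS-style EVI coefficients G = 2.5, C1 = 6, C2 = 7.5, L = 1 are assumed, and inputs are surface reflectances in [0, 1]).

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); tiny epsilon avoids division by zero
    return (nir - red) / (nir + red + 1e-12)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    # EVI = G * (NIR - Red) / (NIR + C1*Red - C2*Blue + L)
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)
```

Both functions broadcast over NumPy arrays, so whole bands can be passed in directly; vegetation is then typically flagged by thresholding the index.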

26 pages, 7570 KiB  
Article
Progressive Domain Adaptation for Change Detection Using Season-Varying Remote Sensing Images
by Rong Kou, Bo Fang, Gang Chen and Lizhe Wang
Remote Sens. 2020, 12(22), 3815; https://doi.org/10.3390/rs12223815 - 20 Nov 2020
Cited by 13 | Viewed by 2766
Abstract
The development of artificial intelligence technology has prompted a great deal of research on improving the performance of change detection approaches. Existing deep learning-driven methods generally regard changes as a specific type of land cover and try to identify them by relying on the powerful expression capabilities of neural networks. However, in practice, different types of land cover changes are influenced by environmental factors to different degrees. Furthermore, seasonal variation-induced spectral differences seriously interfere with those of real changes in different land cover types. All these problems pose great challenges for season-varying change detection, because real and seasonal variation-induced changes are technically difficult to separate with a single end-to-end model. In this paper, by embedding a convolutional long short-term memory (ConvLSTM) network into a conditional generative adversarial network (cGAN), we develop a novel method, named progressive domain adaptation (PDA), for change detection using season-varying remote sensing images. Two cascaded modules, progressive translation and group discrimination, are introduced to progressively translate pre-event images from their own domain to the post-event domain, where their seasonal features are consistent and their intrinsic land cover distribution features are retained. By training this hybrid multi-model framework with reference change maps, the seasonal variation-induced changes between paired images are effectively suppressed, while the natural and human activity-caused changes are greatly emphasized. Extensive experiments on two types of season-varying change detection datasets and comparisons with other state-of-the-art methods verify the effectiveness and competitiveness of the proposed PDA.

23 pages, 8483 KiB  
Article
Vegetation Detection Using Deep Learning and Conventional Methods
by Bulent Ayhan, Chiman Kwan, Bence Budavari, Liyun Kwan, Yan Lu, Daniel Perez, Jiang Li, Dimitrios Skarlatos and Marinos Vlachos
Remote Sens. 2020, 12(15), 2502; https://doi.org/10.3390/rs12152502 - 4 Aug 2020
Cited by 55 | Viewed by 12169
Abstract
Land cover classification with the focus on chlorophyll-rich vegetation detection plays an important role in urban growth monitoring and planning, autonomous navigation, drone mapping, biodiversity conservation, etc. Conventional approaches usually apply the normalized difference vegetation index (NDVI) for vegetation detection. In this paper, we investigate the performance of deep learning and conventional methods for vegetation detection. Two deep learning methods, DeepLabV3+ and our customized convolutional neural network (CNN), were evaluated with respect to their detection performance when training and testing datasets originated from different geographical sites with different image resolutions. A novel object-based vegetation detection approach, which utilizes NDVI, computer vision, and machine learning (ML) techniques, is also proposed. The vegetation detection methods were applied to high-resolution airborne color images which consist of RGB and near-infrared (NIR) bands. RGB color images alone were also used with the two deep learning methods to examine their detection performances without the NIR band. The detection performances of the deep learning methods with respect to the object-based detection approach are discussed and sample images from the datasets are used for demonstrations.

17 pages, 5261 KiB  
Article
Deep Learning for Land Cover Classification Using Only a Few Bands
by Chiman Kwan, Bulent Ayhan, Bence Budavari, Yan Lu, Daniel Perez, Jiang Li, Sergio Bernabe and Antonio Plaza
Remote Sens. 2020, 12(12), 2000; https://doi.org/10.3390/rs12122000 - 22 Jun 2020
Cited by 47 | Viewed by 4860
Abstract
There is an emerging interest in using hyperspectral data for land cover classification. The motivation behind using hyperspectral data is the notion that increasing the number of narrowband spectral channels provides richer spectral information and thus helps improve land cover classification performance. Although hyperspectral data with hundreds of channels provide detailed spectral signatures, the curse of dimensionality might lead to degradation of land cover classification performance. Moreover, in some practical applications, hyperspectral data may not be available due to cost, data storage, or bandwidth issues, and RGB and near infrared (NIR) may be the only image bands available for land cover classification. Light detection and ranging (LiDAR) data are another type of data that can assist land cover classification, especially if the land covers of interest have different heights. In this paper, we examined the performance of two Convolutional Neural Network (CNN)-based deep learning algorithms for land cover classification using only four bands (RGB+NIR) and five bands (RGB+NIR+LiDAR), where this limited number of image bands was augmented using Extended Multi-attribute Profiles (EMAP). The deep learning algorithms were applied to a well-known dataset used in the 2013 IEEE Geoscience and Remote Sensing Society (GRSS) Data Fusion Contest. With EMAP augmentation, the two deep learning algorithms achieved better land cover classification performance using only four bands than when using all 144 hyperspectral bands.

21 pages, 14054 KiB  
Article
A Coarse-to-Fine Deep Learning Based Land Use Change Detection Method for High-Resolution Remote Sensing Images
by Mingchang Wang, Haiming Zhang, Weiwei Sun, Sheng Li, Fengyan Wang and Guodong Yang
Remote Sens. 2020, 12(12), 1933; https://doi.org/10.3390/rs12121933 - 15 Jun 2020
Cited by 42 | Viewed by 5112
Abstract
In recent decades, high-resolution (HR) remote sensing images have shown considerable potential for providing detailed information for change detection. Traditional change detection methods based on HR remote sensing images mostly detect only a single land type or only the extent of change, and cannot simultaneously detect changes in all object types together with pixel-level extent changes in the area. To overcome this difficulty, we propose a new coarse-to-fine deep learning-based land-use change detection method. We independently created a new scene classification dataset called NS-55 and considered the adaptation relationship between the convolutional neural network (CNN) and scene complexity by selecting the CNN that best fits the scene complexity. The CNN trained on NS-55 was used to detect the category of each scene, the final category of the scene was defined according to the majority voting method, and changed scenes were obtained by comparison, yielding the so-called coarse change result. Then, we created a multi-scale threshold (MST) method, a new method for obtaining high-quality training samples. We used the high-quality samples selected by MST to train a deep belief network to obtain the pixel-level change detection results. By mapping coarse scene changes to pixel-level changes, we obtained fine multi-type land-use change detection results. Experiments were conducted on the Multi-temporal Scene Wuhan dataset and aerial images of a particular area of Dapeng New District, Shenzhen, where promising results were achieved by the proposed method. This demonstrates that the proposed method is practical and easy to implement, and that the NS-55 dataset is well justified. The proposed method has the potential to be applied to large-scale fine-grained land use change detection problems and to qualitative and quantitative research on land use/cover change based on HR remote sensing data.
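The scene-level voting step mentioned above reduces, per scene, to a simple majority vote over per-date CNN predictions (a minimal sketch; the paper does not specify a tie rule, so ties here fall back to first-seen order).

```python
from collections import Counter

def majority_vote(labels):
    """Final scene category = most frequent prediction across inputs.

    Ties resolve to the label encountered first, since Counter preserves
    insertion order among equal counts; this tie rule is an assumption.
    """
    return Counter(labels).most_common(1)[0][0]
```

For example, if three classifiers (or dates) predict `["urban", "urban", "water"]`, the scene is labeled `"urban"`.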

23 pages, 32285 KiB  
Article
An Object-Based Bidirectional Method for Integrated Building Extraction and Change Detection between Multimodal Point Clouds
by Chenguang Dai, Zhenchao Zhang and Dong Lin
Remote Sens. 2020, 12(10), 1680; https://doi.org/10.3390/rs12101680 - 24 May 2020
Cited by 14 | Viewed by 3634
Abstract
Building extraction and change detection are two important tasks in the remote sensing domain. Change detection between airborne laser scanning data and photogrammetric data is vulnerable to dense matching errors, misalignment errors, and data gaps. This paper proposes an unsupervised object-based method for integrated building extraction and change detection. Firstly, terrain, roofs, and vegetation are extracted from the precise laser point cloud, based on "bottom-up" segmentation and clustering. Secondly, change detection is performed in an object-based bidirectional manner: heightened buildings and demolished buildings are detected by taking the laser scanning data as the reference, while newly built buildings are detected by taking the dense matching data as the reference. Experiments on two urban data sets demonstrate the method's effectiveness and robustness. The object-based change detection achieves a recall rate of 92.31% and a precision rate of 88.89% on the Rotterdam dataset, and a recall rate of 85.71% and a precision rate of 100% on the Enschede dataset. It can not only extract unchanged building footprints, but also assign heightened or demolished labels to the changed buildings.

17 pages, 8163 KiB  
Article
Quantifying Seagrass Distribution in Coastal Water with Deep Learning Models
by Daniel Perez, Kazi Islam, Victoria Hill, Richard Zimmerman, Blake Schaeffer, Yuzhong Shen and Jiang Li
Remote Sens. 2020, 12(10), 1581; https://doi.org/10.3390/rs12101581 - 16 May 2020
Cited by 12 | Viewed by 3634
Abstract
Coastal ecosystems are critically affected by seagrass, both economically and ecologically. However, reliable seagrass distribution information is lacking in nearly all parts of the world because of the excessive costs associated with its assessment. In this paper, we develop two deep learning models for automatic seagrass distribution quantification based on 8-band satellite imagery. Specifically, we implemented a deep capsule network (DCN) and a deep convolutional neural network (CNN) to assess seagrass distribution through regression. The DCN model first determines whether seagrass is present in the image through classification, and if so, it quantifies the seagrass through regression. During training, the regression and classification modules are jointly optimized to achieve end-to-end learning. The CNN model is trained strictly for regression on seagrass and non-seagrass patches. In addition, we propose a transfer learning approach that transfers knowledge from deep models trained at one location to perform seagrass quantification at a different location. We evaluated the proposed methods on three WorldView-2 satellite images of coastal areas in Florida. Experimental results show that the proposed DCN and CNN models performed similarly and achieved much better results than a linear regression model and a support vector machine. We also demonstrate that using transfer learning techniques for seagrass quantification significantly improved the results compared to directly applying the deep models to new locations.

28 pages, 63411 KiB  
Article
Improving Land Cover Classification Using Extended Multi-Attribute Profiles (EMAP) Enhanced Color, Near Infrared, and LiDAR Data
by Chiman Kwan, David Gribben, Bulent Ayhan, Sergio Bernabe, Antonio Plaza and Massimo Selva
Remote Sens. 2020, 12(9), 1392; https://doi.org/10.3390/rs12091392 - 28 Apr 2020
Cited by 20 | Viewed by 2919
Abstract
Hyperspectral (HS) data have found a wide range of applications in recent years. Researchers have observed that more spectral information helps land cover classification performance in many cases. However, in some practical applications, HS data may not be available due to cost, data storage, or bandwidth issues. Instead, users may only have RGB and near infrared (NIR) bands available for land cover classification. Sometimes, light detection and ranging (LiDAR) data may also be available to assist land cover classification. A natural research problem is to investigate how well land cover classification can be achieved under the aforementioned data constraints. In this paper, we investigate the performance of land cover classification using only four bands (RGB+NIR) or five bands (RGB+NIR+LiDAR). A number of algorithms were applied to a well-known dataset (2013 IEEE Geoscience and Remote Sensing Society Data Fusion Contest). One key observation is that, with the help of synthetic bands generated by Extended Multi-attribute Profiles (EMAP), some algorithms can achieve better land cover classification performance using only four bands than when using all 144 bands of the original hyperspectral data. Moreover, LiDAR data improve the land cover classification performance even further.
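The band-augmentation idea behind EMAP can be illustrated with a crude morphological stand-in. The real EMAP uses attribute filters (area, standard deviation, etc.) computed on max-trees; this sketch substitutes plain grey-scale openings and closings at several scales purely to show how a few input bands become a richer feature stack.

```python
import numpy as np
from scipy import ndimage

def morphological_profile(band, sizes=(3, 5, 7)):
    """Stack openings and closings of one band at several scales.

    A loose stand-in for an attribute profile: each scale contributes
    one opening (suppresses bright details) and one closing (suppresses
    dark details), which are stacked as extra synthetic bands.
    """
    layers = [band]
    for s in sizes:
        layers.append(ndimage.grey_opening(band, size=(s, s)))
        layers.append(ndimage.grey_closing(band, size=(s, s)))
    return np.stack(layers, axis=0)   # shape: (1 + 2*len(sizes), H, W)
```

Applied per input band, a 4-band image would grow to 28 synthetic bands with the default scales, which is the kind of expansion the classifiers in the paper consume.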

19 pages, 6141 KiB  
Article
Object-Based Change Detection of Very High Resolution Images by Fusing Pixel-Based Change Detection Results Using Weighted Dempster–Shafer Theory
by Youkyung Han, Aisha Javed, Sejung Jung and Sicong Liu
Remote Sens. 2020, 12(6), 983; https://doi.org/10.3390/rs12060983 - 18 Mar 2020
Cited by 27 | Viewed by 5341
Abstract
Change detection (CD), one of the primary applications of multi-temporal satellite images, is the process of identifying changes in the Earth's surface occurring over a period of time using images of the same geographic area on different dates. CD is divided into pixel-based change detection (PBCD) and object-based change detection (OBCD). Although PBCD is more popular due to its simple algorithms and relatively easy quantitative analysis, applying this method to very high resolution (VHR) images often results in misdetection or noise. Because of this, researchers have focused on extending PBCD results to OBCD maps in VHR images. In this paper, we present a weighted Dempster–Shafer theory (wDST) fusion method to generate the OBCD by combining multiple PBCD results. The proposed wDST approach automatically calculates and assigns a certainty weight to each object of the PBCD result while considering the stability of the object. Moreover, the proposed wDST method can minimize the tendency of the number of changed objects to decrease or increase based on the ratio of changed pixels to total pixels in the image when the PBCD result is extended to the OBCD result. First, we performed co-registration between the VHR multitemporal images to minimize geometric dissimilarity. Then, we conducted image segmentation of the co-registered pair of multitemporal VHR imagery. Three change intensity images were generated using change vector analysis (CVA), iteratively reweighted multivariate alteration detection (IRMAD), and principal component analysis (PCA). These three intensity images were used to generate different binary PBCD maps, after which the maps were fused with the segmented image using the wDST to generate the OBCD map. Finally, the accuracy of the proposed CD technique was assessed using a manually digitized map. Two VHR multitemporal datasets were used to test the proposed approach. Experimental results confirmed the superiority of the proposed method in comparison with existing PBCD methods and an OBCD method using the majority voting technique.
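The Dempster–Shafer fusion at the core of this method combines evidence from the different PBCD results. Below is a minimal sketch of the plain, unweighted rule of combination over a two-element frame {changed, unchanged}; the paper's wDST additionally weights each PBCD result per object, which is not reproduced here.

```python
def combine_masses(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping focal elements (frozensets over the frame,
    e.g. {"C"} changed, {"U"} unchanged, {"C", "U"} ignorance) to masses
    that each sum to 1. Conflicting mass is discarded and the remainder
    renormalised.
    """
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:                       # compatible evidence
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:                           # contradictory evidence
                conflict += wa * wb
    k = 1.0 - conflict                      # normalisation factor
    return {s: v / k for s, v in combined.items()}
```

Fusing, say, a detector that is 60% sure of change with one that splits 50/30/20 across changed/unchanged/ignorance yields a sharpened belief in change, which is the behavior the fusion step exploits.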

17 pages, 3334 KiB  
Article
Fully Convolutional Networks with Multiscale 3D Filters and Transfer Learning for Change Detection in High Spatial Resolution Satellite Images
by Ahram Song and Jaewan Choi
Remote Sens. 2020, 12(5), 799; https://doi.org/10.3390/rs12050799 - 2 Mar 2020
Cited by 21 | Viewed by 3441
Abstract
Remote sensing images with high spatial resolution are now routinely acquired, and large amounts of data are extracted from their regions of interest. Processing these images must account for objects of various sizes, from very small neighborhoods to large regions composed of thousands of pixels. To this end, this study proposes a change detection method using transfer learning and recurrent fully convolutional networks with multiscale three-dimensional (3D) filters. The initial convolutional layer of the change detection network, with multiscale 3D filters, was designed to extract the spatial and spectral features of materials of different sizes; the layer exploits pre-trained weights and biases of a semantic segmentation network trained on an open benchmark dataset. The 3D filter sizes were defined specifically to extract spatial and spectral information, and the optimal filter size was determined using highly accurate semantic segmentation results. To demonstrate the effectiveness of the proposed method, binary change detection was performed on images obtained from the multi-temporal Korea Multi-Purpose Satellite-3A (KOMPSAT-3A). The results revealed that the proposed method outperformed traditional deep learning-based change detection methods, and that change detection accuracy improved with multiscale 3D filters and transfer learning.

Other


17 pages, 6253 KiB  
Technical Note
Flood Detection Using Multi-Modal and Multi-Temporal Images: A Comparative Study
by Kazi Aminul Islam, Mohammad Shahab Uddin, Chiman Kwan and Jiang Li
Remote Sens. 2020, 12(15), 2455; https://doi.org/10.3390/rs12152455 - 30 Jul 2020
Cited by 20 | Viewed by 5091
Abstract
Natural disasters such as flooding can severely affect human life and property. To provide rescue through an emergency response team, an accurate assessment of the flooded area is needed after the event. Traditionally, obtaining an accurate estimate of a flooded area requires substantial human resources. In this paper, we compared several traditional machine-learning approaches for flood detection, including multi-layer perceptron (MLP), support vector machine (SVM), and deep convolutional neural network (DCNN), with recent domain adaptation-based approaches on a multi-modal and multi-temporal image dataset. Specifically, we used SPOT-5 and RADAR images from the flood event that occurred in November 2000 in Gloucester, UK. Experimental results show that the domain adaptation-based approach, semi-supervised domain adaptation (SSDA) with 20 labeled data samples, achieved slightly better values for the area under the precision-recall (PR) curve (AUC), at 0.9173, and the F1 score, at 0.8846, than the traditional machine-learning approaches. Moreover, SSDA required much less labor for ground-truth labeling and should therefore be recommended in practice.
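The F1 score reported above follows directly from confusion-matrix counts (standard definitions; a minimal counts-based sketch, with names of my choosing).

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall.

    tp: true positives (flooded pixels correctly detected)
    fp: false positives (dry pixels flagged as flooded)
    fn: false negatives (flooded pixels missed)
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For instance, a detector with 8 true positives, 2 false positives, and 2 false negatives has precision = recall = 0.8, hence F1 = 0.8.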
