Special Issue "Application of Artificial Intelligence in Land Use and Land Cover Mapping"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 31 December 2021.

Special Issue Editors

Dr. Sawaid Abbas
Guest Editor
Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China
Interests: remote sensing; artificial intelligence; cloud computing; big data; pattern recognition; sustainable development goals; landcover dynamics; urban ecology; tropical forest recovery; land degradation
Prof. Dr. Janet E. Nichol
Guest Editor
Department of Geography, School of Global Studies, University of Sussex, Brighton, UK
Interests: geo-informatics; environmental change; landscape change; urban climate; air pollution; climate change; tropical forest ecology; aerosols
Dr. Faisal M. Qamer
Guest Editor
International Centre for Integrated Mountain Development (ICIMOD), Kathmandu, Nepal
Interests: remote sensing; land degradation; croplands; droughts; environmental monitoring; machine learning
Prof. Dr. Jianchu Xu
Guest Editor
Kunming Institute of Botany, Chinese Academy of Sciences, Kunming, China
Interests: land cover land use change; landscape restoration; biodiversity conservation

Special Issue Information

Dear Colleagues,

Land cover monitoring provides critical insight for tracking and achieving the Sustainable Development Goals (SDGs). Over the last four decades, remote sensing has evolved from simple observation and monitoring towards an advanced understanding of how to manage the planet’s resources. Land-use change data from satellite remote sensing, together with climate modelling and socio-economic indicators, are playing a vital role in advancing interdisciplinary research.

The huge volumes of data currently produced by modern Earth Observation (EO) satellite missions and Unmanned Aerial Vehicles (UAVs), the availability of high-performance computing platforms, and the development of artificial intelligence (AI) provide new opportunities to advance our knowledge of patterns of resource distribution and resource use. Machine learning approaches tailored to Earth Observation data can effectively address the challenges of spatial and temporal domain adaptation, hyperspectral data mining, integration of multi-source information, and large-volume data analysis.

Considering these advances, this Special Issue invites manuscripts that present new developments, methodologies, best practices, and applications related to land use and land cover mapping and modelling through the implementation of artificial intelligence (fuzzy logic, neural networks, machine learning, deep learning, evolutionary computation, etc.). We welcome submissions that provide the community with the most recent advances on all the aspects mentioned above: Original Research Articles, Reviews, Letters, Technical Notes, as well as Highlight articles for a broader audience.

Dr. Sawaid Abbas
Prof. Janet E. Nichol
Dr. Faisal M. Qamer
Prof. Jianchu Xu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Land Cover and Land Use
  • Change Monitoring
  • Artificial Intelligence
  • Machine Learning and Deep Learning
  • Pattern Recognition and Data Mining
  • Hyper-Temporal Mapping
  • High-Resolution Urban Landscape Mapping
  • Hyperspectral Remote Sensing
  • Biophysical and Social Data Integration
  • Sustainable Development Goals (SDGs)

Published Papers (8 papers)


Research


Article
Integrating Phenological and Geographical Information with Artificial Intelligence Algorithm to Map Rubber Plantations in Xishuangbanna
Remote Sens. 2021, 13(14), 2793; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13142793 - 16 Jul 2021
Viewed by 501
Abstract
Most natural rubber trees (Hevea brasiliensis) are grown on plantations, making rubber an important industrial crop. Rubber plantations are also an important source of household income for over 20 million people. The accurate mapping of rubber plantations is important for both local governments and the global market. Remote sensing has been a widely used approach for mapping rubber plantations, typically using optical remote sensing data obtained at the regional scale. Improving the efficiency and accuracy of rubber plantation maps has become a research hotspot in rubber-related literature. To improve the classification efficiency, researchers have combined the phenology, geography, and texture of rubber trees with spectral information. Among these, there are three main classifiers: maximum likelihood, QUEST decision tree, and random forest methods. However, until now, no comparative studies have been conducted for the above three classifiers. Therefore, in this study, we evaluated the mapping accuracy based on these three classifiers, using four kinds of data input: Landsat spectral information, phenology–Landsat spectral information, topography–Landsat spectral information, and phenology–topography–Landsat spectral information. We found that the random forest method had the highest mapping accuracy when compared with the maximum likelihood and QUEST decision tree methods. We also found that adding either phenology or topography could improve the mapping accuracy for rubber plantations. When either phenology or topography was added as a parameter within the random forest method, the kappa coefficient increased by 5.5% and 6.2%, respectively, compared to the kappa coefficient for the baseline Landsat spectral band data input. The highest accuracy was obtained from the addition of both phenology–topography–Landsat spectral bands to the random forest method, achieving a kappa coefficient of 97%.
We therefore mapped rubber plantations in Xishuangbanna using the random forest method, with the addition of phenology and topography information from 1990–2020. Our results demonstrated the usefulness of integrating phenology and topography for mapping rubber plantations. The machine learning approach showed great potential for accurate regional mapping, particularly by incorporating plant habitat and ecological information. We found that during 1990–2020, the total area of rubber plantations had expanded to over three times their former area, while natural forests had lost 17.2% of their former area. Full article
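The kappa coefficients quoted in this abstract can be made concrete. Below is a minimal NumPy sketch (not the authors' code) of Cohen's kappa computed from a classification confusion matrix; the 2×2 rubber/non-rubber matrix is purely illustrative.

```python
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    """Cohen's kappa from a square confusion matrix (rows: reference, cols: predicted)."""
    n = confusion.sum()
    po = np.trace(confusion) / n                              # observed agreement
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative 2-class confusion matrix (rubber vs. non-rubber)
cm = np.array([[90, 10],
               [ 5, 95]])
print(round(cohens_kappa(cm), 3))  # 0.85
```

Unlike overall accuracy, kappa discounts the agreement expected by chance, which is why it is the preferred headline metric for land cover maps with unbalanced classes.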

Article
Land Use and Land Cover Mapping Using RapidEye Imagery Based on a Novel Band Attention Deep Learning Method in the Three Gorges Reservoir Area
Remote Sens. 2021, 13(6), 1225; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13061225 - 23 Mar 2021
Viewed by 721
Abstract
Land use/land cover (LULC) change has been recognized as one of the most important indicators to study ecological and environmental changes. Remote sensing provides an effective way to map and monitor LULC change in real time and for large areas. However, with the increasing spatial resolution of remote sensing imagery, traditional classification approaches cannot fully represent the spectral and spatial information from objects and thus have limitations in classification results, such as the “salt and pepper” effect. Nowadays, the deep semantic segmentation methods have shown great potential to solve this challenge. In this study, we developed an adaptive band attention (BA) deep learning model based on U-Net to classify the LULC in the Three Gorges Reservoir Area (TGRA) combining RapidEye imagery and topographic information. The BA module adaptively weighted input bands in convolution layers to address the different importance of the bands. By comparing the performance of our model with two typical traditional pixel-based methods including classification and regression tree (CART) and random forest (RF), we found a higher overall accuracy (OA) and a higher Intersection over Union (IoU) for all classification categories using our model. The OA and mean IoU of our model were 0.77 and 0.60, respectively, with the BA module and were 0.75 and 0.58, respectively, without the BA module. The OA and mean IoU of CART and RF were both below 0.51 and 0.30, respectively, although RF slightly outperformed CART. Our model also showed a reasonable classification accuracy in independent areas well outside the training area, which indicates the strong model generalizability in the spatial domain. This study demonstrates the novelty of our proposed model for large-scale LULC mapping using high-resolution remote sensing data, which well overcomes the limitations of traditional classification approaches and suggests the consideration of band weighting in convolution layers. 
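As a rough illustration of the band-weighting idea, the sketch below applies softmax attention weights to the bands of an input stack, loosely in the spirit of the BA module; the fixed `scores` vector stands in for weights that the paper learns end-to-end inside convolution layers, so this is an assumption-laden sketch, not the authors' implementation.

```python
import numpy as np

def band_attention(x: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Reweight input bands with softmax attention weights.

    x: image stack of shape (bands, H, W); scores: one scalar per band
    (fixed here for illustration -- in the paper they are learned).
    """
    w = np.exp(scores - scores.max())
    w = w / w.sum()                      # softmax: weights sum to 1
    return x * w[:, None, None]          # broadcast one weight per band

# 5 RapidEye bands, 4x4 pixels; the red-edge band (index 3) gets the largest score
x = np.ones((5, 4, 4))
scores = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
y = band_attention(x, scores)
```

The effect is that the network can emphasize informative bands (e.g., red edge for vegetation) instead of treating all input bands equally.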
Full article
Show Figures

Graphical abstract

Communication
A Novel Framework Based on Mask R-CNN and Histogram Thresholding for Scalable Segmentation of New and Old Rural Buildings
Remote Sens. 2021, 13(6), 1070; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13061070 - 11 Mar 2021
Cited by 5 | Viewed by 860
Abstract
Mapping new and old buildings is of great significance for understanding socio-economic development in rural areas. In recent years, deep neural networks have achieved remarkable building segmentation results in high-resolution remote sensing images. However, scarce training data and varying geographical environments have posed challenges for scalable building segmentation. This study proposes a novel framework based on Mask R-CNN, named Histogram Thresholding Mask Region-Based Convolutional Neural Network (HTMask R-CNN), to extract new and old rural buildings even when labels are scarce. The framework adopts the result of single-object instance segmentation from the standard Mask R-CNN. Further, it classifies the rural buildings into new and old ones based on a dynamic grayscale threshold inferred from the result of a two-object instance segmentation task for which training data are scarce. We found that the framework can extract more buildings and achieve a much higher mean Average Precision (mAP) than the standard Mask R-CNN model. We tested the novel framework’s performance with increasing training data and found that it converged even when the training samples were limited. This framework’s main contribution is to allow scalable segmentation using significantly fewer training samples than traditional machine learning practices require, which makes mapping China’s new and old rural buildings viable. Full article
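A dynamic grayscale threshold of this kind can be illustrated with Otsu's method, which picks the cut that maximizes between-class variance. The sketch below is a generic NumPy version with synthetic "old" (darker) and "new" (brighter) roof brightness distributions; it is an assumed stand-in, not the thresholding rule used in HTMask R-CNN.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray, nbins: int = 256) -> float:
    """Grayscale threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(gray, bins=nbins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)              # class-0 probability up to each bin
    mu = np.cumsum(p * centers)    # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]

# Two synthetic brightness populations: darker "old" roofs vs. brighter "new" roofs
rng = np.random.default_rng(0)
gray = np.concatenate([rng.normal(0.3, 0.05, 500),
                       rng.normal(0.7, 0.05, 500)]).clip(0, 1)
t = otsu_threshold(gray)
is_new = gray > t   # label each detected footprint by its mean grayscale
```

Because the threshold is re-derived from each scene's histogram, the rule adapts to illumination differences between images rather than relying on a fixed cutoff.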

Article
Classification of Very-High-Spatial-Resolution Aerial Images Based on Multiscale Features with Limited Semantic Information
Remote Sens. 2021, 13(3), 364; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13030364 - 21 Jan 2021
Cited by 1 | Viewed by 650
Abstract
Recently, deep learning has become the most innovative trend for a variety of high-spatial-resolution remote sensing imaging applications. However, large-scale land cover classification via traditional convolutional neural networks (CNNs) with sliding windows is computationally expensive and produces coarse results. Additionally, although such supervised learning approaches have performed well, collecting and annotating datasets for every task is extremely laborious, especially for fully supervised cases where the pixel-level ground-truth labels are dense. In this work, we propose a new object-oriented deep learning framework that leverages residual networks with different depths to learn adjacent feature representations by embedding a multibranch architecture in the deep learning pipeline. The idea is to exploit limited training data at different neighboring scales to make a tradeoff between weak semantics and strong feature representations for operational land cover mapping tasks. We draw on established geographic object-based image analysis (GEOBIA) as an auxiliary module to reduce the computational burden of spatial reasoning and optimize the classification boundaries. We evaluated the proposed approach on two subdecimeter-resolution datasets involving both urban and rural landscapes. It achieved better classification accuracy (88.9%) than traditional object-based deep learning methods, with an excellent inference time (11.3 s/ha). Full article

Article
Development of Land Cover Classification Model Using AI Based FusionNet Network
Remote Sens. 2020, 12(19), 3171; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12193171 - 27 Sep 2020
Cited by 1 | Viewed by 1332
Abstract
Prompt updates of land cover maps are important, as spatial information on land cover is widely used in many areas. However, current manual digitizing methods are time consuming and labor intensive, hindering rapid updates of land cover maps. The objective of this study was to develop an artificial intelligence (AI) based land cover classification model that allows for rapid land cover classification from high-resolution remote sensing (HRRS) images. The model comprises three modules: a pre-processing, a land cover classification, and a post-processing module. The pre-processing module splits the HRRS image into multiple patches with 75% overlap using a sliding window algorithm. The land cover classification module was developed using the convolutional neural network (CNN) concept, based on the FusionNet network, and is used to assign a land cover type to each of the separated HRRS patches. The post-processing module determines the ultimate land cover types by aggregating the separated land cover results from the classification module. Model training and validation were conducted to evaluate the performance of the developed model. Land cover maps and orthographic images covering 547.29 km2 of the Jeonnam province in Korea were used to train the model. For model validation, two spatially and temporally different sites, one from Subuk-myeon of Jeonnam province in 2018 and the other from Daseo-myeon of Chungbuk province in 2016, were randomly chosen. The model performed reasonably well, demonstrating overall accuracies of 0.81 and 0.71, and kappa coefficients of 0.75 and 0.64, for the respective validation sites. The model performance was better when only the agricultural area was considered, showing an overall accuracy of 0.83 and a kappa coefficient of 0.73. It was concluded that the developed model may assist rapid land cover updates, especially for agricultural areas, and incorporating field boundary delineation is suggested as a future study to further improve model accuracy.
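The 75%-overlap sliding window in the pre-processing module can be sketched as follows. This is a generic tiling helper under the assumption of square windows fitting inside the image; the function name and interface are illustrative, not the authors' code.

```python
def sliding_windows(width: int, height: int, win: int, overlap: float = 0.75):
    """Top-left corners of square windows of size `win`, overlapping by `overlap`
    (stride = win * (1 - overlap)); assumes win <= width and win <= height."""
    stride = max(1, int(win * (1 - overlap)))
    xs = list(range(0, width - win + 1, stride))
    ys = list(range(0, height - win + 1, stride))
    # make sure the right and bottom edges are covered
    if xs[-1] != width - win:
        xs.append(width - win)
    if ys[-1] != height - win:
        ys.append(height - win)
    return [(x, y) for y in ys for x in xs]

tiles = sliding_windows(256, 256, 128)   # 75% overlap -> stride 32
```

The heavy overlap means each pixel is classified many times from different window positions, which is what allows the post-processing module to aggregate the per-patch results into a smoother final map.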
Full article
Show Figures

Graphical abstract

Article
A Feature Space Constraint-Based Method for Change Detection in Heterogeneous Images
Remote Sens. 2020, 12(18), 3057; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12183057 - 18 Sep 2020
Cited by 2 | Viewed by 1045
Abstract
With the development of remote sensing technologies, change detection in heterogeneous images becomes much more necessary and significant. The main difficulty lies in how to make the input heterogeneous images comparable so that changes can be detected. In this paper, we propose an end-to-end heterogeneous change detection method based on a feature space constraint. First, considering that the input heterogeneous images lie in two distinct feature spaces, two encoders with the same structure are used to extract features from each image. A decoder is used to obtain the change map from the extracted features. Then, the Gram matrices, which encode the correlations between features, are calculated to represent the two feature spaces. The squared Euclidean distance between the Gram matrices, termed the feature space loss, is used to constrain the extracted features. After that, a combined loss function consisting of the binary cross-entropy loss and the feature space loss is designed for training the model. Finally, change detection results between heterogeneous images can be obtained once the model is well trained. The proposed method constrains the features of the two heterogeneous images to the same feature space while keeping their unique characteristics, so that the comparability between features is enhanced and better detection results can be achieved. Experiments on two heterogeneous image datasets consisting of optical and SAR images demonstrate the effectiveness and superiority of the proposed method. Full article
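The Gram-matrix feature space loss described above is straightforward to sketch: compute each encoder's Gram matrix over the channel dimension and take the squared Frobenius distance between them. The NumPy sketch below uses illustrative feature shapes and random inputs; the normalization by spatial size is an assumption, not necessarily the paper's exact formulation.

```python
import numpy as np

def gram(features: np.ndarray) -> np.ndarray:
    """Gram matrix of a (channels, H, W) feature map: channel-wise correlations."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def feature_space_loss(fa: np.ndarray, fb: np.ndarray) -> float:
    """Squared Euclidean (Frobenius) distance between the two Gram matrices."""
    d = gram(fa) - gram(fb)
    return float((d ** 2).sum())

rng = np.random.default_rng(1)
f_opt = rng.normal(size=(8, 16, 16))   # features from the optical-image encoder
f_sar = rng.normal(size=(8, 16, 16))   # features from the SAR-image encoder
loss = feature_space_loss(f_opt, f_sar)
```

Because the Gram matrix discards spatial layout and keeps only channel correlations, minimizing this loss pulls the two encoders toward a shared feature space without forcing their pixel-wise features to be identical.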

Other


Technical Note
Extracting the Tailings Ponds from High Spatial Resolution Remote Sensing Images by Integrating a Deep Learning-Based Model
Remote Sens. 2021, 13(4), 743; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13040743 - 17 Feb 2021
Viewed by 1140
Abstract
Due to a lack of data and practical models, few studies have extracted tailings pond margins in large areas. In addition, there is no public dataset of tailings ponds available for relevant research. This study proposed a new deep learning-based framework for extracting tailings pond margins from high spatial resolution (HSR) remote sensing images by combining You Only Look Once (YOLO) v4 and the random forest algorithm. At the same time, we created an open source tailings pond dataset based on HSR remote sensing images. Taking Tongling city as the study area, the proposed model can detect tailings pond locations with high accuracy and efficiency from a large HSR remote sensing image (precision = 99.6%, recall = 89.9%, mean average precision = 89.7%). An optimal random forest model and morphological processing were utilized to further extract accurate tailings pond margins from the target areas. The final map of the entire study area was obtained with high accuracy. Compared with the random forest algorithm, the total extraction time was reduced by nearly 99%. This study can be beneficial to mine monitoring and ecological environmental governance. Full article

Letter
Uncertainty-Based Human-in-the-Loop Deep Learning for Land Cover Segmentation
Remote Sens. 2020, 12(22), 3836; https://0-doi-org.brum.beds.ac.uk/10.3390/rs12223836 - 23 Nov 2020
Cited by 1 | Viewed by 782
Abstract
In recent years, different deep learning techniques have been applied to segment aerial and satellite images. Nevertheless, state-of-the-art techniques for land cover segmentation do not provide results accurate enough to be used in real applications. This is a problem faced by institutions and companies that want to replace time-consuming and exhausting human work with AI technology. In this work, we propose a method that combines deep learning with a human-in-the-loop strategy to achieve expert-level results at a low cost. We use a neural network to segment the images. In parallel, another network is used to measure uncertainty for predicted pixels. Finally, we combine these neural networks with a human-in-the-loop approach to produce correct predictions as if developed by human photointerpreters. Applying this methodology shows that we can increase the accuracy of land cover segmentation tasks while decreasing human intervention. Full article
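One common way to realize the uncertainty-measurement step is per-pixel entropy of the predicted class probabilities, with the most uncertain fraction routed to a human reviewer. The sketch below assumes this entropy criterion and a hypothetical review budget; the paper's uncertainty network may use a different measure.

```python
import numpy as np

def pixel_entropy(probs: np.ndarray) -> np.ndarray:
    """Per-pixel entropy of softmax class probabilities, shape (classes, H, W)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=0)

def review_mask(probs: np.ndarray, budget: float = 0.1) -> np.ndarray:
    """Flag the `budget` fraction of most-uncertain pixels for human review."""
    h = pixel_entropy(probs)
    cutoff = np.quantile(h, 1 - budget)
    return h >= cutoff

# 3-class toy prediction: confident everywhere except one ambiguous corner
probs = np.zeros((3, 10, 10))
probs[0], probs[1], probs[2] = 0.98, 0.01, 0.01
probs[:, :3, :3] = 1 / 3                # maximally uncertain 3x3 block
mask = review_mask(probs, budget=0.09)  # flags exactly the uncertain corner
```

The budget parameter is the lever for the accuracy/effort trade-off the abstract describes: a larger budget sends more pixels to the photointerpreter and yields a more accurate final map.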
