Machine Learning and Remote Sensing for Automatic Map Creation and Update

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Earth Sciences".

Deadline for manuscript submissions: closed (30 December 2021) | Viewed by 10126

Special Issue Editor


Dr. Jonathan Rizzi
Guest Editor
Norwegian Institute of Bioeconomy Research (NIBIO), Norway
Interests: big data; bioeconomy; forestry; agriculture; land monitoring; climate

Special Issue Information

Dear Colleagues,

It is my pleasure to announce a new Special Issue of Applied Sciences dedicated to the application of machine learning methods (including deep learning) and remote sensing for the automatic generation or update of maps.

In recent years, we have experienced an exponential increase in remote sensing datasets derived from different sources (satellites, airplanes, UAVs), at different resolutions (down to a few cm), and based on different sensors (single-band sensors, hyperspectral cameras, LIDAR, etc.). At the same time, parallel developments in IT allow for the storage of very large datasets (up to petabytes) and their efficient processing (through HPC, distributed computing, and the use of GPUs). This has enabled the development and diffusion of many libraries and packages implementing machine learning algorithms very efficiently. It has, therefore, become possible to apply machine learning (including deep learning methods such as convolutional neural networks) to spatial datasets with the aim of increasing the level of automation in creating new maps or updating existing ones.
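As a simple illustration of this workflow (not part of the call itself), the sketch below trains a pixel-wise land cover classifier on a stacked multi-band raster with scikit-learn. The band count, array shapes, and class labels are synthetic placeholders; in practice the bands would be read from real imagery (e.g., with rasterio) and the labels from reference maps.

```python
# Minimal sketch: pixel-wise land cover classification of a multi-band raster.
# All data here are synthetic stand-ins for real remote sensing inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "image": 4 spectral bands, 200 x 200 pixels.
n_bands, height, width = 4, 200, 200
image = rng.random((n_bands, height, width)).astype(np.float32)

# Synthetic reference labels per pixel (e.g., 0 = water, 1 = forest, 2 = crop).
labels = rng.integers(0, 3, size=(height, width))

# Reshape to a (n_pixels, n_bands) feature matrix for a classical classifier.
X = image.reshape(n_bands, -1).T
y = labels.ravel()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Apply the trained model to every pixel to produce an updated land cover map.
land_cover_map = clf.predict(X).reshape(height, width)
```

The same pattern scales from a single tile to large mosaics by processing the raster in chunks or distributing the prediction step.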

With this perspective, this Special Issue aims to contribute to the field by presenting the most relevant advances in this research area. Topics of interest include, but are not limited to, the following:

  • Land cover mapping and land cover changes;
  • Forest resources mapping (both quantity and quality);
  • Crop/vegetation mapping (both quantity and quality);
  • Natural hazards (e.g., presence of disease, drought);
  • Hydrology;
  • Landscape monitoring;
  • Soil monitoring/geology.

Applications may use any type of remote sensing data and concern any geographical area (including developing countries).

I look forward to your contributions and to reading about your latest research.

Dr. Jonathan Rizzi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • remote sensing
  • big data
  • deep learning
  • land cover mapping
  • forest mapping
  • agriculture mapping
  • satellite
  • UAV
  • natural hazards

Published Papers (3 papers)


Research

28 pages, 111730 KiB  
Article
Integrating Convolutional Neural Network and Multiresolution Segmentation for Land Cover and Land Use Mapping Using Satellite Imagery
by Saziye Ozge Atik and Cengizhan Ipbuker
Appl. Sci. 2021, 11(12), 5551; https://doi.org/10.3390/app11125551 - 15 Jun 2021
Cited by 26 | Viewed by 3639
Abstract
Depletion of natural resources, population growth, urban migration, and expanding drought conditions are some of the reasons why environmental monitoring programs are required and regularly produced and updated. Additionally, the use of artificial intelligence in the geospatial field of Earth observation (EO) and regional land monitoring missions is a challenging issue. In this study, land cover and land use mapping was performed using the proposed CNN–MRS model. The CNN–MRS model consists of two main steps: CNN-based land cover classification and enhancement of the classification with a spatial filter and multiresolution segmentation (MRS). Different band numbers of Sentinel-2A imagery and multiple patch sizes (32 × 32, 64 × 64, and 128 × 128 pixels) were used in the first experiment. The algorithms were evaluated in terms of overall accuracy, precision, recall, F1-score, and kappa coefficient. The highest overall accuracies obtained with the proposed approach were 97.31% in the Istanbul test site area and 98.44% in the Kocaeli test site area. These accuracies demonstrate the efficiency of the CNN–MRS model for land cover map production over large areas. The McNemar test measured the significance of the models used. In the second experiment, with the Zurich Summer dataset, the overall accuracy of the proposed approach was 92.03%. The results are compared quantitatively with state-of-the-art CNN model results and related works.
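For readers less familiar with the reported metrics, the hedged sketch below shows how overall accuracy, precision, recall, F1-score, kappa, and a McNemar test between two classifiers could be computed with scikit-learn and NumPy. It uses synthetic labels and predictions and is not the authors' pipeline or data.

```python
# Hedged sketch: classification metrics and a McNemar test on synthetic labels.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 5, size=10_000)  # reference land cover labels (5 classes)
# Two hypothetical models: ~90% and ~85% of pixels keep the true label.
pred_a = np.where(rng.random(10_000) < 0.90, y_true, rng.integers(0, 5, 10_000))
pred_b = np.where(rng.random(10_000) < 0.85, y_true, rng.integers(0, 5, 10_000))

for name, pred in [("model A", pred_a), ("model B", pred_b)]:
    print(name,
          "OA:", accuracy_score(y_true, pred),
          "precision:", precision_score(y_true, pred, average="macro"),
          "recall:", recall_score(y_true, pred, average="macro"),
          "F1:", f1_score(y_true, pred, average="macro"),
          "kappa:", cohen_kappa_score(y_true, pred))

# McNemar test on disagreement counts (with continuity correction):
# b = pixels A classified correctly and B incorrectly, c = the opposite.
a_ok, b_ok = pred_a == y_true, pred_b == y_true
b = int(np.sum(a_ok & ~b_ok))
c = int(np.sum(~a_ok & b_ok))
chi2 = (abs(b - c) - 1) ** 2 / (b + c)
print("McNemar chi-squared:", chi2)  # compare against the chi2(1) critical value, e.g., 3.84
```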

23 pages, 34874 KiB  
Article
Multi-Feature Fusion and Adaptive Kernel Combination for SAR Image Classification
by Xiaoying Wu, Xianbin Wen, Haixia Xu, Liming Yuan and Changlun Guo
Appl. Sci. 2021, 11(4), 1603; https://doi.org/10.3390/app11041603 - 10 Feb 2021
Cited by 2 | Viewed by 1625
Abstract
Synthetic aperture radar (SAR) image classification is an important task in remote sensing applications. However, it is challenging due to the speckle inherent in SAR imaging, which significantly degrades classification performance. To address this issue, a new SAR image classification framework based on multi-feature fusion and adaptive kernel combination is proposed in this paper. Expressing pixel similarity through the non-negative logarithmic likelihood difference, generalized neighborhoods are newly defined. The adaptive kernel combination is designed on them to dynamically explore multi-feature information that is robust to speckle noise. Local consistency optimization is then applied to enhance label spatial smoothness during classification. By simultaneously utilizing adaptive kernel combination and local consistency optimization for the first time, the texture feature information, context information within features, generalized spatial information between features, and complementary information among features are fully integrated to ensure accurate and smooth classification. Compared with several state-of-the-art methods on synthetic and real SAR images, the proposed method demonstrates better performance in terms of visual quality and classification accuracy, as image edges and details are better preserved according to the experimental results.
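As a simplified illustration of kernel combination in general (not the adaptive, generalized-neighborhood scheme proposed in the paper), the sketch below fuses RBF kernels computed on two hypothetical feature groups with fixed weights and feeds the precomputed kernel to an SVM. All features, weights, and class labels are placeholders.

```python
# Hedged sketch: fixed-weight combination of kernels from two feature groups,
# classified with a precomputed-kernel SVM. Synthetic data for illustration only.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_train, n_test = 300, 100

# Two hypothetical per-pixel feature groups, e.g., texture and intensity statistics.
texture_train, texture_test = rng.random((n_train, 8)), rng.random((n_test, 8))
intensity_train, intensity_test = rng.random((n_train, 3)), rng.random((n_test, 3))
y_train = rng.integers(0, 4, n_train)  # 4 land cover classes

w_texture, w_intensity = 0.6, 0.4  # assumed fixed weights (could instead be learned)

K_train = (w_texture * rbf_kernel(texture_train, texture_train, gamma=0.5)
           + w_intensity * rbf_kernel(intensity_train, intensity_train, gamma=0.5))
K_test = (w_texture * rbf_kernel(texture_test, texture_train, gamma=0.5)
          + w_intensity * rbf_kernel(intensity_test, intensity_train, gamma=0.5))

clf = SVC(kernel="precomputed")
clf.fit(K_train, y_train)
pred = clf.predict(K_test)  # predicted class labels for the test pixels
```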

18 pages, 3328 KiB  
Article
A Deep Learning-Based Solution for Large-Scale Extraction of the Secondary Road Network from High-Resolution Aerial Orthoimagery
by Calimanut-Ionut Cira, Ramón Alcarria, Miguel-Ángel Manso-Callejo and Francisco Serradilla
Appl. Sci. 2020, 10(20), 7272; https://doi.org/10.3390/app10207272 - 17 Oct 2020
Cited by 20 | Viewed by 4032
Abstract
Secondary roads represent the largest part of the road network. However, due to the absence of clearly defined edges, the presence of occlusions, and differences in width, monitoring and mapping them represents a great effort for public administrations. We believe that recent advancements in machine vision allow the extraction of these types of roads from high-resolution remotely sensed imagery and can enable the automation of the mapping operation. In this work, we leverage these advances and propose a deep learning-based solution capable of efficiently extracting the surface area of secondary roads at a large scale. The solution is based on hybrid segmentation models trained with high-resolution remote sensing imagery divided into tiles of 256 × 256 pixels and their corresponding segmentation masks, resulting in increases in performance metrics of 2.7–3.5% compared to the original architectures. The best-performing model achieved maximum Intersection over Union and F1 scores of 0.5790 and 0.7120, respectively, with a minimum loss of 0.4985, and was integrated into a web platform that handles the evaluation of large areas, the association of the semantic predictions with geographical coordinates, the conversion of the tiles' format, and the generation of GeoTIFF results compatible with geospatial databases.
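The following sketch illustrates, on assumed synthetic arrays, the two generic operations behind such a pipeline: splitting an orthoimage into 256 × 256 tiles and scoring a binary road mask with Intersection over Union and F1. It is not the authors' model or web-platform code.

```python
# Hedged sketch: tiling an orthoimage and scoring binary road masks (IoU, F1).
import numpy as np

def tile(image: np.ndarray, size: int = 256):
    """Yield non-overlapping size x size tiles from an (H, W, C) array."""
    h, w = image.shape[:2]
    for row in range(0, h - size + 1, size):
        for col in range(0, w - size + 1, size):
            yield image[row:row + size, col:col + size]

def iou_f1(pred: np.ndarray, truth: np.ndarray):
    """IoU and F1 for binary masks (1 = road, 0 = background)."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1

rng = np.random.default_rng(0)
ortho = rng.random((1024, 1024, 3))           # stand-in aerial orthoimage
print("tiles:", sum(1 for _ in tile(ortho)))  # 16 tiles of 256 x 256 pixels

pred_mask = rng.integers(0, 2, (256, 256))    # placeholder model output
true_mask = rng.integers(0, 2, (256, 256))    # placeholder reference mask
print("IoU, F1:", iou_f1(pred_mask, true_mask))
```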
