Special Issue "Techniques and Applications of UAV-Based Photogrammetric 3D Mapping"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 28 February 2022.

Special Issue Editors

Dr. Wanshou Jiang
Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430072, China
Interests: image processing; photogrammetric mapping and remote sensing
Dr. San Jiang
Guest Editor
School of Computer Science, China University of Geosciences, Wuhan 430074, China
Interests: image matching; SfM-based aerial triangulation; 3D reconstruction
Dr. Xiongwu Xiao
Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, Wuhan 430079, China
Interests: remote sensing of UAVs and satellites; photogrammetry and computer vision; image and graphics processing; simultaneous localization and mapping (SLAM); deep learning and artificial intelligence

Special Issue Information

Dear Colleagues,

3D mapping plays a critical role in a wide variety of photogrammetric applications. Over the last decade, unmanned aerial vehicle (UAV) images have become one of the most important remote sensing data sources, owing to the high flexibility of UAV platforms and the wide availability of low-cost cameras. Moreover, the rapid development of recent techniques, e.g., structure from motion (SfM) for offline image orientation, simultaneous localization and mapping (SLAM) for online UAV navigation, and deep learning (DL) embedded in the 3D reconstruction pipeline, has pushed UAV-based 3D mapping toward greater automation and intelligence. Recent years have witnessed the explosive development of UAV-based photogrammetric 3D mapping techniques and their wide application, from traditional surveying and mapping to related fields such as autonomous driving and structure inspection.

This Special Issue focuses on both the techniques of UAV-based 3D mapping, especially trajectory planning for UAV data acquisition in complex environments, recent algorithms for feature matching of aerial-ground images, SfM and SLAM for efficient image orientation, and the use of DL techniques in the 3D mapping pipeline, and its applications, such as crack detection in civil structures, automatic inspection of transmission lines, precision crop management, and archaeological and cultural heritage documentation.

Dr. Wanshou Jiang
Dr. San Jiang
Dr. Xiongwu Xiao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAV
  • trajectory planning
  • photogrammetry
  • aerial triangulation
  • dense image matching
  • 3D mapping
  • structure from motion
  • simultaneous localization and mapping
  • deep learning

Published Papers (5 papers)


Research


Article
Semantic Segmentation of 3D Point Cloud Based on Spatial Eight-Quadrant Kernel Convolution
Remote Sens. 2021, 13(16), 3140; https://doi.org/10.3390/rs13163140 - 8 Aug 2021
Abstract
To address the problem that existing semantic segmentation networks for 3D point clouds generally perform poorly on small objects, a Spatial Eight-Quadrant Kernel Convolution (SEQKC) algorithm is proposed to enhance the network's ability to extract fine-grained features from 3D point clouds, thereby improving the semantic segmentation accuracy of small objects in indoor scenes. Specifically, in the spherical space of a point cloud neighborhood, a kernel point with attached weights is constructed in each octant; the distances between each kernel point and the points in its neighborhood are calculated, and these distances and the kernel points' weights are used together to weight the point cloud features in the neighborhood space. In this way, the relationships between points are modeled, so that local fine-grained features of the point cloud can be extracted by the SEQKC. Based on the SEQKC, we design a downsampling module for point clouds and embed it into classical semantic segmentation networks (PointNet++, PointSIFT, and PointConv). Experimental results on the benchmark dataset ScanNet V2 show that the SEQKC-based PointNet++, PointSIFT, and PointConv outperform the original networks by about 1.35-2.12% in terms of mIoU and effectively improve segmentation of small objects in indoor scenes; e.g., the segmentation accuracy of the small object "picture" is improved from 0.70% with PointNet++ to 10.37% with SEQKC-PointNet++.
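The octant-based weighting described in the abstract can be sketched in a few lines of NumPy. This is an illustrative reading of the idea, not the authors' implementation: the placement of the per-octant kernel points and the exponential distance decay are assumptions.

```python
import numpy as np

def octant_weighted_features(center, neighbors, feats, kernel_weights, radius=1.0):
    """Weight neighborhood features by distance to a per-octant kernel point.

    Minimal sketch of the eight-quadrant idea: the sphere around `center`
    is split into 8 octants, each octant holds one kernel point (here the
    octant center at radius/2, a hypothetical choice), and each neighbor's
    features are weighted by its distance to that kernel point and by the
    kernel's learnable weight.
    """
    rel = neighbors - center  # (N, 3) offsets from the center point
    # octant index 0..7 from the sign pattern of (x, y, z)
    octant = ((rel[:, 0] > 0).astype(int) * 4
              + (rel[:, 1] > 0).astype(int) * 2
              + (rel[:, 2] > 0).astype(int))
    # hypothetical kernel point per octant: octant center at radius/2
    signs = np.array([[2 * b2 - 1, 2 * b1 - 1, 2 * b0 - 1]
                      for b2 in (0, 1) for b1 in (0, 1) for b0 in (0, 1)],
                     dtype=float)
    kernels = signs * (radius / 2) / np.sqrt(3)  # (8, 3)
    d = np.linalg.norm(rel - kernels[octant], axis=1)  # (N,) distances
    w = kernel_weights[octant] * np.exp(-d)            # assumed distance decay
    return (w[:, None] * feats).sum(axis=0)            # aggregated feature
```

A downsampling module along these lines would apply this aggregation at each sampled center before passing the features to the backbone network.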
(This article belongs to the Special Issue Techniques and Applications of UAV-Based Photogrammetric 3D Mapping)

Article
Building Multi-Feature Fusion Refined Network for Building Extraction from High-Resolution Remote Sensing Images
Remote Sens. 2021, 13(14), 2794; https://doi.org/10.3390/rs13142794 - 16 Jul 2021
Abstract
Deep learning approaches have been widely used in automatic building extraction and have made great progress in recent years. However, missed detections and false detections caused by spectral confusion remain a great challenge. Existing fully convolutional networks (FCNs) cannot effectively distinguish whether feature differences arise within one building or between a building and its adjacent non-building objects. To overcome these limitations, this paper presents a building multi-feature fusion refined network (BMFR-Net) that extracts buildings accurately and completely. BMFR-Net is based on an encoder-decoder structure and consists mainly of two parts: the continuous atrous convolution pyramid (CACP) module and the multiscale output fusion constraint (MOFC) structure. The CACP module, positioned at the end of the contracting path, minimizes the loss of effective information during multiscale feature extraction and fusion by using parallel continuous small-rate atrous convolutions. To improve the aggregation of semantic context, the MOFC structure produces a predictive output at each stage of the expanding path and integrates the results into the network. Furthermore, a multilevel joint weighted loss function effectively updates parameters far from the output layer, enhancing the network's ability to learn low-level abstract features. Experimental results demonstrate that the proposed BMFR-Net outperforms five other state-of-the-art approaches in both visual interpretation and quantitative evaluation.
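The CACP idea of running parallel atrous convolutions at small rates and fusing the results can be illustrated with a single-channel NumPy sketch. The dilation rates, the summation fusion, and the function names below are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def dilated_conv2d(x, k, dilation):
    """Single-channel 2D atrous (dilated) cross-correlation, 'same' padding."""
    kh, kw = k.shape
    ph, pw = (kh - 1) * dilation // 2, (kw - 1) * dilation // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):          # accumulate each kernel tap, spaced by the
        for j in range(kw):      # dilation rate, over the padded input
            out += k[i, j] * xp[i * dilation:i * dilation + x.shape[0],
                                j * dilation:j * dilation + x.shape[1]]
    return out

def cacp_block(x, kernels, dilations=(1, 2, 3)):
    """Toy CACP: parallel small-rate atrous convolutions fused by summation."""
    return sum(dilated_conv2d(x, k, d) for k, d in zip(kernels, dilations))
```

Because the effective receptive field of a 3x3 kernel grows to (2*dilation + 1) per side without extra parameters, stacking several small rates in parallel captures multiscale context cheaply, which is the motivation the abstract attributes to the CACP module.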

Article
Progressive Structure from Motion by Iteratively Prioritizing and Refining Match Pairs
Remote Sens. 2021, 13(12), 2340; https://doi.org/10.3390/rs13122340 - 15 Jun 2021
Abstract
Structure from motion (SfM) has been treated as a mature technique for image orientation and 3D reconstruction. However, obtaining correct reconstruction results from image sets containing problematic match pairs remains an ongoing challenge. This paper investigates two types of problematic match pairs, stemming from repetitive structures and very short baselines. We build a weighted view-graph based on all potential match pairs and propose a progressive SfM method (PRMP-PSfM) that iteratively prioritizes and refines its match pairs (or edges). The method has two main steps: initialization and expansion. Initialization produces a reliable seed reconstruction: we prioritize a subset of match pairs as the union of multiple independent minimum spanning trees and refine them by cycle consistency inference (CCI), which identifies incorrect edges by analyzing geometric consistency over cycles of the view-graph. The seed reconstruction is then progressively expanded by iteratively adding new minimum spanning trees and refining the corresponding match pairs; expansion terminates when a specified completeness of the block is achieved. Evaluations on several public datasets demonstrate that PRMP-PSfM successfully accomplishes image orientation for datasets with repetitive structures and very short baselines, and achieves better or similar reconstruction accuracy compared to several state-of-the-art incremental and hierarchical SfM methods.
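The cycle consistency idea can be sketched for the simplest case, 3-cycles of relative rotations: chaining the relative rotations around any triangle of views should compose to the identity. The snippet below is a toy version under stated assumptions (the `rel_rot` layout, the angle threshold, and the "bad in all its cycles" voting rule are illustrative, not the paper's CCI):

```python
import itertools
import numpy as np

def inconsistent_edges(rel_rot, angle_thresh_deg=2.0):
    """Flag match pairs whose relative rotations violate 3-cycle consistency.

    `rel_rot[(i, j)]` is the 3x3 relative rotation from view i to view j,
    keyed with i < j. An edge is flagged when every triangle it belongs to
    has a residual rotation larger than the threshold.
    """
    nodes = sorted({n for e in rel_rot for n in e})

    def R(i, j):  # look up either orientation of an edge
        return rel_rot[(i, j)] if (i, j) in rel_rot else rel_rot[(j, i)].T

    bad_votes, seen = {}, {}
    for i, j, k in itertools.combinations(nodes, 3):
        tri = [(i, j), (j, k), (i, k)]
        if not all(e in rel_rot or e[::-1] in rel_rot for e in tri):
            continue
        err = R(i, j) @ R(j, k) @ R(k, i)  # should be the identity
        # rotation angle of the residual: acos((trace - 1) / 2)
        ang = np.degrees(np.arccos(np.clip((np.trace(err) - 1) / 2, -1, 1)))
        for e in tri:
            key = e if e in rel_rot else e[::-1]
            seen[key] = seen.get(key, 0) + 1
            if ang > angle_thresh_deg:
                bad_votes[key] = bad_votes.get(key, 0) + 1
    return {e for e in seen if bad_votes.get(e, 0) == seen[e]}
```

With enough redundant cycles, a single corrupted edge (e.g., from a repetitive facade matched to the wrong instance) is bad in all of its triangles, while its correct neighbors are exonerated by the cycles that avoid it.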

Article
Reconstruction of Complex Roof Semantic Structures from 3D Point Clouds Using Local Convexity and Consistency
Remote Sens. 2021, 13(10), 1946; https://doi.org/10.3390/rs13101946 - 17 May 2021
Abstract
Three-dimensional (3D) building models are closely related to human activities in urban environments. Due to variations in building styles and the complexity of roof structures, automatically reconstructing 3D buildings with semantic and topological information still faces big challenges. In this paper, we present an automated modeling approach that semantically decomposes and reconstructs complex building light detection and ranging (LiDAR) point clouds into simple parametric structures, where each generated structure is an unambiguous roof semantic unit without overlapping planar primitives. The proposed method starts by extracting roof planes using a multi-label energy minimization solution, followed by constructing a roof connection graph associated with proximity, similarity, and consistency attributes. A progressive decomposition and reconstruction algorithm then generates explicit semantic subparts and a hierarchical representation of an isolated building. The proposed approach is evaluated on two different datasets and compared with state-of-the-art reconstruction techniques. The experimental results, including an assessment on the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark LiDAR datasets, demonstrate that the proposed method can efficiently decompose complex building models into interpretable semantic structures.
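The roof connection graph mentioned in the abstract can be sketched minimally: nodes are fitted roof planes, and edges carry proximity and similarity attributes. The plane representation, thresholds, and attribute names below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def roof_connection_graph(planes, prox_thresh=1.0, sim_thresh=0.95):
    """Build a toy roof connection graph from fitted plane segments.

    `planes` is a list of (centroid, unit_normal) pairs. Two planes are
    connected when their centroids are close; each edge records a proximity
    attribute (centroid distance) and a similarity attribute (absolute
    normal dot product), the kinds of cues a decomposition step can use.
    """
    edges = {}
    for i in range(len(planes)):
        for j in range(i + 1, len(planes)):
            ci, ni = planes[i]
            cj, nj = planes[j]
            prox = float(np.linalg.norm(np.asarray(ci) - np.asarray(cj)))
            sim = float(abs(np.dot(ni, nj)))
            if prox < prox_thresh:
                edges[(i, j)] = {"proximity": prox, "similarity": sim,
                                 "coplanar": sim > sim_thresh}
    return edges
```

A real pipeline would derive proximity from shared boundary points rather than centroid distance, but the graph-with-attributes structure is the same.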

Review


Review
Review of Wide-Baseline Stereo Image Matching Based on Deep Learning
Remote Sens. 2021, 13(16), 3247; https://doi.org/10.3390/rs13163247 - 17 Aug 2021
Abstract
Strong geometric and radiometric distortions often exist in optical wide-baseline stereo images, and some local regions can include surface discontinuities and occlusions. Digital photogrammetry and computer vision researchers have long pursued automatic matching for such images. Deep convolutional neural networks, which can express high-level features and their correlations, have received increasing attention for wide-baseline image matching, and learning-based methods have the potential to surpass methods based on handcrafted features. We therefore review this rapidly evolving field, covering the main approaches to learning-based feature detection, description, and end-to-end image matching, and summarize the current representative research through stepwise inspection and dissection. We present comprehensive experiments on real wide-baseline stereo images to contrast and discuss the advantages and disadvantages of several state-of-the-art deep learning algorithms. Finally, we summarize the state-of-the-art methods and forecast development trends and unresolved challenges, providing a guide for future work.
