

Techniques and Applications of UAV-Based Photogrammetric 3D Mapping

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 43089

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430072, China
Interests: image registering; image classification; change detection; 3D reconstruction
Special Issues, Collections and Topics in MDPI journals

Guest Editor
1. School of Computer Sciences, China University of Geosciences, Wuhan 430074, China
2. Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong 999077, China
Interests: image retrieval; image matching; structure from motion; multi-view stereo; deep learning
Special Issues, Collections and Topics in MDPI journals

Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430072, China
Interests: SLAM and real-time photogrammetry; multi-source data fusion; 3D reconstruction; building extraction and intelligent 3D mapping
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

3D mapping plays a critical role in various photogrammetric applications. In the last decade, unmanned aerial vehicle (UAV) images have become one of the most important remote sensing data sources due to the high flexibility of UAV platforms and the extensive use of low-cost cameras. In addition, the rapid development of recent techniques, e.g., SfM (Structure from Motion) for offline image orientation, SLAM (Simultaneous Localization and Mapping) for online UAV navigation, and deep learning (DL)-embedded 3D reconstruction pipelines, has pushed UAV-based 3D mapping toward automation and intelligence. Recent years have witnessed the explosive development of UAV-based photogrammetric 3D mapping techniques and their wide application, from traditional surveying and mapping to other related fields, e.g., autonomous driving and structure inspection.

This Special Issue focuses both on techniques for UAV-based 3D mapping, especially trajectory planning for UAV data acquisition in complex environments, recent algorithms for feature matching of aerial-ground images, SfM and SLAM for efficient image orientation, and the use of DL techniques in the 3D mapping pipeline, and on applications of UAV-based 3D mapping, such as crack detection in civil structures, automatic inspection of transmission lines, precision crop management, and archaeological and cultural heritage documentation.

Dr. Wanshou Jiang
Dr. San Jiang
Dr. Xiongwu Xiao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAV
  • trajectory planning
  • photogrammetry
  • aerial triangulation
  • dense image matching
  • 3D mapping
  • structure from motion
  • simultaneous localization and mapping
  • deep learning

Published Papers (14 papers)


Editorial


4 pages, 170 KiB  
Editorial
Editorial on Special Issue “Techniques and Applications of UAV-Based Photogrammetric 3D Mapping”
by Wanshou Jiang, San Jiang and Xiongwu Xiao
Remote Sens. 2022, 14(15), 3804; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14153804 - 07 Aug 2022
Viewed by 1097
Abstract
Recently, 3D mapping has begun to play an increasingly important role in photogrammetric applications [...] Full article
(This article belongs to the Special Issue Techniques and Applications of UAV-Based Photogrammetric 3D Mapping)

Research


19 pages, 10027 KiB  
Article
An Accurate Digital Subsidence Model for Deformation Detection of Coal Mining Areas Using a UAV-Based LiDAR
by Junliang Zheng, Wanqiang Yao, Xiaohu Lin, Bolin Ma and Lingxiao Bai
Remote Sens. 2022, 14(2), 421; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14020421 - 17 Jan 2022
Cited by 13 | Viewed by 2732
Abstract
Coal mine surface subsidence detection determines the damage degree of coal mining, which is of great importance for the mitigation of hazards and property loss. Currently, there are many methods used to detect deformations in coal mining areas. However, with most of them, the accuracy is difficult to guarantee in mountainous areas, especially for shallow seam mining, which is characterized by active, rapid, and high-intensity surface subsidence. In response to these problems, we constructed a digital subsidence model (DSuM) for deformation detection in coal mining areas based on airborne light detection and ranging (LiDAR). First, the entire point cloud of the study area was obtained by coarse-to-fine registration. Second, noise points were removed by multi-scale morphological filtering, and the progressive triangulation filtering classification (PTFC) algorithm was used to obtain the ground point cloud. Third, a DEM was generated from the clean ground point cloud, and an accurate DSuM was obtained through multiple periods of DEM difference calculations. Then, data mining was conducted on the DSuM to obtain parameters such as the maximum surface subsidence value, a subsidence contour map, the subsidence area, and the subsidence boundary angle. Finally, the accuracy of the DSuM was analyzed through a comparison with ground checkpoints (GCPs). The results show that the proposed method can achieve centimeter-level accuracy, which makes the data a good reference for mining safety considerations and subsequent restoration of the ecological environment. Full article
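The core DSuM step, differencing DEMs from two survey epochs, can be sketched as follows; the array names, toy values, and detection threshold are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def subsidence_model(dem_before, dem_after, cell_size, threshold=0.05):
    """Difference two co-registered DEM rasters to obtain a subsidence model.

    Illustrative sketch of the DEM-differencing step described in the
    abstract; the 5 cm detection threshold is an assumed value.
    """
    dsum = dem_before - dem_after          # positive values = surface lowering
    max_subsidence = float(dsum.max())     # maximum surface subsidence value
    # area of cells whose subsidence exceeds the detection threshold
    area = float(np.count_nonzero(dsum > threshold)) * cell_size ** 2
    return dsum, max_subsidence, area

# toy 3x3 DEMs with 1 m cells and 30 cm of subsidence at the centre cell
before = np.full((3, 3), 10.0)
after = before.copy()
after[1, 1] -= 0.30
dsum, max_sub, area = subsidence_model(before, after, cell_size=1.0)
```

In practice, both rasters would come from LiDAR-derived ground points, and the difference map would feed the contour and boundary-angle analysis the abstract mentions.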

20 pages, 13125 KiB  
Article
Automatic, Multiview, Coplanar Extraction for CityGML Building Model Texture Mapping
by Haiqing He, Jing Yu, Penggen Cheng, Yuqian Wang, Yufeng Zhu, Taiqing Lin and Guoqiang Dai
Remote Sens. 2022, 14(1), 50; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14010050 - 23 Dec 2021
Cited by 6 | Viewed by 3250
Abstract
Most 3D CityGML building models in street-view maps (e.g., Google, Baidu) lack texture information, which is generally used to reconstruct real-scene 3D models by photogrammetric techniques, such as unmanned aerial vehicle (UAV) mapping. However, due to its simplified building models and inaccurate location information, the commonly used photogrammetric method using a single data source cannot satisfy the requirements of texture mapping for CityGML building models. Furthermore, a single data source usually suffers from several problems, such as object occlusion. To alleviate these problems, we propose a novel approach that achieves CityGML building model texture mapping through multiview coplanar extraction from UAV remote sensing or terrestrial images. We utilize a deep convolutional neural network to filter out object occlusions (e.g., pedestrians, vehicles, and trees) and obtain the building-texture distribution. Point-line-based features are extracted to characterize multiview coplanar textures in 2D space under the constraint of a homography matrix, and geometric topology analysis is subsequently conducted to optimize the boundaries of textures using a strategy that combines Hough-transform and iterative least-squares methods. Experimental results show that the proposed approach enables texture mapping of building façades from 2D terrestrial images without exterior orientation information; that is, unlike the photogrammetric method, a collinearity equation is not essential to capture texture information. In addition, the proposed approach can significantly eliminate blurred and distorted textures of building models, so it is suitable for automatic and rapid texture updates. Full article

20 pages, 82780 KiB  
Article
DP-MVS: Detail Preserving Multi-View Surface Reconstruction of Large-Scale Scenes
by Liyang Zhou, Zhuang Zhang, Hanqing Jiang, Han Sun, Hujun Bao and Guofeng Zhang
Remote Sens. 2021, 13(22), 4569; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13224569 - 13 Nov 2021
Cited by 11 | Viewed by 4532
Abstract
This paper presents an accurate and robust dense 3D reconstruction system for detail-preserving surface modeling of large-scale scenes from multi-view images, which we name DP-MVS. Our system performs high-quality large-scale dense reconstruction that preserves geometric details for thin structures, especially linear objects. Our framework begins with a sparse reconstruction carried out by incremental Structure-from-Motion. Based on the reconstructed sparse map, a novel detail-preserving PatchMatch approach is applied for depth estimation of each image view. The estimated depth maps of multiple views are then fused into a dense point cloud in a memory-efficient way, followed by a detail-aware surface meshing method to extract the final surface mesh of the captured scene. Experiments on the ETH3D benchmark show that the proposed method outperforms other state-of-the-art methods in terms of F1-score, while running more than four times faster. Further experiments on large-scale photo collections demonstrate the effectiveness of the proposed framework for large-scale scene reconstruction in terms of accuracy, completeness, memory saving, and time efficiency. Full article

20 pages, 14611 KiB  
Article
Developing a Method to Extract Building 3D Information from GF-7 Data
by Jingyuan Wang, Xinli Hu, Qingyan Meng, Linlin Zhang, Chengyi Wang, Xiangchen Liu and Maofan Zhao
Remote Sens. 2021, 13(22), 4532; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13224532 - 11 Nov 2021
Cited by 21 | Viewed by 3017
Abstract
The three-dimensional (3D) information of buildings can describe the horizontal and vertical development of a city. The GaoFen-7 (GF-7) stereo-mapping satellite can provide multi-view and multi-spectral satellite images that clearly describe the fine spatial details within urban areas, yet the feasibility of extracting building 3D information from GF-7 images remains understudied. This article establishes an automated method for extracting building footprints and height information from GF-7 satellite imagery. First, we propose a multi-stage attention U-Net (MSAU-Net) architecture for building footprint extraction from multi-spectral images. Then, we generate a point cloud from the multi-view images and construct a normalized digital surface model (nDSM) to represent the height of off-terrain objects. Finally, the building height is extracted from the nDSM and combined with the building footprint results to obtain building 3D information. We select Beijing as the study area to test the proposed method. To verify the building extraction ability of MSAU-Net, we choose a self-annotated GF-7 building dataset and a public dataset (the WuHan University (WHU) Building Dataset) for model testing, and the accuracy is evaluated in detail through comparison with other models. The results are summarized as follows: (1) In terms of building footprint extraction, our method achieves intersection-over-union values of 89.31% and 80.27% for the WHU dataset and the self-annotated GF-7 dataset, respectively, higher than the results of other models. (2) The root mean square error between the extracted building height and the reference building height is 5.41 m, and the mean absolute error is 3.39 m. In summary, our method is useful for accurate and automatic 3D building information extraction from GF-7 satellite images and has good application potential. Full article
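The nDSM step can be sketched as follows: subtract the terrain model from the surface model and read off the building height inside the footprint. The toy rasters and the choice of an upper percentile as the representative roof height are assumptions:

```python
import numpy as np

def building_height_from_ndsm(dsm, dtm, footprint_mask, percentile=90):
    """Estimate one building's height from an nDSM inside a footprint mask.

    Sketch of the nDSM idea in the abstract (nDSM = DSM - DTM); taking a
    percentile of the in-footprint heights is an assumed heuristic.
    """
    ndsm = dsm - dtm                       # heights of off-terrain objects
    roof = ndsm[footprint_mask]            # nDSM cells inside the footprint
    return float(np.percentile(roof, percentile))

# toy example: flat terrain at 50 m with a 12 m building in the mask
dtm = np.full((4, 4), 50.0)
dsm = dtm.copy()
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
dsm[mask] += 12.0                          # roof sits 12 m above the terrain
height = building_height_from_ndsm(dsm, dtm, mask)
```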

24 pages, 8211 KiB  
Article
Camera Self-Calibration with GNSS Constrained Bundle Adjustment for Weakly Structured Long Corridor UAV Images
by Wei Huang, San Jiang and Wanshou Jiang
Remote Sens. 2021, 13(21), 4222; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13214222 - 21 Oct 2021
Cited by 7 | Viewed by 2612
Abstract
Camera self-calibration determines the precision and robustness of AT (aerial triangulation) for UAV (unmanned aerial vehicle) images. The UAV images collected from long transmission line corridors are critical configurations, which may lead to the “bowl effect” with camera self-calibration. To solve such problems, traditional methods rely on more than three GCPs (ground control points), while this study designs a new self-calibration method with only one GCP. First, existing camera distortion models are grouped into two categories, i.e., physical and mathematical models, and their mathematical formulas are exploited in detail. Second, within an incremental SfM (Structure from Motion) framework, a camera self-calibration method is designed, which combines the strategies for initializing camera distortion parameters and fusing high-precision GNSS (Global Navigation Satellite System) observations. The former is achieved by using an iterative optimization algorithm that progressively optimizes camera parameters; the latter is implemented through inequality constrained BA (bundle adjustment). Finally, by using four UAV datasets collected from two sites with two data acquisition modes, the proposed algorithm is comprehensively analyzed and verified, and the experimental results demonstrate that the proposed method can dramatically alleviate the “bowl effect” of self-calibration for weakly structured long corridor UAV images, and the horizontal and vertical accuracy can reach 0.04 m and 0.05 m, respectively, when using one GCP. In addition, compared with open-source and commercial software, the proposed method achieves competitive or better performance. Full article
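The abstract groups distortion models into physical and mathematical families. As one concrete example of the physical family, the widely used Brown radial-tangential model is sketched below; this is not necessarily the parameterization adopted in the paper, and the coefficient values in the usage are made up:

```python
def brown_distort(x, y, k1, k2, p1, p2):
    """Apply Brown radial (k1, k2) and tangential (p1, p2) distortion to a
    normalized image point (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# illustrative coefficients: mild barrel distortion, no tangential term
xd, yd = brown_distort(0.5, 0.0, k1=0.1, k2=0.0, p1=0.0, p2=0.0)
```

In a self-calibration bundle adjustment, these coefficients would be estimated jointly with the poses and 3D points rather than fixed in advance.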

24 pages, 13472 KiB  
Article
Accelerated Multi-View Stereo for 3D Reconstruction of Transmission Corridor with Fine-Scale Power Line
by Wei Huang, San Jiang, Sheng He and Wanshou Jiang
Remote Sens. 2021, 13(20), 4097; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13204097 - 13 Oct 2021
Cited by 7 | Viewed by 2136
Abstract
Fast reconstruction of power lines and corridors is a critical task in UAV (unmanned aerial vehicle)-based inspection of high-voltage transmission corridors. However, recent dense matching algorithms suffer from low efficiency when processing large-scale high-resolution UAV images. This study proposes an efficient dense matching method for the 3D reconstruction of high-voltage transmission corridors with fine-scale power lines. First, an efficient random red-black checkerboard propagation is proposed, which utilizes the neighboring pixels with the most similar color to propagate plane parameters; to combine the pixel-wise view selection strategy adopted in Colmap with this propagation scheme, the updating schedule for inferring visibility probability is improved. Second, strategies for decreasing the number of matching cost computations are proposed, which reduce unnecessary hypothesis verification: the number of neighboring pixels used to propagate plane parameters is reduced as iterations increase, and the number of depth-normal combinations is reduced for pixels with better matching costs in the plane refinement step. Third, an efficient GPU (graphics processing unit)-based depth map fusion method is proposed, which employs a weight function based on reprojection errors to fuse the depth maps. Finally, experiments are conducted on three UAV datasets, and the results indicate that the proposed method maintains the completeness of power line reconstruction with high efficiency compared to other PatchMatch-based methods. In addition, two benchmark datasets are used to verify that the proposed method achieves a better F1 score while running 4–7 times faster than Colmap. Full article
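The fusion step weights depth candidates by reprojection error. A minimal single-pixel sketch, assuming a Gaussian weight exp(-(e/sigma)^2); the paper's exact weight function is not reproduced here:

```python
import math

def fuse_depths(candidates, sigma=1.0):
    """Fuse consistent depth candidates for one pixel, each given as
    (depth, reprojection_error), into a single weighted-average depth."""
    num = den = 0.0
    for depth, reproj_err in candidates:
        w = math.exp(-(reproj_err / sigma) ** 2)  # small error -> large weight
        num += w * depth
        den += w
    return num / den

# two views agree perfectly -> plain average; a high-error view is discounted
balanced = fuse_depths([(10.0, 0.0), (12.0, 0.0)])
skewed = fuse_depths([(10.0, 0.0), (20.0, 2.0)])
```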

21 pages, 1294 KiB  
Article
URNet: A U-Shaped Residual Network for Lightweight Image Super-Resolution
by Yuntao Wang, Lin Zhao, Liman Liu, Huaifei Hu and Wenbing Tao
Remote Sens. 2021, 13(19), 3848; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13193848 - 26 Sep 2021
Cited by 12 | Viewed by 2828
Abstract
It is extremely important for low-computing-power or portable devices to have more lightweight algorithms for image super-resolution (SR). Recently, most SR methods have achieved outstanding performance at the expense of computational cost and memory storage, or vice versa. To address this problem, we introduce a lightweight U-shaped residual network (URNet) for fast and accurate image SR. Specifically, we propose a more effective feature distillation pyramid residual group (FDPRG) to extract features from low-resolution images. The FDPRG can effectively reuse the learned features with dense shortcuts and capture multi-scale information with a cascaded feature pyramid block. Based on the U-shaped structure, we utilize a step-by-step fusion strategy to improve the fusion of features from different blocks. This strategy differs from general SR methods, which use only a single Concat operation to fuse the features of all basic blocks. Moreover, a lightweight asymmetric residual non-local block is proposed to model global context information and further improve SR performance. Finally, a high-frequency loss function is designed to alleviate the smoothing of image details caused by pixel-wise loss. The proposed modules and high-frequency loss function can also be easily plugged into multiple mature architectures to improve SR performance. Extensive experiments on multiple natural image datasets and remote sensing image datasets show that URNet achieves a better trade-off between image SR performance and model complexity than other state-of-the-art SR methods. Full article

15 pages, 4737 KiB  
Article
Automatic Reconstruction of Building Façade Model from Photogrammetric Mesh Model
by Yunsheng Zhang, Chi Zhang, Siyang Chen and Xueye Chen
Remote Sens. 2021, 13(19), 3801; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13193801 - 22 Sep 2021
Cited by 7 | Viewed by 2207
Abstract
Three-dimensional (3D) building façade model reconstruction is of great significance in urban applications and real-world visualization. This paper presents a newly developed method for automatically generating a 3D regular building façade model from the photogrammetric mesh model. To this end, the contour is tracked on irregular triangulation, and then the local contour tree method based on the topological relationship is employed to represent the topological structure of the photogrammetric mesh model. Subsequently, the segmented contour groups are found by analyzing the topological relationship of the contours, and the original mesh model is divided into various components from bottom to top through the iteration process. After that, each component is iteratively and robustly abstracted into cuboids. Finally, the parameters of each cuboid are adjusted to be close to the original mesh model, and a lightweight polygonal mesh model is taken from the adjusted cuboid. Typical buildings and a whole scene of photogrammetric mesh models are exploited to assess the proposed method quantitatively and qualitatively. The obtained results reveal that the proposed method can derive a regular façade model from a photogrammetric mesh model with a certain accuracy. Full article

16 pages, 1423 KiB  
Article
Semantic Segmentation of 3D Point Cloud Based on Spatial Eight-Quadrant Kernel Convolution
by Liman Liu, Jinjin Yu, Longyu Tan, Wanjuan Su, Lin Zhao and Wenbing Tao
Remote Sens. 2021, 13(16), 3140; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163140 - 08 Aug 2021
Cited by 3 | Viewed by 1957
Abstract
In order to deal with the problem that some existing semantic segmentation networks for 3D point clouds generally perform poorly on small objects, a Spatial Eight-Quadrant Kernel Convolution (SEQKC) algorithm is proposed to enhance the network's ability to extract fine-grained features from 3D point clouds. As a result, the semantic segmentation accuracy of small objects in indoor scenes can be improved. Specifically, in the spherical space of a point cloud neighborhood, a kernel point with attached weights is constructed in each octant, the distances between the kernel point and the points in its neighborhood are calculated, and the distances and the kernel points' weights are used together to weight the point cloud features in the neighborhood space. In this way, the relationships between points are modeled, so that local fine-grained features of the point clouds can be extracted by the SEQKC. Based on the SEQKC, we design a downsampling module for point clouds and embed it into classical semantic segmentation networks (PointNet++, PointSIFT and PointConv) for semantic segmentation. Experimental results on the benchmark dataset ScanNet V2 show that SEQKC-based PointNet++, PointSIFT and PointConv outperform the original networks by about 1.35–2.12% in terms of mIoU, and they effectively improve the semantic segmentation performance of the networks for small objects in indoor scenes; e.g., the segmentation accuracy of the small object “picture” is improved from 0.70% with PointNet++ to 10.37% with SEQKC-PointNet++. Full article
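As a rough illustration of the eight-quadrant idea (not the paper's network code), the sketch below assigns each neighbor to an octant, then weights its feature by that octant's kernel weight and a distance term; the linear distance falloff is an assumption:

```python
import math

def octant_index(dx, dy, dz):
    """Map a neighbor offset to one of the eight spatial quadrants (octants)."""
    return (dx >= 0) * 4 + (dy >= 0) * 2 + (dz >= 0)

def seqkc_weighted_sum(center, neighbors, kernel_weights, radius=1.0):
    """Aggregate scalar neighbor features inside a spherical neighborhood,
    weighted by per-octant kernel weights and distance to the center.

    neighbors: list of ((x, y, z), feature); kernel_weights: 8 floats.
    """
    total = 0.0
    cx, cy, cz = center
    for (x, y, z), feat in neighbors:
        dx, dy, dz = x - cx, y - cy, z - cz
        d = math.sqrt(dx * dx + dy * dy + dz * dz)
        if d > radius:
            continue                       # outside the spherical neighborhood
        k = kernel_weights[octant_index(dx, dy, dz)]
        total += k * max(0.0, 1.0 - d / radius) * feat
    return total

kernel = [1.0] * 8                         # illustrative uniform kernel weights
inside = seqkc_weighted_sum((0.0, 0.0, 0.0), [((0.5, 0.0, 0.0), 2.0)], kernel)
outside = seqkc_weighted_sum((0.0, 0.0, 0.0), [((2.0, 0.0, 0.0), 5.0)], kernel)
```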

24 pages, 14902 KiB  
Article
Building Multi-Feature Fusion Refined Network for Building Extraction from High-Resolution Remote Sensing Images
by Shuhao Ran, Xianjun Gao, Yuanwei Yang, Shaohua Li, Guangbin Zhang and Ping Wang
Remote Sens. 2021, 13(14), 2794; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13142794 - 16 Jul 2021
Cited by 19 | Viewed by 3475
Abstract
Deep learning approaches have been widely used in automatic building extraction tasks and have made great progress in recent years. However, missed detections and false detections caused by spectral confusion are still a great challenge. Existing fully convolutional networks (FCNs) cannot effectively distinguish whether feature differences come from within one building or between a building and its adjacent non-building objects. To overcome these limitations, a building multi-feature fusion refined network (BMFR-Net) is presented in this paper to extract buildings accurately and completely. BMFR-Net is based on an encoding and decoding structure, mainly consisting of two parts: the continuous atrous convolution pyramid (CACP) module and the multiscale output fusion constraint (MOFC) structure. The CACP module is positioned at the end of the contracting path, and it effectively minimizes the loss of effective information in multiscale feature extraction and fusion by using parallel continuous small-scale atrous convolution. To improve the ability to aggregate semantic information from the context, the MOFC structure performs predictive output at each stage of the expanding path and integrates the results into the network. Furthermore, the multilevel joint weighted loss function effectively updates parameters far from the output layer, enhancing the network's capacity to learn low-level abstract features. The experimental results demonstrate that the proposed BMFR-Net outperforms the other five state-of-the-art approaches in both visual interpretation and quantitative evaluation. Full article

23 pages, 19006 KiB  
Article
Progressive Structure from Motion by Iteratively Prioritizing and Refining Match Pairs
by Teng Xiao, Qingsong Yan, Weile Ma and Fei Deng
Remote Sens. 2021, 13(12), 2340; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13122340 - 15 Jun 2021
Cited by 3 | Viewed by 2476
Abstract
Structure from motion (SfM) is regarded as a mature technique for image orientation and 3D reconstruction. However, obtaining correct reconstruction results from image sets containing problematic match pairs remains an ongoing challenge. This paper investigates two types of problematic match pairs, stemming from repetitive structures and very short baselines. We build a weighted view-graph based on all potential match pairs and propose a progressive SfM method (PRMP-PSfM) that iteratively prioritizes and refines its match pairs (or edges). The method has two main steps: initialization and expansion. Initialization is designed for reliable seed reconstruction: we prioritize a subset of match pairs using the union of multiple independent minimum spanning trees and refine them by cycle consistency inference (CCI), which infers incorrect edges by analyzing the geometric consistency over cycles of the view-graph. The seed reconstruction is progressively expanded by iteratively adding new minimum spanning trees and refining the corresponding match pairs, and the expansion terminates when a certain completeness of the block is achieved. Evaluations on several public datasets demonstrate that PRMP-PSfM can successfully accomplish the image orientation task for datasets with repetitive structures and very short baselines and can obtain better or similar reconstruction accuracy compared to several state-of-the-art incremental and hierarchical SfM methods. Full article
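The prioritization step builds on minimum spanning trees of the weighted view-graph. A minimal sketch of selecting one such tree of match pairs with Kruskal's algorithm and union-find; the edge weights below are illustrative match costs (lower = more reliable pair), not the paper's actual weighting:

```python
def spanning_tree_edges(num_views, weighted_edges):
    """Select a minimum spanning tree of match pairs from a weighted
    view-graph. weighted_edges: list of (weight, view_a, view_b)."""
    parent = list(range(num_views))        # union-find forest over views

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    tree = []
    for w, a, b in sorted(weighted_edges): # cheapest (most reliable) first
        ra, rb = find(a), find(b)
        if ra != rb:                       # joins two components: keep the pair
            parent[ra] = rb
            tree.append((a, b))
    return tree

# three views: the costly direct pair (0, 2) is redundant and gets skipped
tree = spanning_tree_edges(3, [(1.0, 0, 1), (2.0, 1, 2), (3.0, 0, 2)])
```

In the method described above, several independent trees would be extracted and their union refined by cycle consistency inference before reconstruction.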

25 pages, 8750 KiB  
Article
Reconstruction of Complex Roof Semantic Structures from 3D Point Clouds Using Local Convexity and Consistency
by Pingbo Hu, Yiming Miao and Miaole Hou
Remote Sens. 2021, 13(10), 1946; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13101946 - 17 May 2021
Cited by 8 | Viewed by 2726
Abstract
Three-dimensional (3D) building models are closely related to human activities in urban environments. Due to variations in building styles and the complexity of roof structures, automatically reconstructing 3D buildings with semantic and topological information still faces major challenges. In this paper, we present an automated modeling approach that semantically decomposes and reconstructs complex building light detection and ranging (LiDAR) point clouds into simple parametric structures, where each generated structure is an unambiguous roof semantic unit without overlapping planar primitives. The proposed method starts by extracting roof planes using a multi-label energy minimization solution, followed by constructing a roof connection graph associated with proximity, similarity, and consistency attributes. A progressive decomposition and reconstruction algorithm is then introduced to generate explicit semantic subparts and a hierarchical representation of an isolated building. The proposed approach is evaluated on two different datasets and compared with state-of-the-art reconstruction techniques. The experimental modeling results, including the assessment using the International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark LiDAR datasets, demonstrate that the proposed method can efficiently decompose complex building models into interpretable semantic structures.
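As a rough illustration of the roof connection graph idea, the sketch below groups extracted roof-plane primitives into candidate semantic units using proximity alone. This is a deliberate simplification of the paper's method, which also carries similarity and consistency attributes and uses multi-label energy minimization; all names and tolerances here are illustrative assumptions.

```python
import math

def decompose_roof(centroids, dist_tol=1.0):
    """Group roof-plane primitives into candidate semantic units via a
    proximity-only connection graph (simplified sketch).

    centroids -- one (x, y, z) centroid per extracted roof plane
    Returns connected components as sorted lists of plane indices.
    """
    n = len(centroids)
    adj = {i: set() for i in range(n)}
    # connect planes whose centroids are close (the proximity attribute)
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(centroids[i], centroids[j]) <= dist_tol:
                adj[i].add(j)
                adj[j].add(i)
    # connected components of the graph = candidate semantic roof units
    seen, units = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                comp.append(v)
                stack.extend(adj[v] - seen)
        units.append(sorted(comp))
    return units
```

In the paper's pipeline, each such unit would additionally be checked for convexity and consistency before being fit with a parametric roof primitive.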
(This article belongs to the Special Issue Techniques and Applications of UAV-Based Photogrammetric 3D Mapping)
Review

22 pages, 28390 KiB  
Review
Review of Wide-Baseline Stereo Image Matching Based on Deep Learning
by Guobiao Yao, Alper Yilmaz, Fei Meng and Li Zhang
Remote Sens. 2021, 13(16), 3247; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163247 - 17 Aug 2021
Cited by 18 | Viewed by 4433
Abstract
Strong geometric and radiometric distortions often exist in optical wide-baseline stereo images, and some local regions can include surface discontinuities and occlusions. Digital photogrammetry and computer vision researchers have focused on automatic matching for such images. Deep convolutional neural networks, which can express high-level features and their correlations, have received increasing attention for wide-baseline image matching, and learning-based methods have the potential to surpass methods based on handcrafted features. We therefore review recent progress in wide-baseline image matching, covering the main approaches to learning-based feature detection, description, and end-to-end image matching. Moreover, we summarize representative research through stepwise inspection and dissection. We present comprehensive experiments on real wide-baseline stereo images, which we use to contrast and discuss the advantages and disadvantages of several state-of-the-art deep learning algorithms. Finally, we summarize the state-of-the-art methods and forecast development trends and unresolved challenges, providing a guide for future work.
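Whichever learned detector and descriptor are used, the extracted descriptors are commonly filtered with Lowe's ratio test plus a mutual nearest-neighbour check before geometric verification. The pure-Python sketch below illustrates that common matching back end; it is not code from the review, the names are hypothetical, and real pipelines use optimized nearest-neighbour libraries rather than brute-force search.

```python
def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force descriptor matching with a ratio test and a mutual
    nearest-neighbour check.

    desc_a, desc_b -- lists of equal-length feature vectors
    Returns a list of (i, j) index pairs into desc_a and desc_b.
    """
    def sqdist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))

    def two_nearest(q, pool):
        # (best distance, best index, second-best distance)
        ds = sorted((sqdist(q, p), k) for k, p in enumerate(pool))
        second = ds[1][0] if len(ds) > 1 else float('inf')
        return ds[0][0], ds[0][1], second

    matches = []
    for i, da in enumerate(desc_a):
        d1, j, d2 = two_nearest(da, desc_b)
        if d1 >= (ratio ** 2) * d2:   # ratio test on squared distances
            continue
        _, back, _ = two_nearest(desc_b[j], desc_a)
        if back == i:                 # mutual nearest-neighbour check
            matches.append((i, j))
    return matches
```

Ambiguous descriptors (e.g., on repetitive texture) fail the ratio test because their best and second-best distances are similar, which is why this filter matters for wide-baseline pairs.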
(This article belongs to the Special Issue Techniques and Applications of UAV-Based Photogrammetric 3D Mapping)