Special Issue "3D Reconstruction Based on Remote Sensing Imagery and Lidar Point Cloud"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 August 2021).

Special Issue Editors

Dr. Lingli Zhu
Guest Editor
Finnish Geospatial Research Institute FGI – Department of Remote Sensing and Photogrammetry, Geodeetinrinne 2, FI-02430 Masala, Finland
Interests: point cloud processing; 3D model reconstruction; virtual reality; augmented reality; photogrammetry; remote sensing; computer vision; machine learning
Prof. Dr. Jonathan Li
Guest Editor
Geospatial Sensing and Data Intelligence Lab, Faculty of Environment, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1, Canada
Interests: LiDAR remote sensing; point cloud understanding; deep learning; 3D vision; HD maps for smart cities and autonomous vehicles
Prof. Dr. Sylvie Daniel
Guest Editor
Department of Geomatics Sciences, Université Laval, 1055 avenue du séminaire, Quebec City, QC G1V 0A6, Canada
Interests: data acquisition (images and LiDAR and bathymetric point cloud); image and point cloud processing; 3D modeling and representation; augmented reality; data fusion; artificial intelligence

Special Issue Information

Dear Colleagues,

3D reconstruction from remote sensing data is an important research topic spanning remote sensing, photogrammetry, computer vision, graphics, and machine learning. It is of great significance for environmental understanding, autonomous driving, visual perception, object recognition, robotic navigation, and 3D modeling, and it enables wide-ranging applications in urban planning, vision-based navigation, industrial manufacturing and intelligent control, education, healthcare, and entertainment.

Although 3D reconstruction has been studied for decades, it remains challenging. The challenges arise from (i) datasets: sensor platforms have become increasingly flexible, and datasets from complex scenes are now available; (ii) methods: new capabilities and opportunities have emerged in data fusion and integration, and deep learning methods have grown in popularity, raising the level of automation; and (iii) applications: many new applications have appeared in recent years, such as digital twins for smart cities, simulations for training and education, and virtual reality for healthcare. This evolution in datasets, methods, and applications calls for updated technologies to serve the needs of society.

We would like to invite you to submit articles on your recent research related to the title of this Special Issue, "3D Reconstruction Based on Remote Sensing Imagery and LiDAR Point Cloud". Contributions may focus on (but are not limited to) the following topics:

  • 3D reconstruction from remote sensing images (satellite/aerial images, UAV images, terrestrial images/close range images);
  • 3D reconstruction from point clouds, both indoor and outdoor;
  • 3D reconstruction from a single image;
  • 3D reconstruction from multiview images;
  • 3D reconstruction from multimodality data;
  • 3D reconstruction from multitemporal data;
  • 3D reconstruction from crowdsourced data;
  • 3D reconstruction from videos;
  • 3D reconstruction for robotic mapping;
  • 3D reconstruction for digital twins;
  • 3D reconstruction for AR and VR;
  • 3D reconstruction for archeology;
  • 3D reconstruction for gaming.
Dr. Lingli Zhu
Prof. Dr. Jonathan Li
Prof. Dr. Sylvie Daniel
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com after registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D reconstruction
  • remote sensing images
  • LiDAR point cloud
  • satellite images
  • aerial images
  • UAV images
  • close range images
  • single images
  • depth images
  • multiview images
  • multimodality datasets
  • multitemporal datasets
  • crowdsourced data
  • videos
  • digital twins
  • robotic mapping
  • virtual reality

Published Papers (6 papers)


Research

Article
Topologically Consistent Reconstruction for Complex Indoor Structures from Point Clouds
Remote Sens. 2021, 13(19), 3844; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13193844 - 26 Sep 2021
Abstract
Indoor structures are composed of ceilings, walls, and floors that need to be modeled for a variety of applications. This paper proposes an approach to reconstructing models of indoor structures in complex environments. First, semantic pre-processing, including segmentation and occlusion construction, is applied to segment the input point clouds into semantic patches of structural primitives with uniform density. Then, a primitive extraction method with boundary detection is introduced to approximate both the mathematical surface and the boundary of each patch. Finally, constraint-based model reconstruction is applied to obtain the final topologically consistent structural model. Under this framework, both geometric and structural constraints are considered holistically to ensure topological regularity. Experiments were carried out on both synthetic and real-world datasets. The proposed method achieved an overall reconstruction quality of approximately 4.60 cm root mean square error (RMSE) and 94.10% Intersection over Union (IoU) with respect to the input point cloud. The development can be applied to structural reconstruction of various complex indoor environments.
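The primitive extraction described above rests on fitting planar surfaces to segmented patches. The sketch below shows one common way to do this (a PCA-based least-squares plane fit) together with the point-to-plane RMSE used as a quality measure; the synthetic wall patch, noise level, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via PCA: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right-singular vector for the smallest singular value of the
    # centred points is the direction of least variance: the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def plane_rmse(points, centroid, normal):
    """RMSE of orthogonal point-to-plane distances."""
    d = (points - centroid) @ normal
    return float(np.sqrt(np.mean(d ** 2)))

# Synthetic "wall" patch: points on the plane x = 2 m with ~1 cm noise
# (illustrative data, not from the paper's datasets).
rng = np.random.default_rng(0)
pts = np.column_stack([
    np.full(500, 2.0) + rng.normal(0, 0.01, 500),  # x ~ 2 m, 1 cm noise
    rng.uniform(0, 4, 500),                        # y along the wall
    rng.uniform(0, 3, 500),                        # z up to the ceiling
])
c, n = fit_plane(pts)
rmse = plane_rmse(pts, c, n)
```

A constraint-based reconstruction would then intersect such fitted planes to recover topologically consistent wall, floor, and ceiling boundaries.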

Article
High-Resolution Terrain Modeling Using Airborne LiDAR Data with Transfer Learning
Remote Sens. 2021, 13(17), 3448; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13173448 - 31 Aug 2021
Abstract
This study presents a novel workflow for automated Digital Terrain Model (DTM) extraction from airborne LiDAR point clouds based on a convolutional neural network (CNN) with a transfer learning approach. The workflow consists of three parts: feature image generation, transfer learning using ResNet, and interpolation. First, each point is transformed into a feature image based on its elevation differences with neighboring points. Then, the feature images are classified into ground and non-ground using ImageNet-pretrained ResNet models. The ground points are extracted by remapping each feature image to its corresponding points. Last, the extracted ground points are interpolated to generate a continuous elevation surface. We compared the proposed workflow with two traditional filters, namely the Progressive Morphological Filter (PMF) and Progressive Triangulated Irregular Network Densification (PTD). Our results show that the proposed workflow achieves advantageous DTM extraction accuracy, with Type I, Type II, and total errors of only 0.52%, 4.84%, and 2.43%, respectively. In comparison, the Type I, Type II, and total errors are 7.82%, 11.60%, and 9.48% for PMF and 1.55%, 5.37%, and 3.22% for PTD, respectively. The root mean square error (RMSE) for the 1 m resolution interpolated DTM is only 7.3 cm. Moreover, we conducted a qualitative analysis to investigate the reliability and limitations of the proposed workflow.
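The Type I, Type II, and total errors quoted above are standard ground-filtering metrics: Type I counts ground points wrongly rejected as non-ground (omission), Type II counts non-ground points wrongly accepted as ground (commission). A minimal sketch of their computation; the toy labels are our own illustration, not data from the study:

```python
def filtering_errors(truth, pred):
    """Type I, Type II and total error for ground filtering.

    truth/pred: sequences of labels, 1 = ground, 0 = non-ground.
    """
    ground = sum(1 for t in truth if t == 1)
    nonground = len(truth) - ground
    # Type I: ground misclassified as non-ground, over all ground points.
    t1 = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0) / ground
    # Type II: non-ground misclassified as ground, over all non-ground points.
    t2 = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1) / nonground
    total = sum(1 for t, p in zip(truth, pred) if t != p) / len(truth)
    return t1, t2, total

# Toy example: 8 ground and 4 non-ground points,
# with one omission and one commission error.
truth = [1] * 8 + [0] * 4
pred = [1] * 7 + [0] + [0] * 3 + [1]
t1, t2, tot = filtering_errors(truth, pred)
# t1 = 1/8, t2 = 1/4, tot = 2/12
```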

Article
3DRIED: A High-Resolution 3-D Millimeter-Wave Radar Dataset Dedicated to Imaging and Evaluation
Remote Sens. 2021, 13(17), 3366; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13173366 - 25 Aug 2021
Abstract
Millimeter-wave (MMW) 3-D imaging technology is becoming a research hotspot in fields such as safety inspection and intelligent driving due to its all-day, all-weather, high-resolution, and non-destructive nature. Unfortunately, owing to the lack of a complete 3-D MMW radar dataset, many pressing theories and algorithms (e.g., imaging, detection, classification, clustering, and filtering) cannot be fully verified. To solve this problem, this paper develops an MMW 3-D imaging system and releases a high-resolution 3-D MMW radar dataset for imaging and evaluation, named 3DRIED. The dataset contains two types of data, raw echo data and imaging results; 81 high-quality raw echo recordings are presented, mainly for near-field safety inspection. The targets cover dangerous metal objects such as knives and guns, and both free and concealed environments are considered in the experiments. Visualization results are presented as corresponding 2-D and 3-D images; the 3-D images have 512 × 512 × 6 pixels. In particular, 3DRIED is generated by a W-band MMW radar with a center frequency of 79 GHz, and the theoretical 3-D resolution reaches 2.8 mm × 2.8 mm × 3.75 cm. Notably, 3DRIED has four advantages: (1) 3-D raw data and imaging results; (2) high resolution; (3) different targets; (4) applicability to the evaluation and analysis of different post-processing methods. Moreover, numerical evaluations of high-resolution images with different types of 3-D imaging algorithms, such as the range migration algorithm (RMA), compressed sensing algorithm (CSA), and deep neural networks, can serve as baselines. Experimental results reveal that the dataset can be used to verify and evaluate the aforementioned algorithms, demonstrating its benefits.
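The quoted 3.75 cm range resolution follows from the standard radar relation ΔR = c / 2B, which implies a sweep bandwidth of roughly 4 GHz; that bandwidth figure is our inference, not stated in the abstract. A small sketch of the arithmetic:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Theoretical radar range resolution: c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

def wavelength(freq_hz):
    """Free-space wavelength at a given carrier frequency."""
    return C / freq_hz

# Inverting the 3.75 cm figure gives the implied bandwidth (our assumption):
bw = C / (2 * 0.0375)        # ~4 GHz
lam = wavelength(79e9)       # ~3.8 mm at the 79 GHz centre frequency
```

The millimetre-scale cross-range resolution, by contrast, is set by the synthetic aperture geometry rather than the bandwidth.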

Article
Terrestrial Videogrammetry for Deriving Key Forest Inventory Data: A Case Study in Plantation
Remote Sens. 2021, 13(16), 3138; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163138 - 08 Aug 2021
Abstract
Computer vision technology has promoted the rapid development of forest observation equipment, and video photography (videogrammetry) has provided new ideas and means for forestry investigation. Based on the characteristics of videogrammetry, a spiral observation method is proposed. A new point cloud data processing method is also proposed: it extracts a point cloud slice at breast height and determines the diameter at breast height (DBH) of trees through cylinder fitting and circle fitting, according to the characteristics of the point cloud model and the actual occlusion in the sampled area, and then calculates the biomass. Through a large number of experiments, an effective and relatively high-precision method for DBH extraction is obtained. Compared with field survey data, the bias% of DBH extracted by videogrammetry was −3.19~2.87% and the RMSE% was 5.52~7.76%. Compared with TLS data, the bias% was −4.78~2.38% and the RMSE% was 5.63~9.87%. The above-ground biomass (AGB) estimates from videogrammetry showed strong agreement with the reference values, with a concordance correlation coefficient (CCC) of 0.97 and an RMSE of 19.8 kg; the AGB estimates from TLS reached a CCC of 0.97 and an RMSE of 17.23 kg. Videogrammetry is not only low-cost and fast but can also operate in relatively complex forest environments, with strong anti-interference ability. The experimental results show that its accuracy is comparable to TLS and photogrammetry, making this work valuable for forest resource surveys. We believe the accuracy of our new method can fully meet the needs of forest surveys.
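The circle-fitting step for DBH estimation can be sketched with an algebraic (Kåsa) least-squares fit, which still works on a partial arc such as the occluded stems mentioned above. The synthetic 30 cm stem below is an illustrative assumption, not the paper's data or exact method:

```python
import math
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit; returns (cx, cy, r).

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c linearly, then
    recovers the radius from r^2 = c + cx^2 + cy^2.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = math.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Synthetic breast-height slice of a stem: 30 cm DBH, centred at (1, 2),
# covering only a partial arc to mimic occlusion.
theta = np.linspace(0, 1.5 * math.pi, 60)
pts = np.column_stack([1 + 0.15 * np.cos(theta),
                       2 + 0.15 * np.sin(theta)])
cx, cy, r = fit_circle(pts)
dbh_cm = 2 * r * 100  # diameter in centimetres
```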

Article
Multi-Scene Building Height Estimation Method Based on Shadow in High Resolution Imagery
Remote Sens. 2021, 13(15), 2862; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13152862 - 21 Jul 2021
Abstract
Accurate building height estimation from remote sensing imagery is an important and challenging task. Existing shadow-based building height estimation methods, however, suffer from large errors due to the complex environments in remote sensing imagery. In this paper, we propose a multi-scene building height estimation method based on shadows in high-resolution imagery. First, building shadows are classified and described by analyzing their features in remote sensing imagery. Second, a variety of shadow-based building height estimation models are established for different scenes. In addition, a shadow regularization extraction method is proposed, which effectively solves the problem of mutually adhering shadows in dense building areas. Finally, we propose a shadow length calculation method that combines a fishnet with the Pauta criterion, avoiding the large errors caused by complex building shadow shapes. Multi-scene areas are selected for experimental analysis to prove the validity of our method. The experimental results show that our method achieves an accuracy rate as high as 96% within 2 m of absolute error. In addition, we compared our approach with existing methods, and the results show that the absolute error of our method is reduced by 1.24 m–3.76 m, achieving high-precision estimation of building height.
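The core geometric relation behind shadow-based height estimation is H = L · tan(θ), where L is the shadow length and θ the solar elevation angle; the scene-specific models in the abstract refine this for terrain and viewing geometry. A minimal sketch under the simplifying assumptions of flat ground and a known sun position (our illustration, not the paper's full model):

```python
import math

def building_height(shadow_len_m, sun_elev_deg):
    """Height from shadow length on flat ground: H = L * tan(elevation)."""
    return shadow_len_m * math.tan(math.radians(sun_elev_deg))

# A 20 m shadow under a 45 degree sun implies a 20 m building;
# the same shadow at 60 degrees elevation implies a taller one.
h45 = building_height(20.0, 45.0)
h60 = building_height(20.0, 60.0)
```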

Article
Workflow for Off-Site Bridge Inspection Using Automatic Damage Detection-Case Study of the Pahtajokk Bridge
Remote Sens. 2021, 13(14), 2665; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13142665 - 07 Jul 2021
Abstract
For the inspection of structures, particularly bridges, it is becoming common to replace humans with autonomous systems that use unmanned aerial vehicles (UAVs). In this paper, a framework for autonomous bridge inspection using a UAV is proposed with a four-step workflow: (a) data acquisition with an efficient UAV flight path, (b) computer vision comprising training, testing, and validation of convolutional neural networks (ConvNets), (c) point cloud generation using intelligent hierarchical dense structure from motion (DSfM), and (d) damage quantification. The workflow starts with planning the most efficient flight path that captures the minimum number of images required to achieve the maximum accuracy for the desired defect size, followed by bridge and damage recognition. Three types of autonomous detection are used: masking the background of the images, detecting areas of potential damage, and pixel-wise damage segmentation. Detecting bridge components by masking extraneous parts of the image, such as vegetation, sky, roads, or rivers, improves the 3D reconstruction in the feature detection and matching stages. In addition, detecting damaged areas lets the UAV capture close-range images of these critical regions, and damage segmentation facilitates damage quantification from 2D images. By applying DSfM, a denser and more accurate point cloud can be generated for the detected areas and aligned to the overall point cloud to create a digital model of the bridge. This generated point cloud is then evaluated in terms of outlier noise and surface deviation. Finally, the detected damage is quantified and verified against a point cloud generated by Terrestrial Laser Scanning (TLS). The results indicate that this workflow for autonomous bridge inspection has potential.
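The surface-deviation evaluation against the TLS reference is typically a cloud-to-cloud nearest-neighbour comparison. A brute-force sketch of that idea; real pipelines would use a KD-tree for speed, and the toy clouds with a 5 mm offset are our own illustration, not the paper's data:

```python
import math

def cloud_to_cloud(src, ref):
    """Nearest-neighbour distance from each src point to the ref cloud.

    Brute force, O(len(src) * len(ref)); fine for small toy clouds.
    """
    return [min(math.dist(p, q) for q in ref) for p in src]

# Toy check: a reconstructed patch offset 5 mm above its TLS reference.
ref = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
src = [(x, y, z + 0.005) for x, y, z in ref]
dev = cloud_to_cloud(src, ref)
mean_dev = sum(dev) / len(dev)  # 5 mm everywhere on this toy grid
```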
