
Latest Development in 3D Mapping Using Modern Remote Sensing Technologies

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (31 October 2022) | Viewed by 27429

Special Issue Editor


Prof. Dr. Ayman F. Habib
Guest Editor
Lyles School of Civil Engineering, Purdue University, West Lafayette, IN 47907, USA
Interests: photogrammetry; laser scanning; mobile mapping systems; system calibration; computer vision; unmanned aerial mapping systems; multisensor/multiplatform data integration

Special Issue Information

Dear Colleagues,

Recent advances in remote sensing technologies—in terms of a wider range of passive and active sensing modalities operating in different portions of the spectrum, improved georeferencing capabilities, and easier deployment of manned/unmanned airborne and terrestrial platforms—are providing the research community with unprecedented geospatial data characterized by high geometric, radiometric, spectral, and temporal resolution. This Special Issue aims to highlight advances in sensing and georeferencing modalities onboard traditional and emerging platforms, as well as their impact on addressing the needs of traditional and new mapping applications. It also addresses innovative manipulation of remote sensing data (i.e., the evolution from data to information to knowledge). Contributions related to the following topics are encouraged:

  1. Improved georeferencing of mobile mapping technologies in GNSS-challenging environments (e.g., intermittent access and/or complete denial of GNSS signal);
  2. Image and LiDAR-based simultaneous localization and mapping (SLAM);
  3. Integration of GNSS/INS, image, and LiDAR data for improved trajectory estimation of mobile mapping systems;
  4. Mobile mapping using multimodal sensing onboard unmanned aerial vehicles;
  5. Mobile mapping using multimodal sensing onboard unmanned ground vehicles;
  6. Structure from motion in challenging environments (object space with low texture and/or repetitive patterns);
  7. Manipulation of heterogeneous point cloud data (e.g., registration, segmentation, classification, object extraction);
  8. Integration/fusion of imaging and ranging remote sensing data for better representation of mapped environments;
  9. Calibration of GNSS/INS-assisted multisensor, multiplatform mapping systems;
  10. Quality control of mobile mapping products;
  11. Mobile mapping for traditional and emerging applications (e.g., digital agriculture, forest inventory, transportation management, infrastructure monitoring, environmental protection, archaeological documentation, indoor mapping);
  12. Simultaneous processing of multiscale, multitemporal, multimodal remote sensing data;
  13. Machine/transfer learning in aid of 3D mapping applications;
  14. Scalable implementation of swarms of mobile mapping platforms.

Prof. Dr. Ayman F. Habib
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Mobile mapping
  • Georeferencing
  • LiDAR
  • Applications
  • Point cloud
  • Machine learning
  • Transfer learning
  • Structure from motion
  • Fusion
  • Calibration
  • Unmanned aerial vehicles
  • Unmanned ground vehicles
  • 2D/3D SLAM
  • Quality control

Published Papers (11 papers)


Editorial


2 pages, 172 KiB  
Editorial
Editorial for the Special Issue “Latest Development in 3D Mapping Using Modern Remote Sensing Technologies”
by Ayman F. Habib
Remote Sens. 2023, 15(4), 1109; https://doi.org/10.3390/rs15041109 - 17 Feb 2023
Viewed by 837
Abstract
Recent advances in remote sensing technologies have provided the research community with unprecedented geospatial data characterized by high geometric, radiometric, spectral, and temporal resolution [...]

Research


34 pages, 36446 KiB  
Article
An Image-Aided Sparse Point Cloud Registration Strategy for Managing Stockpiles in Dome Storage Facilities
by Jidong Liu, Seyyed Meghdad Hasheminasab, Tian Zhou, Raja Manish and Ayman Habib
Remote Sens. 2023, 15(2), 504; https://doi.org/10.3390/rs15020504 - 14 Jan 2023
Cited by 5 | Viewed by 1651
Abstract
Stockpile volume estimation plays a critical role in several industrial/commercial bulk material management applications. LiDAR systems are commonly used for this task. Thanks to Global Navigation Satellite System (GNSS) signal availability in outdoor environments, Uncrewed Aerial Vehicles (UAV) equipped with LiDAR are frequently adopted for the derivation of dense point clouds, which can be used for stockpile volume estimation. For indoor facilities, static LiDAR scanners are usually used to acquire point clouds from multiple locations, which are then registered to a common reference frame. Registration of such point clouds can be established through the deployment of registration targets, which is not practical for scalable implementation. For scans in facilities bounded by planar walls/roofs, features can be automatically extracted/matched and used for the registration process. However, monitoring stockpiles stored in dome facilities remains a challenging task. This study introduces an image-aided fine registration strategy for sparse point clouds acquired in dome facilities, where the roof and roof stringers are extracted, matched, and modeled as quadratic surfaces and curves. These features are then used in a Least Squares Adjustment (LSA) procedure to derive well-aligned LiDAR point clouds. Planar features, if available, can also be used in the registration process. Registered point clouds can then be used for accurate volume estimation of stockpiles. The proposed approach is evaluated using datasets acquired by a recently developed camera-assisted LiDAR mapping platform—Stockpile Monitoring and Reporting Technology (SMART). Experimental results from three datasets demonstrate the capability of the proposed approach to produce well-aligned point clouds acquired inside dome facilities, with a feature fitting error in the 0.03–0.08 m range.
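For readers unfamiliar with modeling roof elements as quadratic surfaces, the following is a minimal, illustrative sketch of fitting a quadratic surface to a point patch by ordinary least squares and reporting its fitting error. It does not reproduce the paper's full Least Squares Adjustment, in which registration parameters are estimated jointly; all function names and test values are invented for the example.

```python
import numpy as np

def fit_quadratic_surface(points):
    """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to Nx3 points
    by ordinary least squares and return the six coefficients."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coeffs, _, _, _ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def surface_residuals(points, coeffs):
    """Signed vertical distances of points to the fitted surface,
    analogous to a feature-fitting error."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    return z - A @ coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(-10, 10, size=(500, 2))
    z = 0.02 * xy[:, 0]**2 + 0.03 * xy[:, 1]**2 + 5.0 + rng.normal(0, 0.05, 500)
    pts = np.column_stack([xy, z])
    c = fit_quadratic_surface(pts)
    print("RMS fitting error:", np.sqrt(np.mean(surface_residuals(pts, c)**2)))
```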

16 pages, 9517 KiB  
Article
Building Floorplan Reconstruction Based on Integer Linear Programming
by Qiting Wang, Zunjie Zhu, Ruolin Chen, Wei Xia and Chenggang Yan
Remote Sens. 2022, 14(18), 4675; https://doi.org/10.3390/rs14184675 - 19 Sep 2022
Cited by 4 | Viewed by 1782
Abstract
Reconstructing the floorplan of a building requires creating a two-dimensional floorplan from a 3D model, a task widely employed in interior design and decoration. In reality, indoor environments have complex structures with considerable clutter and occlusion, making it difficult to reconstruct a complete and accurate floorplan. It is well known that a suitable dataset is key to driving an effective algorithm, yet existing floorplan reconstruction datasets are synthetic and small. Without reliable real-world datasets, the robustness of methods for real-scene reconstruction is weakened. In this paper, we first annotate a large-scale realistic benchmark, which contains RGBD image sequences and 3D models of 80 indoor scenes covering more than 10,000 square meters. We also introduce a framework for floorplan reconstruction with mesh-based point cloud normalization. A loose-Manhattan constraint is enforced in our optimization process, and the optimal floorplan is reconstructed via constraint integer programming. Experimental results on public datasets and our own dataset demonstrate that the proposed method outperforms FloorNet and Floor-SP.
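To illustrate the flavor of casting a reconstruction choice as an integer program, here is a toy binary selection problem solved with SciPy's milp: candidate wall segments are chosen to cover required boundary cells at minimum total length. The formulation, segment data, and coverage matrix are invented for illustration and do not reflect the paper's actual loose-Manhattan optimization.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy ILP: pick a subset of candidate wall segments (binary variables) that
# covers every required boundary cell at least once while minimizing total
# segment length. All numbers are made up for illustration.
segment_lengths = np.array([3.0, 2.0, 2.5, 4.0, 1.5])      # cost vector c
# coverage[i, j] = 1 if candidate segment j covers boundary cell i
coverage = np.array([[1, 0, 1, 0, 0],
                     [0, 1, 1, 0, 0],
                     [0, 0, 0, 1, 1],
                     [1, 0, 0, 1, 0]])

constraints = LinearConstraint(coverage, lb=1, ub=np.inf)   # cover each cell
integrality = np.ones(len(segment_lengths))                 # all variables integer
bounds = Bounds(0, 1)                                       # ...and binary

result = milp(c=segment_lengths, constraints=constraints,
              integrality=integrality, bounds=bounds)
print("selected segments:", np.round(result.x).astype(int))
```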

28 pages, 14914 KiB  
Article
Generalized LiDAR Intensity Normalization and Its Positive Impact on Geometric and Learning-Based Lane Marking Detection
by Yi-Ting Cheng, Yi-Chun Lin and Ayman Habib
Remote Sens. 2022, 14(17), 4393; https://doi.org/10.3390/rs14174393 - 03 Sep 2022
Cited by 4 | Viewed by 2409
Abstract
Light Detection and Ranging (LiDAR) data collected by mobile mapping systems (MMS) have been utilized to detect lane markings through intensity-based approaches. As LiDAR data continue to be used for lane marking extraction, greater emphasis is being placed on enhancing the utility of the intensity values. Typically, intensity correction/normalization is conducted prior to lane marking extraction. The goal of intensity correction is to adjust the intensity values of a LiDAR unit using geometric scanning parameters (i.e., range or incidence angle). Intensity normalization aims to adjust the intensity readings of a LiDAR unit based on the assumption that intensity values across laser beams/LiDAR units/MMS should be similar for the same object. As MMS technology develops, correcting/normalizing intensity values across different LiDAR units on the same system and/or across different MMS becomes necessary for lane marking extraction. This study proposes a generalized correction/normalization approach for handling single-beam/multi-beam LiDAR scanners onboard single or multiple MMS. The generalized approach is developed while considering the intensity values of asphalt and concrete pavement. To evaluate the performance of the proposed approach, geometric/morphological and deep/transfer-learning-based lane marking extraction is conducted with and without intensity correction/normalization. The evaluation shows that the proposed approach improves lane marking extraction performance (e.g., the F1-score of a U-net model can increase from 0.1% to 86.2%).
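The distinction between correction (per-unit geometric adjustment) and normalization (across beams/units/systems) can be shown with a simplified sketch. The functions below assume an inverse-square range dependence, a Lambertian cosine dependence on incidence angle, and a simple gain/offset relationship between two units observed over a common surface; the paper's generalized approach is more elaborate and is not reproduced here.

```python
import numpy as np

def correct_intensity(intensity, ranges, incidence_angles, ref_range=10.0):
    """Simplified range/incidence-angle intensity correction.

    Scales raw intensities to a reference range and to normal incidence,
    assuming an inverse-square range dependence and a Lambertian cosine
    dependence on the incidence angle (in radians).
    """
    cos_inc = np.clip(np.cos(incidence_angles), 1e-3, 1.0)
    return intensity * (ranges / ref_range) ** 2 / cos_inc

def normalize_across_units(intensity_a, intensity_b):
    """Map unit B's corrected intensities onto unit A's scale using
    overlapping observations of the same surface (e.g., asphalt),
    assuming a simple gain/offset relationship."""
    gain, offset = np.polyfit(intensity_b, intensity_a, deg=1)
    return gain, offset
```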

15 pages, 33394 KiB  
Article
Efficient Dual-Branch Bottleneck Networks of Semantic Segmentation Based on CCD Camera
by Jiehao Li, Yingpeng Dai, Xiaohang Su and Weibin Wu
Remote Sens. 2022, 14(16), 3925; https://doi.org/10.3390/rs14163925 - 12 Aug 2022
Cited by 13 | Viewed by 1346
Abstract
This paper investigates a novel Efficient Dual-branch Bottleneck Network (EDBNet) to perform real-time semantic segmentation tasks on mobile robot systems based on a CCD camera. To model the non-linear relationship between the input and the output, a small-scale, shallow module called the Efficient Dual-branch Bottleneck (EDB) module is established. The EDB unit consists of two branches with different dilation rates, each of which widens the non-linear layers. This module helps to simultaneously extract local and contextual information while maintaining a minimal set of parameters. Moreover, the EDBNet, which is built on the EDB unit, is designed to enhance accuracy, inference speed, and parameter efficiency. It employs dilated convolution with a high dilation rate to increase the receptive field and three downsampling procedures to maintain feature maps with high spatial resolution. Additionally, the EDBNet uses efficient convolutions and compresses the network layers to reduce computational complexity, an effective way to capture a great deal of information while keeping computation fast. Finally, on the CamVid and Cityscapes datasets, we obtain mean Intersection over Union (mIoU) scores of 68.58% and 71.21%, respectively, with just 1.03 million parameters and fast inference on a single GTX 1070Ti card. These results also demonstrate the effectiveness of the practical mobile robot system.
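As a rough illustration of the dual-branch idea (two parallel 3x3 convolutions with different dilation rates inside a bottleneck with a residual connection), the following PyTorch sketch builds a generic block of that shape. It is not the authors' exact EDB module; the channel sizes, dilation rates, and normalization choices here are assumptions.

```python
import torch
import torch.nn as nn

class DualBranchBottleneck(nn.Module):
    """Illustrative dual-branch bottleneck: a 1x1 reduction followed by two
    parallel 3x3 convolutions with different dilation rates (local vs.
    contextual information), then a 1x1 expansion and a residual connection."""

    def __init__(self, channels, reduced, dilations=(1, 4)):
        super().__init__()
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, reduced, 1, bias=False),
            nn.BatchNorm2d(reduced), nn.ReLU(inplace=True))
        self.branch_local = nn.Conv2d(reduced, reduced, 3, padding=dilations[0],
                                      dilation=dilations[0], bias=False)
        self.branch_context = nn.Conv2d(reduced, reduced, 3, padding=dilations[1],
                                        dilation=dilations[1], bias=False)
        self.expand = nn.Sequential(
            nn.BatchNorm2d(2 * reduced), nn.ReLU(inplace=True),
            nn.Conv2d(2 * reduced, channels, 1, bias=False),
            nn.BatchNorm2d(channels))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.reduce(x)
        y = torch.cat([self.branch_local(y), self.branch_context(y)], dim=1)
        return self.act(x + self.expand(y))

if __name__ == "__main__":
    block = DualBranchBottleneck(channels=64, reduced=16)
    print(block(torch.randn(1, 64, 128, 256)).shape)  # torch.Size([1, 64, 128, 256])
```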

22 pages, 13301 KiB  
Article
FastFusion: Real-Time Indoor Scene Reconstruction with Fast Sensor Motion
by Zunjie Zhu, Zhefeng Xu, Ruolin Chen, Tingyu Wang, Can Wang, Chenggang Yan and Feng Xu
Remote Sens. 2022, 14(15), 3551; https://doi.org/10.3390/rs14153551 - 24 Jul 2022
Cited by 2 | Viewed by 2167
Abstract
Real-time 3D scene reconstruction has attracted a great amount of attention in the fields of augmented reality, virtual reality, and robotics. Previous works usually assumed slow sensor motions to avoid large interframe differences and strong image blur, but this limits the applicability of the techniques in real cases. In this study, we propose an end-to-end 3D reconstruction system that combines color, depth, and inertial measurements to achieve a robust reconstruction with fast sensor motions. We employ an extended Kalman filter (EKF) to fuse RGB-D-IMU data and jointly optimize feature correspondences, camera poses, and scene geometry using an iterative method. A novel geometry-aware patch deformation technique is proposed to adapt to changes in patch features in the image domain, leading to highly accurate feature tracking with fast sensor motions. In addition, we maintain the global consistency of the reconstructed model by achieving loop closure with submap-based depth image encoding and 3D map deformation. The experiments show that our patch deformation method improves the accuracy of feature tracking, that our improved loop detection method is more efficient than the original one, and that our system achieves superior 3D reconstruction results compared with state-of-the-art solutions in handling fast camera motions.
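The predict/update pattern of an EKF that fuses inertial and visual measurements can be sketched in a few lines. The toy filter below tracks one-dimensional position and velocity, using IMU acceleration in the prediction step and a visual position fix in the update step; the paper's filter operates on full 6-DoF poses and feature correspondences and is not reproduced here. All parameter values are placeholders.

```python
import numpy as np

class SimpleEKF:
    """Toy constant-velocity EKF along one axis: IMU acceleration drives the
    prediction, and a visual position fix drives the update."""

    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(2)             # state: [position, velocity]
        self.P = np.eye(2)               # state covariance
        self.Q = q * np.eye(2)           # process noise
        self.R = np.array([[r]])         # measurement noise
        self.H = np.array([[1.0, 0.0]])  # we observe position only

    def predict(self, accel, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt**2, dt])
        self.x = F @ self.x + B * accel
        self.P = F @ self.P @ F.T + self.Q

    def update(self, measured_position):
        y = measured_position - self.H @ self.x       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
```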

30 pages, 25229 KiB  
Article
A Method for Efficient Quality Control and Enhancement of Mobile Laser Scanning Data
by Slaven Kalenjuk and Werner Lienhart
Remote Sens. 2022, 14(4), 857; https://doi.org/10.3390/rs14040857 - 11 Feb 2022
Cited by 5 | Viewed by 2726
Abstract
The increasing demand for 3D geospatial data is driving the development of new products. Laser scanners are becoming more mobile, affordable, and user-friendly. With the increased number of systems and service providers on the market, the scope of mobile laser scanning (MLS) applications has expanded dramatically in recent years. However, quality control measures are not keeping pace with the flood of data. Evaluating MLS surveys of long corridors with control points is expensive and, as a result, frequently neglected. However, information on data quality is crucial, particularly for safety-critical tasks in infrastructure engineering. In this paper, we propose an efficient method for the quality control of MLS point clouds. Based on point cloud discrepancies, we estimate the transformation parameters profile-wise. The elegance of the approach lies in its ability to detect and correct small, high-frequency errors. To demonstrate its potential, we apply the method to real-world data collected with two high-end, car-mounted MLS systems. The field study revealed substantial systematic deviations between two passes after tunnel sections, varying co-registration quality between the two scanners, and local inhomogeneities due to poor positioning quality. In each case, the method succeeds in mitigating errors and thus in enhancing quality.
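Estimating a rigid correction per profile from corresponding points of two passes can be illustrated with the closed-form Kabsch/Horn solution below. This is a generic alignment sketch under the assumption that point correspondences per profile are already available; it is not the authors' specific estimation scheme.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src + t - dst||
    for corresponding Nx3 points (Kabsch/Horn closed-form solution)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def correct_profiles(profiles_src, profiles_dst):
    """Apply a per-profile rigid correction, mimicking the idea of estimating
    transformation parameters profile-wise to remove small high-frequency
    errors. Both arguments are lists of corresponding Nx3 arrays."""
    corrected = []
    for src, dst in zip(profiles_src, profiles_dst):
        R, t = rigid_transform(src, dst)
        corrected.append(src @ R.T + t)
    return corrected
```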

20 pages, 11623 KiB  
Article
Towards Urban Scene Semantic Segmentation with Deep Learning from LiDAR Point Clouds: A Case Study in Baden-Württemberg, Germany
by Yanling Zou, Holger Weinacker and Barbara Koch
Remote Sens. 2021, 13(16), 3220; https://doi.org/10.3390/rs13163220 - 13 Aug 2021
Cited by 11 | Viewed by 2713
Abstract
An accurate understanding of urban objects is critical for urban modeling, intelligent infrastructure planning, and city management. The semantic segmentation of light detection and ranging (LiDAR) point clouds is a fundamental approach for urban scene analysis. Over recent years, several methods have been developed to segment urban furniture from point clouds. However, the traditional processing of large amounts of spatial data has become increasingly costly, both time-wise and financially. Recently, deep learning (DL) techniques have been increasingly used for 3D segmentation tasks, yet most of these deep neural networks (DNNs) have only been evaluated on benchmark datasets. It is therefore arguable whether DL approaches can achieve state-of-the-art 3D point cloud segmentation performance in real-life scenarios. In this research, we apply an adapted DNN (ARandLA-Net) to directly process large-scale point clouds. In particular, we develop a new dataset for training and validation that represents a typical urban scene in central Europe (Munzingen, Freiburg, Baden-Württemberg, Germany). Our dataset consists of nearly 390 million dense points acquired by Mobile Laser Scanning (MLS), contains considerably more sample points than existing datasets, and includes meaningful object categories that are particular to applications for smart cities and urban planning. We further assess the DNN on our dataset and investigate a number of key challenges from varying aspects, such as data preparation strategies, the advantage of color information, and the unbalanced class distribution in the real world. The final segmentation model achieved a mean Intersection-over-Union (mIoU) score of 54.4% and an overall accuracy of 83.9%. Our experiments indicate that different data preparation strategies influence model performance and that additional RGB information yields an approximately 4% higher mIoU score. Our results also demonstrate that weighted cross-entropy with inverse square root frequency weights led to better segmentation performance than the other losses considered.
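The class weighting mentioned in the last sentence is easy to reproduce in a short sketch: weights proportional to the inverse square root of each class's frequency, plugged into a standard weighted cross-entropy loss. The weight normalization and the dummy label distribution below are assumptions for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

def inverse_sqrt_frequency_weights(labels, num_classes):
    """Class weights proportional to 1 / sqrt(class frequency), normalized so
    that the weights average to one. `labels` is a 1D integer array of all
    training labels."""
    counts = np.bincount(labels, minlength=num_classes).astype(np.float64)
    freq = counts / counts.sum()
    weights = 1.0 / np.sqrt(np.maximum(freq, 1e-12))
    weights /= weights.mean()
    return torch.tensor(weights, dtype=torch.float32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Heavily imbalanced dummy label distribution over 5 classes.
    labels = rng.choice(5, size=100_000, p=[0.6, 0.2, 0.1, 0.07, 0.03])
    w = inverse_sqrt_frequency_weights(labels, num_classes=5)
    criterion = nn.CrossEntropyLoss(weight=w)
    logits = torch.randn(8, 5, 64, 64)           # dummy per-pixel predictions
    target = torch.randint(0, 5, (8, 64, 64))    # dummy per-pixel labels
    print(w, criterion(logits, target).item())
```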

21 pages, 5064 KiB  
Article
LiDAR Odometry and Mapping Based on Semantic Information for Outdoor Environment
by Shitong Du, Yifan Li, Xuyou Li and Menghao Wu
Remote Sens. 2021, 13(15), 2864; https://doi.org/10.3390/rs13152864 - 21 Jul 2021
Cited by 12 | Viewed by 3446
Abstract
Simultaneous Localization and Mapping (SLAM) in an unknown environment is crucial for intelligent mobile robots to achieve high-level navigation and interaction tasks. As one of the typical LiDAR-based SLAM algorithms, the LiDAR Odometry and Mapping in Real-time (LOAM) algorithm has shown impressive results. However, LOAM only uses low-level geometric features without considering semantic information. Moreover, the lack of a dynamic object removal strategy limits the accuracy the algorithm can achieve. To this end, this paper extends the LOAM pipeline by integrating semantic information into the original framework. Specifically, we first propose a two-step dynamic object filtering strategy. Point-wise semantic labels are then used to improve feature extraction and the search for corresponding points. We evaluate the performance of the proposed method in many challenging scenarios, including highway, country, and urban scenes from the KITTI dataset. The results demonstrate that the proposed SLAM system outperforms state-of-the-art SLAM methods in terms of accuracy and robustness.
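A much-simplified stand-in for the dynamic object filtering step is shown below: points whose semantic label belongs to a moving class are dropped before feature extraction. The label IDs are those commonly used for moving classes in SemanticKITTI and are an assumption here, as is the function itself; the paper's two-step strategy is not reproduced.

```python
import numpy as np

# Label IDs typically assigned to moving classes in SemanticKITTI-style
# annotations (assumed for illustration only).
DYNAMIC_LABELS = np.array([252, 253, 254, 255, 256, 257, 258, 259])

def filter_dynamic_points(points, labels):
    """Drop points whose point-wise semantic label belongs to a dynamic
    class before geometric feature extraction. `points` is Nx3, `labels`
    is a length-N integer array."""
    keep = ~np.isin(labels, DYNAMIC_LABELS)
    return points[keep], labels[keep]
```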

19 pages, 30285 KiB  
Article
Semantically Derived Geometric Constraints for MVS Reconstruction of Textureless Areas
by Elisavet Konstantina Stathopoulou, Roberto Battisti, Dan Cernea, Fabio Remondino and Andreas Georgopoulos
Remote Sens. 2021, 13(6), 1053; https://doi.org/10.3390/rs13061053 - 10 Mar 2021
Cited by 23 | Viewed by 4253
Abstract
Conventional multi-view stereo (MVS) approaches based on photo-consistency measures are generally robust, yet often fail to calculate valid depth estimates for pixels in low-textured areas of the scene. In this study, a novel approach is proposed to tackle this challenge by leveraging semantic priors in a PatchMatch-based MVS pipeline in order to increase confidence and support depth and normal map estimation. Semantic class labels on image pixels are used to impose class-specific geometric constraints during multi-view stereo, optimising depth estimation in weakly supported, textureless areas that are commonly present in urban scenarios of building facades, indoor scenes, or aerial datasets. Detecting dominant shapes, e.g., planes, with RANSAC, an adjusted cost function is introduced that combines and weighs both photometric and semantic scores, thus propagating more accurate depth estimates. Being adaptive, it fills in apparent information gaps and smooths local roughness in problematic regions while preserving important details. Experiments on benchmark and custom datasets demonstrate the effectiveness of the presented approach.
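Detecting a dominant plane with RANSAC, the kind of shape prior attached here to semantically labeled regions, can be sketched as follows. The threshold and iteration count are arbitrary, and the weighting of photometric versus semantic scores in the actual cost function is not reproduced.

```python
import numpy as np

def ransac_plane(points, threshold=0.02, iterations=500, rng=None):
    """Detect the dominant plane in an Nx3 point set with RANSAC.
    Returns (unit normal, d) for the plane n·x + d = 0 and the inlier mask."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                  # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        distances = np.abs(points @ normal + d)
        inliers = distances < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```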

Other


18 pages, 8484 KiB  
Technical Note
MapCleaner: Efficiently Removing Moving Objects from Point Cloud Maps in Autonomous Driving Scenarios
by Hao Fu, Hanzhang Xue and Guanglei Xie
Remote Sens. 2022, 14(18), 4496; https://doi.org/10.3390/rs14184496 - 09 Sep 2022
Cited by 7 | Viewed by 2598
Abstract
Three-dimensional (3D) point cloud maps are widely used in autonomous driving scenarios. These maps are usually generated by accumulating sequential LiDAR scans. When a map is generated, moving objects (such as vehicles or moving pedestrians) leave long trails in the assembled map, which is undesirable and reduces map quality. In this paper, we propose MapCleaner, an approach that can effectively remove moving objects from the map. MapCleaner first estimates a dense and continuous terrain surface, based on which the map point cloud is divided into a noisy part below the terrain, the terrain itself, and the object part above the terrain. A specifically designed moving-point identification algorithm is then applied to the object part to find moving objects. Experiments are performed on the SemanticKITTI dataset. Results show that the proposed MapCleaner outperforms state-of-the-art approaches on all five tested SemanticKITTI sequences. MapCleaner is a learning-free method and has few parameters to tune. It is also successfully evaluated on our own dataset, collected with a different type of LiDAR.
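Once a terrain height grid is available, splitting the map into below-terrain noise, terrain, and above-terrain object points reduces to a per-point height comparison, as in the sketch below. The grid layout, cell size, and tolerance band are assumptions; the paper's dense terrain estimation and moving-point identification are considerably more involved.

```python
import numpy as np

def partition_by_terrain(points, terrain_height, cell_size=0.5, band=0.15,
                         origin=(0.0, 0.0)):
    """Split an Nx3 point cloud into noise below the terrain, terrain points,
    and object points above the terrain, given a 2D grid of terrain heights.

    `terrain_height[i, j]` is the terrain elevation of the grid cell whose
    lower-left corner is `origin + (i, j) * cell_size`; `band` is the vertical
    tolerance for labeling a point as terrain.
    """
    ij = np.floor((points[:, :2] - np.asarray(origin)) / cell_size).astype(int)
    ij[:, 0] = np.clip(ij[:, 0], 0, terrain_height.shape[0] - 1)
    ij[:, 1] = np.clip(ij[:, 1], 0, terrain_height.shape[1] - 1)
    ground_z = terrain_height[ij[:, 0], ij[:, 1]]
    dz = points[:, 2] - ground_z
    below = dz < -band
    terrain = np.abs(dz) <= band
    above = dz > band
    return below, terrain, above
```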
