Special Issue "Point Cloud Processing in Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (1 May 2020).

Special Issue Editors

Prof. Dr. Wei Yao
Guest Editor
Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China
Interests: LiDAR; 3D scene perception and analysis; environmental remote sensing; sensor fusion
Prof. Dr. Francesco Pirotti
Guest Editor
CIRGEO - Interdepartmental Research Center of Geomatics - Department of Land, Environment, Agriculture and Forestry, University of Padova, 35020 Legnaro, PD, Italy
Interests: laser scanning; remote sensing; machine learning; geomatics engineering; photogrammetry
Dr. Yusheng Xu
Guest Editor
Photogrammetry and Remote Sensing, Technische Universität München, Munich, Germany
Interests: point cloud processing; spaceborne photogrammetry; computer vision; image analysis; 3D reconstruction

Special Issue Information

Dear Colleagues,

Point clouds are deemed one of the foundational pillars of representing the 3D digital world, despite the irregular topology among their discrete points. Recently, advances in sensor technologies that acquire point cloud data as a flexible and scalable geometric representation have paved the way for new ideas, methodologies, and solutions in countless remote sensing applications. State-of-the-art sensors are capable of capturing and describing objects in a scene with dense point clouds from various platforms (satellite, aerial, UAV, vehicle-borne, backpack, handheld, and static terrestrial), perspectives (nadir, oblique, and side-view), spectra (multispectral), and granularities (point density and completeness). Meanwhile, the ever-expanding application areas of point cloud processing cover not only conventional domains in geospatial analysis but also manufacturing, civil engineering, construction, transportation, ecology, forestry, mechanical engineering, and more.

The Special Issue aims at contributions that focus on processing and utilizing point cloud data acquired from laser scanners and other 3D imaging systems. We are particularly interested in original papers that address innovative techniques for generating, handling, and analyzing point cloud data; challenges in dealing with point cloud data in emerging remote sensing applications; and the development of new applications for point cloud data.

Prof. Dr.-Ing. Wei Yao
Prof. Francesco Pirotti
Prof. Dr. Naoto Yokoya
Dr. Yusheng Xu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Point cloud acquisition from laser scanners, stereo vision, panoramas, camera-phone images, and oblique and satellite imagery
  • Deep learning for point cloud processing
  • Point cloud registration and segmentation
  • Feature extraction, object detection, semantic labelling, and change detection
  • Point cloud processing for indoor modelling and BIM
  • Fusion of multimodal point clouds with imagery for object classification and modelling
  • Modeling urban and natural environment from aerial and mobile LiDAR/image-based point clouds
  • Industrial applications with large-scale point clouds
  • High-performance computing for large-scale point clouds

Published Papers (17 papers)

Research

Article
Lidar Data Reduction for Unmanned Systems Navigation in Urban Canyon
Remote Sens. 2020, 12(11), 1724; https://doi.org/10.3390/rs12111724 - 27 May 2020
Cited by 4 | Viewed by 1214
Abstract
This paper introduces a novel protocol for managing low altitude 3D aeronautical chart data to address the unique navigational challenges and collision risks associated with populated urban environments. Based on the Open Geospatial Consortium (OGC) 3D Tiles standard for geospatial data delivery, the proposed extension, called 3D Tiles Nav., uses a navigation-centric packet structure which automatically decomposes the navigable regions of space into hyperlocal navigation cells and encodes environmental surfaces that are potentially visible from each cell. The developed method is sensor agnostic and provides the ability to quickly and conservatively encode visibility directly from a region by enabling an expanded approach to viewshed analysis. In this approach, the navigation cells themselves are used to represent the intrinsic positional uncertainty often needed for navigation. Furthermore, we present this new data format and its unique features in detail, as well as a candidate framework illustrating how an Unmanned Traffic Management (UTM) system could support trajectory-based operations and performance-based navigation in the urban canyon. Our experiments and simulations show that this data reorganization enables 3D map streaming using less bandwidth and efficient 3D map-matching systems with limited on-board compute, storage, and sensor resources. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Article
Integration of Constructive Solid Geometry and Boundary Representation (CSG-BRep) for 3D Modeling of Underground Cable Wells from Point Clouds
Remote Sens. 2020, 12(9), 1452; https://doi.org/10.3390/rs12091452 - 04 May 2020
Viewed by 1126
Abstract
The shift from two-dimensional symbols to three-dimensional (3D) representations of underground cable wells is a developing trend, and 3D point cloud data are widely used due to their high precision. In this study, we utilize the characteristics of 3D terrestrial lidar point cloud data to build a CSG-BRep 3D model of underground cable wells whose spatial topological relationships are fully considered. To simplify the modeling process, the point cloud is first simplified; then, the main axis of the point cloud is extracted via an oriented bounding box (OBB), and the orientation of the point cloud is corrected by quaternion rotation. Furthermore, employing an adaptive method, the top point cloud is extracted and projected for boundary extraction. Utilizing the boundary information, we design the 3D cable well model. Finally, the cable well component model is generated by scanning the original point cloud. The experiments demonstrate that, along with the algorithm being fast, the proposed model is effective at displaying the 3D information of actual cable wells and meets current production demands. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)
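The orientation-correction step described in this abstract, extracting a main axis and rotating the cloud upright by quaternion, can be sketched in a few lines. The sketch below illustrates the general technique only, not the authors' implementation: a PCA eigenvector stands in for the OBB main axis, and all function names are our own.

```python
import numpy as np

def principal_axis(points):
    """Main axis of a point cloud: the eigenvector of the covariance
    matrix with the largest eigenvalue (a PCA stand-in for an OBB axis)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, np.argmax(eigvals)]

def quaternion_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation about a unit axis."""
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def rotate_by_quaternion(points, q):
    """Rotate points by unit quaternion q: p' = p + 2w(v x p) + 2(v x (v x p))."""
    w, v = q[0], q[1:]
    cross1 = np.cross(v, points)
    return points + 2 * w * cross1 + 2 * np.cross(v, cross1)

def align_axis_to_z(points):
    """Orientation correction: rotate the cloud so its main axis is +z."""
    a = principal_axis(points)
    z = np.array([0.0, 0.0, 1.0])
    if a @ z < 0:                 # resolve the eigenvector's sign ambiguity
        a = -a
    axis = np.cross(a, z)
    s = np.linalg.norm(axis)
    if s < 1e-12:                 # already aligned
        return points
    angle = np.arctan2(s, a @ z)
    q = quaternion_from_axis_angle(axis / s, angle)
    return rotate_by_quaternion(points, q)
```

After alignment, projecting the top of the cloud for boundary extraction reduces to a 2D problem in the xy-plane, which is the point of correcting the orientation first.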

Article
Automatic 3D Landmark Extraction System Based on an Encoder–Decoder Using Fusion of Vision and LiDAR
Remote Sens. 2020, 12(7), 1142; https://doi.org/10.3390/rs12071142 - 03 Apr 2020
Cited by 2 | Viewed by 1009
Abstract
To provide a realistic environment for remote sensing applications, point clouds are used to realize a three-dimensional (3D) digital world for the user. Motion recognition of objects, e.g., humans, is required to provide realistic experiences in the 3D digital world. To recognize a user's motions, 3D landmarks are provided by analyzing a 3D point cloud collected through a light detection and ranging (LiDAR) system or a red green blue (RGB) image collected visually. However, manual supervision is required to extract 3D landmarks, whether they originate from the RGB image or the 3D point cloud; thus, a method for extracting 3D landmarks without manual supervision is needed. Herein, an RGB image and a 3D point cloud are used together to extract 3D landmarks. The 3D point cloud provides the relative distance between the LiDAR and the user. Because, due to disparities, it cannot capture all the information about the user's entire body, the point cloud alone cannot yield a dense depth image that delineates the boundary of the user's body. Therefore, up-sampling is performed to increase the density of the depth image generated from the 3D point cloud. This paper proposes a system for extracting 3D landmarks from 3D point clouds and RGB images without manual supervision. A depth image, which provides the boundary of a user's motion, is generated from the 3D point cloud and RGB image collected by a LiDAR and an RGB camera, respectively. To extract 3D landmarks automatically, an encoder-decoder model is trained on the generated depth images and RGB images, and 3D landmarks are extracted with the trained encoder model. The method of extracting 3D landmarks from RGB depth (RGBD) images was verified experimentally, and 3D landmarks were extracted to evaluate the user's motions. In this manner, landmarks could be extracted according to the user's motions rather than from the RGB images alone. The depth images generated by the proposed method were 1.832 times denser than the up-sampling-based depth images generated with bilateral filtering. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)
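The density figure quoted in this abstract (a ratio of valid depth pixels) can be made concrete with a minimal sparse-depth-image projection. This is a generic pinhole-projection sketch, not the paper's pipeline; the intrinsics and function names are illustrative assumptions.

```python
import numpy as np

def project_to_depth_image(points, fx, fy, cx, cy, shape):
    """Project 3D points (camera frame, z forward) into a sparse depth image.
    Pixels with no returns stay 0; on collisions the nearest depth is kept."""
    h, w = shape
    depth = np.zeros((h, w))
    p = points[points[:, 2] > 0]
    u = np.round(fx * p[:, 0] / p[:, 2] + cx).astype(int)
    v = np.round(fy * p[:, 1] / p[:, 2] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], p[inside][:, 2]):
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth

def density(depth):
    """Fraction of pixels carrying a depth value -- the quantity the
    paper's 1.832x comparison is about."""
    return np.count_nonzero(depth) / depth.size
```

Up-sampling (e.g., with a bilateral filter, as in the paper's baseline) then fills the zero pixels to raise this density before the encoder-decoder sees the image.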

Article
Multi-Label Learning based Semi-Global Matching Forest
Remote Sens. 2020, 12(7), 1069; https://doi.org/10.3390/rs12071069 - 26 Mar 2020
Cited by 2 | Viewed by 1179
Abstract
Semi-Global Matching (SGM) approximates a 2D Markov Random Field (MRF) via multiple 1D scanline optimizations, which serves as a good trade-off between accuracy and efficiency in dense matching. Nevertheless, performance is limited by the simple summation of the aggregated costs from all 1D scanline optimizations for the final disparity estimation. SGM-Forest improves on SGM by training a random forest to predict the best scanline according to each scanline's disparity proposal. The disparity estimated by the best scanline acts as a reference to adaptively adopt close proposals for further post-processing. However, in many cases more than one scanline is capable of providing a good prediction. Training the random forest with only one scanline labeled may limit or even confuse the learning procedure when other scanlines can offer similar contributions. In this paper, we propose a multi-label classification strategy to further improve SGM-Forest. Each training sample is allowed to carry multiple labels (or no label) if more than one scanline (or none) gives a proper prediction. We test the proposed method on stereo matching datasets from Middlebury, ETH3D, the EuroSDR image matching benchmark, and the 2019 IEEE GRSS Data Fusion Contest. The results indicate that, under the framework of SGM-Forest, the multi-label strategy consistently outperforms the single-label scheme. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)
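The multi-label strategy described here amounts to a small change in how training targets are built: every scanline whose proposal is close enough to ground truth becomes a positive label, instead of only the single closest one. A minimal sketch, with an assumed tolerance parameter and our own function name:

```python
import numpy as np

def multilabel_targets(proposals, gt, tol=1.0):
    """Multi-label targets for scanline selection.

    proposals: (n_pixels, n_scanlines) disparity proposals, one per 1D
    scanline optimization; gt: (n_pixels,) ground-truth disparity.
    A scanline is a positive label whenever its proposal lies within
    `tol` of ground truth, so a pixel may carry several labels (or none),
    unlike a single-label scheme that keeps only the closest scanline.
    """
    return (np.abs(proposals - gt[:, None]) <= tol).astype(np.float32)
```

A random forest (or any multi-label classifier) trained against these targets is then free to credit every scanline that would have produced a usable disparity.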

Article
Point Cloud Semantic Segmentation Using a Deep Learning Framework for Cultural Heritage
Remote Sens. 2020, 12(6), 1005; https://doi.org/10.3390/rs12061005 - 20 Mar 2020
Cited by 40 | Viewed by 2859
Abstract
In the Digital Cultural Heritage (DCH) domain, the semantic segmentation of 3D point clouds with Deep Learning (DL) techniques can help to recognize historical architectural elements at an adequate level of detail, and thus speed up the modeling of historical buildings for developing BIM models from survey data, referred to as HBIM (Historical Building Information Modeling). In this paper, we propose a DL framework for point cloud segmentation, which employs an improved DGCNN (Dynamic Graph Convolutional Neural Network) by adding meaningful features such as normals and colour. The approach has been applied to a newly collected and publicly available DCH dataset: the ArCH (Architectural Cultural Heritage) Dataset. This dataset comprises 11 labeled point clouds, derived from the union of several single scans or from the integration of the latter with photogrammetric surveys. The scenes involved are both indoor and outdoor, with churches, chapels, cloisters, porticoes and loggias covered by a variety of vaults and supported by many different types of columns. They belong to different historical periods and styles, in order to make the dataset as non-uniform and heterogeneous as possible (in the repetition of architectural elements) and the results as general as possible. The experiments yield high accuracy, demonstrating the effectiveness and suitability of the proposed approach. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Article
Evaluating Thermal Attribute Mapping Strategies for Oblique Airborne Photogrammetric System AOS-Tx8
Remote Sens. 2020, 12(1), 112; https://doi.org/10.3390/rs12010112 - 30 Dec 2019
Cited by 2 | Viewed by 1069
Abstract
Thermal imagery is widely used in various fields of remote sensing. In this study, a novel processing scheme is developed to process the data acquired by the oblique airborne photogrammetric system AOS-Tx8, consisting of four thermal cameras and four RGB cameras, with the goal of large-scale thermal attribute mapping. In order to merge 3D RGB data and 3D thermal data, registration is conducted in four steps: First, thermal and RGB point clouds are generated independently by applying structure from motion (SfM) photogrammetry to both the thermal and RGB imagery. Next, a coarse point cloud registration is performed with the support of georeferencing data (global positioning system, GPS). Subsequently, a fine point cloud registration is conducted by octree-based iterative closest point (ICP). Finally, three different texture mapping strategies are compared. Experimental results showed that global image pose refinement outperforms the other two strategies in registration accuracy between the thermal imagery and the RGB point cloud. Potential building thermal leakages in large areas can be quickly detected in the generated texture mapping results. Furthermore, a combination of the proposed workflow and the oblique airborne system allows for a detailed thermal analysis of building roofs and facades. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)
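The fine-registration step named in this abstract, iterative closest point, follows a standard loop: match each source point to its nearest target point, solve for the rigid transform (here with the Kabsch/SVD estimate), apply it, and repeat. A minimal sketch of that general loop, not the paper's octree-accelerated variant; a brute-force neighbor search stands in for the octree, and the function name is our own.

```python
import numpy as np

def icp(source, target, iters=30):
    """Point-to-point ICP with a Kabsch (SVD) pose update per iteration.
    Returns the accumulated rotation R, translation t, and aligned source."""
    src = source.copy()
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # closest-point correspondences (brute force, O(n*m); an octree
        # or k-d tree makes this step fast in practice)
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                     # best rotation (Kabsch)
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t, src
```

The coarse GPS-based registration in the paper plays the role of the good initial guess that ICP needs: the loop above only converges reliably when the two clouds already start close together.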

Article
Low Overlapping Point Cloud Registration Using Line Features Detection
Remote Sens. 2020, 12(1), 61; https://doi.org/10.3390/rs12010061 - 23 Dec 2019
Cited by 7 | Viewed by 1348
Abstract
Modern robotic exploratory strategies assume multi-agent cooperation, which raises a need for an effective exchange of acquired scans of the environment in the absence of a reliable global positioning system. In such situations, agents compare scans of the outside world to determine whether they overlap in some region, and if they do, they determine the right matching between them. The process of matching multiple point cloud scans is called point cloud registration. With existing point cloud registration approaches, a good match between any two point clouds is achieved if and only if there is a large overlap between them; however, this limits the advantage of using multiple robots, for instance, for time-effective 3D mapping. Hence, a point cloud registration approach that can work with low overlapping scans is highly desirable. This work proposes a novel solution for the point cloud registration problem with a very low overlapping area between the two scans. In doing so, no initial relative positions of the point clouds are assumed. Most state-of-the-art point cloud registration approaches iteratively match keypoints in the scans, which is computationally expensive. In contrast to these traditional approaches, a more efficient line-features-based point cloud registration approach is proposed in this work. This approach, besides reducing the computational cost, avoids the high false-positive rate of existing keypoint detection algorithms, which becomes especially significant in low overlapping point cloud registration. The effectiveness of the proposed approach is demonstrated experimentally. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Article
Pole-Like Street Furniture Segmentation and Classification in Mobile LiDAR Data by Integrating Multiple Shape-Descriptor Constraints
Remote Sens. 2019, 11(24), 2920; https://doi.org/10.3390/rs11242920 - 06 Dec 2019
Cited by 3 | Viewed by 1148
Abstract
Nowadays, mobile laser scanning is widely used for understanding urban scenes, especially for the extraction and recognition of pole-like street furniture such as lampposts, traffic lights and traffic signs. However, state-of-the-art methods may yield low segmentation accuracy in overlapping scenes, and object classification accuracy can be strongly influenced by the large discrepancy in the instance numbers of different objects in the same scene. To address these issues, we present a complete paradigm for pole-like street furniture segmentation and classification using mobile LiDAR (light detection and ranging) point clouds. First, we propose a 3D density-based segmentation algorithm which considers two different conditions: isolated furniture and connected furniture in overlapping scenes. After that, a vertical region grow algorithm is employed for component splitting, and a new shape distribution estimation method is proposed to obtain more accurate global shape descriptors. For object classification, an integrated shape constraint based on the splitting result of pole-like street furniture (SplitISC) is introduced and integrated into a retrieval procedure. Two test datasets are used to verify the performance and effectiveness of the proposed method. The experimental results demonstrate that the proposed method achieves better classification results on both sites than the existing shape distribution method. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)
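The "shape distribution" baseline this abstract refers to is a family of global shape descriptors built from randomly sampled point-pair statistics. A minimal sketch of the classic D2 variant (histogram of random pairwise distances), given here only as background for the descriptor the paper refines; the function name and parameters are our own.

```python
import numpy as np

def d2_shape_distribution(points, n_pairs=10000, bins=32, rng=None):
    """D2 shape distribution: a global shape descriptor built from the
    histogram of distances between randomly sampled point pairs."""
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    # binning over (0, max distance) normalizes away absolute scale
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max() + 1e-9))
    return hist / hist.sum()          # normalize so descriptors compare
```

Two objects can then be compared by any histogram distance (L1, chi-squared, etc.), which is what makes such descriptors usable in a retrieval procedure.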

Article
A Novel Method for Plane Extraction from Low-Resolution Inhomogeneous Point Clouds and its Application to a Customized Low-Cost Mobile Mapping System
Remote Sens. 2019, 11(23), 2789; https://doi.org/10.3390/rs11232789 - 26 Nov 2019
Cited by 6 | Viewed by 1481
Abstract
Over the last decade, increasing demands for building interior mapping have brought the challenge of effectively and efficiently acquiring geometric information. Most mobile mapping methods rely on the integration of Simultaneous Localization And Mapping (SLAM) and costly Inertial Measurement Units (IMUs). Meanwhile, these methods also suffer misalignment errors caused by the low-resolution inhomogeneous point clouds captured using multi-line Mobile Laser Scanners (MLSs). While point-based alignments between such point clouds are affected by the highly dynamic moving patterns of the platform, plane-based methods are limited by the poor quality of the extracted planes, which reduces the methods' robustness, reliability, and applicability. To alleviate these issues, we propose and develop a method for plane extraction from low-resolution inhomogeneous point clouds. Based on the definition of virtual scanlines and the Enhanced Line Simplification (ELS) algorithm, the method extracts feature points, generates line segments, forms patches, and merges multi-direction fractions to form planes. The proposed method reduces the over-segmentation fractions caused by measurement noise and scanline curvature. A dedicated plane-to-plane point cloud alignment workflow based on the proposed plane extraction method was created to demonstrate the method's application. The implementation of the coarse-to-fine procedure and the shortest-path initialization strategy eliminates the necessity of IMUs in mobile mapping. A mobile mapping prototype was designed to test the performance of the proposed methods. The results show that the proposed workflow and hardware system achieve centimeter-level accuracy, which suggests that they can be applied to mobile mapping and sensor fusion. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)
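The feature-point extraction this abstract describes builds on polyline simplification along scanlines. As background, a minimal 2D Douglas-Peucker sketch, the classic algorithm that line-simplification variants such as ELS enhance; this is not the paper's ELS itself, and the function name and tolerance are illustrative.

```python
import numpy as np

def douglas_peucker(points, eps):
    """Classic Douglas-Peucker polyline simplification (2D): keep a point
    as a feature when it deviates more than eps from the chord joining
    the segment's endpoints; recurse on the two halves it splits."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    diff = points - start
    if norm == 0:
        dists = np.linalg.norm(diff, axis=1)
    else:
        # perpendicular distance of each point to the start-end chord
        dists = np.abs(chord[0] * diff[:, 1] - chord[1] * diff[:, 0]) / norm
    i = int(np.argmax(dists))
    if dists[i] <= eps:
        return np.vstack([start, end])          # whole span is near-straight
    left = douglas_peucker(points[: i + 1], eps)
    right = douglas_peucker(points[i:], eps)
    return np.vstack([left[:-1], right])        # drop the duplicated joint
```

Run along a (virtual) scanline profile, the surviving vertices are exactly the high-curvature feature points from which line segments, patches, and finally planes can be assembled.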

Article
An Efficient Encoding Voxel-Based Segmentation (EVBS) Algorithm Based on Fast Adjacent Voxel Search for Point Cloud Plane Segmentation
Remote Sens. 2019, 11(23), 2727; https://doi.org/10.3390/rs11232727 - 20 Nov 2019
Cited by 10 | Viewed by 1530
Abstract
Plane segmentation is a basic yet important process in light detection and ranging (LiDAR) point cloud processing. Traditional point cloud plane segmentation algorithms are typically affected by the number of points and by noise, which results in slow segmentation and poor segmentation quality. Hence, an efficient encoding voxel-based segmentation (EVBS) algorithm based on a fast adjacent voxel search is proposed in this study. First, a binary octree algorithm is proposed to construct and code the voxels used as segmentation objects, which allows voxel features to be computed quickly and accurately. Second, a voxel-based region growing algorithm is proposed to cluster the corresponding voxels to perform the initial point cloud segmentation, which improves the rationality of seed selection. Finally, a refining-point method is proposed to solve the problem of under-segmentation in unlabeled voxels by judging the relationship between the points and the segmented plane. Experimental results demonstrate that the proposed algorithm is better than traditional algorithms in terms of computation time, extraction accuracy, and recall rate. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)
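The core idea named in this abstract (encode each voxel so that adjacent-voxel lookup is cheap, then grow regions voxel-by-voxel) can be sketched compactly. The sketch below is a generic illustration, not the paper's EVBS: voxel indices are packed into a single integer key (21 bits per axis, assuming non-negative indices below 2^21), so a 6-neighbor probe is one dictionary lookup, and region growing is a flood fill over those keys.

```python
import numpy as np

def voxelize(points, size):
    """Encode each point's voxel as one integer key so adjacent-voxel
    lookup is a dictionary probe rather than a spatial search."""
    ijk = np.floor(points / size).astype(np.int64)
    ijk -= ijk.min(axis=0)                     # shift to non-negative indices
    key = (ijk[:, 0] << 42) | (ijk[:, 1] << 21) | ijk[:, 2]
    voxels = {}
    for k, p in zip(key, points):
        voxels.setdefault(int(k), []).append(p)
    return voxels

def grow_regions(voxels):
    """Voxel-based region growing: flood fill over 6-connected neighbors,
    found by adding per-axis key offsets."""
    offsets = [1, -1, 1 << 21, -(1 << 21), 1 << 42, -(1 << 42)]
    labels, current = {}, 0
    for seed in voxels:
        if seed in labels:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            k = stack.pop()
            for o in offsets:
                n = k + o
                if n in voxels and n not in labels:
                    labels[n] = current
                    stack.append(n)
        current += 1
    return labels
```

A real plane-segmentation variant would additionally test a planarity criterion (e.g., similar voxel normals) before admitting a neighbor to the region; the sketch grows on connectivity alone.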

Article
RealPoint3D: Generating 3D Point Clouds from a Single Image of Complex Scenarios
Remote Sens. 2019, 11(22), 2644; https://doi.org/10.3390/rs11222644 - 13 Nov 2019
Cited by 6 | Viewed by 1480
Abstract
Generating 3D point clouds from a single image has attracted considerable attention from researchers in the fields of multimedia, remote sensing and computer vision. With the recent proliferation of deep learning, various deep models have been proposed for 3D point cloud generation. However, they require objects to be captured with absolutely clean backgrounds and fixed viewpoints, which highly limits their application in real environments. To guide 3D point cloud generation, we propose a novel network, RealPoint3D, that integrates prior 3D shape knowledge into the network. Taking additional 3D information, RealPoint3D can handle 3D object generation from a single real image captured from any viewpoint against a complex background. Specifically, given a query image, we retrieve the nearest shape model from a pre-prepared 3D model database. Then, the image, together with the retrieved shape model, is fed into RealPoint3D to generate a fine-grained 3D point cloud. We evaluated the proposed RealPoint3D on the ShapeNet and ObjectNet3D datasets for 3D point cloud generation. Experimental results and comparisons with state-of-the-art methods demonstrate that our framework achieves superior performance. Furthermore, the proposed framework works well for real images in complex backgrounds (i.e., the image contains other objects in addition to the reconstructed one, and the reconstructed object may be occluded or truncated) with various viewing angles. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)

Article
Towards Automatic Segmentation and Recognition of Multiple Precast Concrete Elements in Outdoor Laser Scan Data
Remote Sens. 2019, 11(11), 1383; https://doi.org/10.3390/rs11111383 - 10 Jun 2019
Cited by 8 | Viewed by 1658
Abstract
To date, to improve construction quality and efficiency and reduce environmental pollution, the use of precast concrete elements (PCEs) has become popular in civil engineering. As PCEs are manufactured in batches and possess complicated shapes, traditional manual inspection methods cannot meet today's requirements for the production rate of PCEs. The manual inspection of PCEs needs to be conducted one by one after production, resulting in the excessive storage of finished PCEs in storage yards. Therefore, many studies have proposed the use of terrestrial laser scanners (TLSs) for the quality inspection of PCEs. However, all of these studies focus on the data of a single PCE or a single surface of a PCE, acquired from a unique or predefined scanning angle. This remains inefficient and impractical in reality, where hundreds of types of PCEs with different properties may exist. Motivated by this, this study proposes to scan multiple PCEs simultaneously with TLSs to improve inspection efficiency. In particular, a segmentation and recognition approach is proposed to automatically extract and identify the different types of PCEs in a large amount of outdoor laser scan data. For data segmentation, the 3D data are first converted into 2D images. Image processing is then combined with the radially bounded nearest neighbor graph (RBNN) algorithm to speed up the laser scan data segmentation. For PCE recognition, based on the as-designed models of PCEs in building information modeling (BIM), the proposed method uses coarse matching followed by fine matching to recognize the type of each PCE. To the best of our knowledge, no previous work has addressed the automatic recognition of PCEs from millions or even tens of millions of outdoor laser scan points containing many different types of PCEs. To verify the feasibility of the proposed method, experimental studies were conducted on outdoor PCE laser scan data, considering the shape, type, and number of PCEs. In total, 22 PCEs of 12 different types are involved in this paper. The experimental results confirm the effectiveness and efficiency of the proposed approach for the automatic segmentation and recognition of different PCEs. Full article
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)
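The RBNN idea referenced in this abstract is simple to state: link every pair of points closer than a radius r and take the connected components of that graph as segments. A brute-force sketch of that general idea (the paper accelerates it with an image-space preprocessing step; quadratic memory makes this version suitable only for small clouds, and the function name is our own):

```python
import numpy as np

def rbnn_segment(points, r):
    """Radially bounded nearest neighbor (RBNN) clustering: link points
    closer than r and return connected-component labels per point."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    adj = d2 <= r * r                     # radially bounded adjacency
    labels = np.full(n, -1, dtype=int)
    current = 0
    for i in range(n):                    # flood fill each component
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = current
        while stack:
            j = stack.pop()
            for k in np.flatnonzero(adj[j]):
                if labels[k] == -1:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels
```

Each resulting component is a candidate PCE instance, which the coarse-to-fine matching against BIM models then classifies.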

Article
A Multi-Primitive-Based Hierarchical Optimal Approach for Semantic Labeling of ALS Point Clouds
Remote Sens. 2019, 11(10), 1243; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11101243 - 24 May 2019
Cited by 2 | Viewed by 1354
Abstract
There are normally three main steps in labeling airborne laser scanning (ALS) point clouds: first, use appropriate primitives to represent the scanned scenes; second, calculate discriminative features for each primitive; and third, introduce a classifier to label the point clouds. This paper investigates multiple primitives to effectively represent scenes and exploit their geometric relationships. Relationships are graded according to the properties of the related primitives. Then, based on the initial labeling results, a novel hierarchical optimization strategy is developed to refine the semantic labeling. The proposed approach was tested on two representative sets of ALS point clouds, namely the Vaihingen datasets and Hong Kong’s Central District dataset, and the results were compared with those of typical methods from previous work. Quantitative assessments on the two experimental datasets showed that the proposed approach outperformed the reference methods on both. Correctness scores exceeded 98% in all cases of the Vaihingen datasets and reached 96% on the Hong Kong dataset. The results show that our approach to labeling different classes in ALS point clouds is robust and significant for future applications, such as 3D modeling and change detection from point clouds.
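As a toy illustration of label optimization over higher-level primitives, assume point-level labels are smoothed by majority vote within each segment, so that a coherent primitive receives a single label. This simplification is assumed for illustration and is not the paper's hierarchical optimal strategy.

```python
# Toy label smoothing: each segment (a higher-level primitive) takes the
# majority label of its member points.
from collections import Counter

def optimize_labels(point_labels, segments):
    """segments: list of point-index lists; returns smoothed labels."""
    labels = list(point_labels)
    for seg in segments:
        majority = Counter(labels[i] for i in seg).most_common(1)[0][0]
        for i in seg:
            labels[i] = majority
    return labels

noisy = ["roof", "roof", "tree", "roof", "tree", "tree"]
print(optimize_labels(noisy, segments=[[0, 1, 2, 3], [4, 5]]))
# → ['roof', 'roof', 'roof', 'roof', 'tree', 'tree']
```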
Article
Three-Dimensional Reconstruction of Structural Surface Model of Heritage Bridges Using UAV-Based Photogrammetric Point Clouds
Remote Sens. 2019, 11(10), 1204; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11101204 - 21 May 2019
Cited by 31 | Viewed by 1981
Abstract
Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. The widespread use of unmanned aerial vehicles (UAVs) now provides a practical solution for generating 3D point clouds and models, which can drastically reduce the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. Specifically, we propose a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through a structure-from-motion workflow. A segmentation method is developed based on a supervoxel structure and global graph optimization, which can effectively separate bridge components according to geometric features. A classification tree, together with bridge geometry, is then used to recognize the different structural elements among the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments on two bridges in China demonstrate the potential of the presented reconstruction method, combining UAV photogrammetry and point cloud processing, for the 3D digital documentation of heritage bridges. Using given markers, the reconstruction error of the point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on the testing data both exceed 0.8, and a recognition accuracy better than 0.8 is achieved.
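The rule-based recognition step can be sketched as follows. The element names and geometric thresholds here are hypothetical, chosen only to show the pattern of classifying segments from simple geometric cues; they are not the paper's actual classification tree.

```python
# Hypothetical geometric rules for labeling bridge-component segments:
# a flat, wide slab is called a "deck", a tall, slender column a "pier".
def classify_segment(points):
    xs, ys, zs = zip(*points)
    height = max(zs) - min(zs)
    horiz = max(max(xs) - min(xs), max(ys) - min(ys))
    if horiz > 3 * height and height < 1.0:
        return "deck"   # flat, wide slab
    if height > 2 * horiz:
        return "pier"   # tall, slender column
    return "other"

# Synthetic segments: a thin horizontal slab and a slender vertical column.
deck = [(x, y, 0.1 * (x % 2)) for x in range(10) for y in range(4)]
pier = [(0.1 * (z % 2), 0.0, 0.5 * z) for z in range(12)]
print(classify_segment(deck), classify_segment(pier))  # → deck pier
```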
Article
Non-Rigid Vehicle-Borne LiDAR-Assisted Aerotriangulation
Remote Sens. 2019, 11(10), 1188; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11101188 - 18 May 2019
Cited by 1 | Viewed by 1338
Abstract
Vehicle-borne laser scanning (VLS) can easily scan the road surface at close range with high density, while an unmanned aerial vehicle (UAV) can capture images over a wider area of the ground. Because the two platforms are complementary, combining them makes data acquisition more effective. In this paper, a non-rigid method for the aerotriangulation of UAV images assisted by a vehicle-borne light detection and ranging (LiDAR) point cloud is proposed, which greatly reduces the number of control points and improves automation. We convert LiDAR point cloud-assisted aerotriangulation into a registration problem between two point clouds, which does not require complicated feature extraction and matching between the point cloud and the images. Compared with the iterative closest point (ICP) algorithm, this method can address non-rigid image distortion with a more rigorous adjustment model and a higher aerotriangulation accuracy. The experimental results show that the constraint of the LiDAR point cloud ensures high aerotriangulation accuracy even in the absence of control points. The root-mean-square errors (RMSEs) of the checkpoints along the x, y, and z axes are 0.118 m, 0.163 m, and 0.084 m, respectively, which verifies the reliability of the proposed method. As a necessary condition for joint mapping, research on combining VLS and UAV images without ground control will greatly improve the efficiency of joint mapping and reduce its cost.
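For intuition about the ICP baseline the abstract compares against, here is a toy single ICP-style step that estimates only a translation: each source point is matched to its nearest target point and the mean offset is applied. The paper's contribution is precisely that it goes beyond such rigid models, so this sketch illustrates the baseline, not the proposed method.

```python
# One rigid ICP-style iteration (translation only) as a didactic baseline.
from math import dist

def icp_translation_step(source, target):
    # Match each source point to its nearest target point, then shift the
    # whole source cloud by the mean offset of the matches.
    offsets = []
    for p in source:
        q = min(target, key=lambda t: dist(p, t))
        offsets.append(tuple(b - a for a, b in zip(p, q)))
    dim = len(source[0])
    shift = tuple(sum(o[k] for o in offsets) / len(offsets) for k in range(dim))
    moved = [tuple(a + d for a, d in zip(p, shift)) for p in source]
    return moved, shift

src = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tgt = [(0.2, 0.1, 0.0), (1.2, 0.1, 0.0), (0.2, 1.1, 0.0)]  # src shifted by (0.2, 0.1, 0)
moved, shift = icp_translation_step(src, tgt)
print(shift)  # close to (0.2, 0.1, 0.0)
```

When the displacement is small relative to point spacing, the nearest-neighbor matches are the true correspondences and one step recovers the translation; real ICP iterates and also estimates rotation.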
Article
Surfaces of Revolution (SORs) Reconstruction Using a Self-Adaptive Generatrix Line Extraction Method from Point Clouds
Remote Sens. 2019, 11(9), 1125; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11091125 - 10 May 2019
Viewed by 1294
Abstract
This paper presents an automatic reconstruction algorithm for surfaces of revolution (SORs) with a self-adaptive method for generatrix line extraction from point clouds. The proposed method does not need to calculate point cloud normals, which greatly improves the efficiency and accuracy of SOR reconstruction. First, the rotation axis of a SOR is automatically extracted as the axial direction with the minimum relative deviation among the three candidates, which works for both tall-thin and short-wide SORs. Second, the projection profile of the SOR is extracted using a triangulated irregular network (TIN) model and the random sample consensus (RANSAC) algorithm. Third, the point set of the generatrix line is determined by searching for the extrema of the Z coordinate, together with overflow-point processing, and the type of generatrix line is then determined by comparing the RMS errors of linear fitting and quadratic curve fitting. To validate the efficiency and accuracy of the proposed method, two kinds of SORs are selected for comparative analysis: simple SORs with a straight generatrix line and complex SORs with a curved one. The results demonstrate that the proposed method is robust and can reconstruct SORs from point clouds with high accuracy and efficiency.
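The RANSAC step used for profile extraction can be sketched in 2D. This standalone line-fitting routine is illustrative, not the paper's exact implementation: it repeatedly samples two points, fits the line through them, and keeps the consensus set with the most inliers.

```python
# Illustrative 2D RANSAC line fit: sample two points, score by inlier count.
import random

def ransac_line(points, n_iter=200, inlier_tol=0.05, seed=0):
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if (x1, y1) == (x2, y2):
            continue
        # Line through the two samples: a*x + b*y + c = 0, with (a, b) unit.
        a, b = y2 - y1, x1 - x2
        norm = (a * a + b * b) ** 0.5
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) <= inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Ten points on y = 2x plus three gross outliers.
pts = [(x, 2 * x) for x in range(10)] + [(1, 9), (4, 1), (7, 20)]
print(len(ransac_line(pts)))  # 10 collinear inliers recovered
```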

Review

Review
A Review on Deep Learning Techniques for 3D Sensed Data Classification
Remote Sens. 2019, 11(12), 1499; https://0-doi-org.brum.beds.ac.uk/10.3390/rs11121499 - 25 Jun 2019
Cited by 58 | Viewed by 5875
Abstract
Over the past decade, deep learning has driven progress in 2D image understanding. Despite these advances, techniques for automatically understanding 3D sensed data, such as point clouds, are comparatively immature. However, with a range of important applications, from indoor robotic navigation to national-scale remote sensing, there is high demand for algorithms that can learn to automatically understand and classify 3D sensed data. In this paper, we review the current state-of-the-art deep learning architectures for processing unstructured Euclidean data. We begin by covering the background concepts and traditional methodologies. We then review the main current approaches, including RGB-D, multi-view, volumetric, and fully end-to-end architecture designs. Datasets for each category are documented and explained. Finally, we give a detailed discussion of the future of deep learning for 3D sensed data, using the literature to identify the areas where future research would be most valuable.
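Of the representation families such reviews cover, the volumetric one is easy to sketch: a point cloud is voxelized into a binary occupancy grid that a 3D CNN could consume. The grid size and bounds below are arbitrary choices for illustration, not values from the review.

```python
# Voxelize a point cloud into a binary occupancy grid (the "volumetric"
# representation): each occupied cell is recorded by its (i, j, k) index.
def voxelize(points, grid=4, lo=0.0, hi=1.0):
    cell = (hi - lo) / grid
    occupied = set()
    for x, y, z in points:
        # Clamp to the last cell so points exactly at `hi` stay in range.
        i = min(int((x - lo) / cell), grid - 1)
        j = min(int((y - lo) / cell), grid - 1)
        k = min(int((z - lo) / cell), grid - 1)
        occupied.add((i, j, k))
    return occupied

pts = [(0.1, 0.1, 0.1), (0.12, 0.11, 0.13), (0.9, 0.9, 0.9)]
print(sorted(voxelize(pts)))  # → [(0, 0, 0), (3, 3, 3)]
```

Occupancy grids trade memory (O(grid³)) for the regular structure that standard 3D convolutions require, which is one of the trade-offs such reviews discuss against point-based end-to-end architectures.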
