Special Issue "Advances in Deep Learning Based 3D Scene Understanding from LiDAR"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 October 2021.

Special Issue Editors

Dr. Dong Chen
Guest Editor
College of Civil Engineering, Nanjing Forestry University, Nanjing 210037, China
Interests: image- and LiDAR-based segmentation and reconstruction; full-waveform LiDAR data processing; related remote sensing applications in the field of forest ecosystems
Dr. Jiju Poovvancheri
Guest Editor
Department of Math and Computing Science, Saint Mary’s University, Halifax, NS B3P 2M6, Canada
Interests: computer graphics; 3D computer vision; geometric deep learning; related applications including motion capture for VR/AR and LiDAR-based urban modeling
Dr. Zhengxin Zhang
Guest Editor
College of Resource Environment and Tourism, Capital Normal University, Beijing 100048, China
Interests: LiDAR data processing; quality analysis of geographic information systems; remote sensing image processing; algorithm development
Dr. Liqiang Zhang
Guest Editor
State Key Laboratory of Remote Sensing Science, Faculty of Geographical Science, Beijing Normal University, Beijing 100875, China
Interests: land-use change; land change modeling; spatial analysis; deep learning; climate change; sustainable development; big remote sensing data

Special Issue Information

Dear Colleagues,

Rapid advancements in Light Detection and Ranging (LiDAR) technology and recent breakthroughs in 3D deep learning have dramatically improved the ability to recognize physical objects and interpret the physical world at scale. Many applications, such as autonomous robotics and urban planning, use real-time and/or offline inference of information about the physical world and the objects therein from 3D point clouds. In general, the 3D scene understanding problem consists of a set of sub-problems, including scan registration, segmentation, object recognition, and scene modeling. Driven by the increasing availability of annotated public datasets, e.g., KITTI, Toronto3D, RoofN3D, or Semantic3D, the remote sensing community is increasingly shifting towards machine learning/deep learning algorithms to efficiently solve these fundamental problems in physical world interpretation. This Special Issue aims to provide a forum for disseminating recent advances in the research and applications of 3D scene understanding from LiDAR scans, with a particular focus on deep learning algorithms. This issue calls for machine learning/deep learning models, datasets, and any specific tools for data generation or annotation for LiDAR-based scene understanding, object classification, and modeling.

Dr. Dong Chen
Dr. Jiju Poovvancheri
Dr. Zhengxin Zhang
Dr. Liqiang Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep learning for LiDAR processing
  • LiDAR scan processing
  • LiDAR registration
  • Object detection
  • 3D object recognition
  • LiDAR segmentation
  • LiDAR classification
  • Scene understanding
  • 3D scene modeling
  • Dynamic object tracking
  • Tree classification/modeling
  • Road segmentation
  • Building reconstruction
  • Ubiquitous point cloud interpretation
  • Point cloud filtering

Published Papers (3 papers)


Research

Article
LiDAR-Based SLAM under Semantic Constraints in Dynamic Environments
Remote Sens. 2021, 13(18), 3651; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13183651 - 13 Sep 2021
Abstract
To meet the realistic demands of robotic applications, simultaneous localisation and mapping (SLAM) has gradually moved from static environments to complex dynamic environments, while traditional SLAM methods usually suffer from pose estimation deviations caused by errors in data association due to the interference of dynamic elements in the environment. This problem is effectively addressed in the present study by proposing a SLAM approach based on light detection and ranging (LiDAR) under semantic constraints in dynamic environments. Four main modules are used for the projection of point cloud data, semantic segmentation, dynamic element screening, and semantic map construction. A LiDAR point cloud semantic segmentation network, SANet, based on a spatial attention mechanism is proposed, which significantly improves the real-time performance and accuracy of point cloud semantic segmentation. A dynamic element selection algorithm is designed and used with prior knowledge to significantly reduce the pose estimation deviations caused by dynamic elements in SLAM. The results of experiments conducted on the public datasets SemanticKITTI, KITTI, and SemanticPOSS show that the accuracy and robustness of the proposed approach are significantly improved.
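As an illustration of the dynamic-element screening step described in this abstract, the following is a minimal sketch (not the authors' implementation) of discarding points whose predicted semantic class is potentially dynamic before they can corrupt scan matching; the class IDs and the `screen_dynamic_points` helper are hypothetical, and per-point labels are assumed to come from an upstream segmentation network.

```python
import numpy as np

# Hypothetical set of semantic class IDs treated as potentially dynamic
# (e.g., car, bicycle, truck in a SemanticKITTI-like label map)
DYNAMIC_CLASSES = {10, 11, 13}

def screen_dynamic_points(points: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Keep only points whose semantic class is static, so that
    pose estimation is not biased by moving objects."""
    static_mask = ~np.isin(labels, list(DYNAMIC_CLASSES))
    return points[static_mask]

# Usage: 5 points, two of which carry a dynamic class label (10)
pts = np.random.rand(5, 3)
lbl = np.array([1, 10, 2, 10, 3])
static_pts = screen_dynamic_points(pts, lbl)  # keeps the 3 static points
```

In a full pipeline this filter would sit between the segmentation module and the registration step, with the surviving points also feeding the semantic map.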
(This article belongs to the Special Issue Advances in Deep Learning Based 3D Scene Understanding from LiDAR)

Article
Point Cloud Classification Algorithm Based on the Fusion of the Local Binary Pattern Features and Structural Features of Voxels
Remote Sens. 2021, 13(16), 3156; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163156 - 10 Aug 2021
Abstract
Point cloud classification is a key technology for point cloud applications, and point cloud feature extraction is a key step towards achieving point cloud classification. Although there are many point cloud feature extraction and classification methods, and the acquisition of colored point cloud data has become easier in recent years, most point cloud processing algorithms do not consider the color information associated with the point cloud or do not make full use of it. Therefore, we propose a voxel-based local feature descriptor, the voxel-based local binary pattern (VLBP), and fuse point cloud RGB information and geometric structure features using a random forest classifier to build a color point cloud classification algorithm. The proposed algorithm voxelizes the point cloud; divides the neighborhood of the center point into cubes (i.e., multiple adjacent sub-voxels); compares the gray information of the voxel center and adjacent sub-voxels; performs voxel global thresholding to convert it into a binary code; and uses a local difference sign–magnitude transform (LDSMT) to decompose the local difference of an entire voxel into two complementary components of sign and magnitude. Then, the VLBP feature of each point is extracted. To obtain more structural information about the point cloud, the proposed method extracts the normal vector of each point and the corresponding fast point feature histogram (FPFH) based on the normal vector. Finally, the geometric structure features (normal vector and FPFH) and color features (RGB and VLBP) of the point cloud are fused, and a random forest classifier is used to classify the colored laser point cloud. The experimental results show that the proposed algorithm achieves effective point cloud classification for data from different indoor and outdoor scenes, and that the proposed VLBP features improve the accuracy of point cloud classification.
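The centre-versus-neighbour comparison and the sign/magnitude decomposition (LDSMT) described in this abstract can be sketched on a toy 3×3×3 gray-value neighbourhood as follows; this is a simplified illustration under assumed inputs, not the paper's VLBP descriptor, and the `voxel_lbp` function name is invented for the example.

```python
import numpy as np

def voxel_lbp(gray: np.ndarray, center=(1, 1, 1)):
    """Toy VLBP-style code for a 3x3x3 voxel neighbourhood:
    compare each sub-voxel's gray value against the centre voxel and
    split the local differences into a binary sign pattern and a
    magnitude component (the LDSMT idea)."""
    c = gray[center]
    # All 26 neighbour values (centre removed), differenced against the centre
    flat_center = np.ravel_multi_index(center, gray.shape)
    diffs = np.delete(gray.ravel(), flat_center) - c
    sign = (diffs >= 0).astype(np.uint8)  # 26-bit binary pattern component
    magnitude = np.abs(diffs)             # complementary magnitude component
    return sign, magnitude

# Usage: gray values 0..26, centre value is 13
gray = np.arange(27, dtype=float).reshape(3, 3, 3)
sign, mag = voxel_lbp(gray)  # 13 neighbours sit above the centre value
```

In the paper's pipeline such per-voxel codes are aggregated into per-point VLBP features and fused with normals and FPFH descriptors before random forest classification.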
(This article belongs to the Special Issue Advances in Deep Learning Based 3D Scene Understanding from LiDAR)

Article
Critical Points Extraction from Building Façades by Analyzing Gradient Structure Tensor
Remote Sens. 2021, 13(16), 3146; https://0-doi-org.brum.beds.ac.uk/10.3390/rs13163146 - 09 Aug 2021
Abstract
This paper proposes a building façade contouring method for LiDAR (Light Detection and Ranging) scans and photogrammetric point clouds. To this end, we calculate the confidence property at multiple scales for an individual point cloud to measure the point cloud's quality. The confidence property is utilized in the definition of the gradient for each point. We encode each point's gradient in a structure tensor whose eigenvalues reflect the gradient variations in the local neighborhood. The critical points representing the building façade and rooftop contours (where such rooftops exist) are then extracted by jointly analyzing dual thresholds on the gradient and the gradient structure tensor. To meet the requirements of compact representation, the initially obtained critical points are finally downsampled, thereby achieving a reasonable tradeoff between accurate geometry and abstract representation. Various experiments using representative buildings in the Semantic3D benchmark and other ubiquitous point clouds from the ALS DublinCity and Dutch AHN3 datasets, the MLS TerraMobilita/iQmulus 3D urban analysis benchmark, a UAV-based photogrammetric dataset, and GeoSLAM ZEB-HORIZON scans show that the proposed method generates building contours that are accurate, lightweight, and robust to ubiquitous point clouds. Two comparison experiments also demonstrate the superiority of the proposed method in terms of topological correctness, geometric accuracy, and representation compactness.
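The gradient structure tensor analysis at the core of this abstract can be sketched as follows: for one point, the tensor is the sum of outer products of the neighbour gradients, and the spread of its eigenvalues indicates how strongly the gradient varies locally. This is a generic illustration of the structure tensor construction, not the paper's full dual-threshold criterion, and the function name is invented for the example.

```python
import numpy as np

def structure_tensor_eigs(gradients: np.ndarray) -> np.ndarray:
    """Gradient structure tensor of one point's neighbourhood:
    T = sum_i g_i g_i^T over the neighbour gradient vectors g_i
    (rows of `gradients`). The eigenvalue spread of T reflects the
    local gradient variation (large spread -> candidate contour point)."""
    T = gradients.T @ gradients          # 3x3 symmetric structure tensor
    return np.linalg.eigvalsh(T)         # eigenvalues in ascending order

# Usage: neighbour gradients all aligned with the x axis give a
# rank-1 tensor, so only one non-zero eigenvalue appears
g = np.array([[1.0, 0.0, 0.0],
              [2.0, 0.0, 0.0],
              [0.5, 0.0, 0.0]])
eigvals = structure_tensor_eigs(g)
```

A contour detector along these lines would threshold both the gradient magnitudes and an eigenvalue-based measure of the tensor, as the abstract's dual-threshold analysis describes.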
(This article belongs to the Special Issue Advances in Deep Learning Based 3D Scene Understanding from LiDAR)
