
Special Issue "LiDAR-Based Creation of Virtual Cities"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (10 April 2020).

Special Issue Editors

Prof. Dr. Jonathan Li
Guest Editor
Geospatial Sensing and Data Intelligence Lab, Faculty of Environment, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada
Interests: LiDAR remote sensing; point cloud understanding; deep learning; 3D vision; HD maps for smart cities and autonomous vehicles
Dr. Liqiang Zhang
Guest Editor
State Key Laboratory of Remote Sensing Science, Faculty of Geographical Science, Beijing Normal University, Beijing 100875, China
Interests: land-use change; land change modeling; spatial analysis; deep learning; climate change; sustainable development; big remote sensing data
Dr. Jiju Poovvancheri
Guest Editor
Department of Math and Computing Science, Saint Mary’s University, Halifax, NS B3P 2M6, Canada
Interests: computer graphics; 3D computer vision; geometric deep learning; related applications including motion capture for VR/AR and LiDAR-based urban modeling
Prof. Dr. Michael Chapman
Guest Editor
Department of Civil Engineering, Ryerson University, Toronto, ON M5B 2K3, Canada
Interests: algorithms and processing methodologies for airborne sensors using GPS/INS; geometric processing of digital imagery in industrial environments; terrestrial imaging systems for transportation infrastructure mapping; algorithms and processing strategies for bio-metrology applications; algorithms and processing methodologies for LiDAR segmentation, recognition, and modeling
Prof. Dr. Haiyan Guan
Guest Editor
School of Remote Sensing and Geomatics Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: airborne/mobile laser scanning data processing; remote sensing image data understanding; multispectral/hyperspectral point clouds for semantic interpretation of wetlands, cultivated areas, and vegetated areas
Dr. Dong Chen
Guest Editor
College of Civil Engineering, Nanjing Forestry University, Nanjing 210037, China
Interests: image- and LiDAR-based segmentation and reconstruction; full-waveform LiDAR data processing; related remote sensing applications in the field of forest ecosystems

Special Issue Information

Dear Colleagues,

Virtual cities, that is, digital models of cities at scale, are increasingly used in numerous applications such as 3D mapping and navigation, augmented and virtual reality, urban simulation, and urban planning. Of late, light detection and ranging (LiDAR) has become one of the primary means of capturing the physical world, which is then converted into digital models for use in different applications. Modelling from LiDAR data includes point cloud processing (registration, filtering, and segmentation), scene interpretation and/or shape recognition, reconstruction of the scene, and an optional simplification to make the 3D model web- and/or mobile-compatible. Despite two decades of research, each stage of the urban modelling pipeline is still far from satisfactory and hence calls for further research and investigation. Current challenges include detailed modelling from imperfect (occluded and noisy) scans, free-form building modelling, lightweight modelling for web/mobile compatibility, flexible modelling that generates multiple levels of detail (LoD) on the fly, and automated reconstruction from large-scale raw point clouds, to name a few. The interdisciplinary nature of 3D urban modelling calls for reciprocity and collaboration among urban modelling researchers from the photogrammetry and remote sensing, computer graphics, and computer vision communities. This Special Issue is dedicated to disseminating recent developments in 3D urban modelling research, especially with LiDAR as the input source.
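The pipeline stages above can be made concrete with a small example. Ground filtering, typically the first point cloud processing step, can be approximated by comparing each point's height to the lowest point in its grid cell. This is a deliberately minimal sketch under simplifying assumptions (flat-ish terrain per cell); production filters such as progressive TIN densification are far more robust, and the function name and thresholds here are illustrative only.

```python
import numpy as np

def filter_ground(points, cell=1.0, height_tol=0.3):
    """Crude grid-based ground filter: a point is labelled ground if it lies
    within height_tol of the lowest point in its XY grid cell."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    keys = [tuple(k) for k in ij]
    # lowest z observed in each occupied cell
    zmin = {}
    for k, z in zip(keys, points[:, 2]):
        if k not in zmin or z < zmin[k]:
            zmin[k] = z
    return np.array([points[n, 2] - zmin[keys[n]] <= height_tol
                     for n in range(len(points))])
```

For instance, points on a flat surface pass the test while a rooftop point sharing the same cell is rejected, since its height exceeds the cell minimum by more than the tolerance.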

This Special Issue covers a range of topics related to LiDAR-based 3D modelling pipelines. The list of suggested topics includes, but is not limited to, the following:

(1) LiDAR processing:

  • LiDAR filtering;
  • fusion of LiDAR with images;
  • fusion of LiDAR and DSM;
  • aerial and mobile LiDAR fusion;
  • LiDAR scan consolidation;
  • semantic alignment of city-scale LiDAR

(2) Scene interpretation:

  • LiDAR segmentation;
  • object classification and labelling;
  • semantic decomposition of urban scenes

(3) Scene reconstruction:

  • large-scale city reconstruction;
  • detail synthesis for urban scenes;
  • indoor reconstruction

(4) Urban object modelling:

  • building reconstruction;
  • tree extraction and modelling;
  • road surface extraction;
  • bridge modelling;
  • lamp pole reconstruction;
  • power line extraction

(5) Scene representation and visualization:

  • efficient data structures;
  • polyhedral meshes;
  • procedural models;
  • constructive solid geometry;
  • rendering and visualization of urban scenes

(6) Intelligent applications:

  • urban city change detection;
  • 2D/3D mapping;
  • augmented-/virtual-reality;
  • numerical simulation;
  • urban planning;
  • cultural heritage archival;
  • autonomous driving

This Special Issue seeks high-quality research and application submissions. Submitted papers should present original contributions and/or innovative applications. Reviews relevant to these topics are also welcome. Submissions based on previously published or submitted conference papers may be considered, provided they are considerably improved and extended.

Prof. Dr. Jonathan Li
Prof. Dr. Liqiang Zhang
Dr. Jiju Poovvancheri
Prof. Dr. Michael Chapman
Prof. Dr. Haiyan Guan
Dr. Dong Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • LiDAR
  • point cloud
  • 3D modelling
  • urban scene
  • 3D reconstruction

Published Papers (7 papers)


Research

Article
Semantic-Based Building Extraction from LiDAR Point Clouds Using Contexts and Optimization in Complex Environment
Sensors 2020, 20(12), 3386; https://doi.org/10.3390/s20123386 - 15 Jun 2020
Cited by 8 | Viewed by 1056
Abstract
The extraction of buildings has been an essential part of LiDAR point cloud processing in recent years. However, it is still challenging to extract buildings from huge amounts of point cloud data due to complicated and incomplete structures, occlusions, and local similarities between different categories in a complex environment. Taking urban and campus scenes as examples, this paper presents a versatile, hierarchical, semantic-based method for building extraction from LiDAR point clouds. The proposed method first performs a series of preprocessing operations, such as removing ground points and establishing super-points to use as primitives for subsequent processing, and then semantically labels the raw LiDAR data. In the feature engineering process, since the purpose of this article is to extract buildings, we select super-point features that best describe buildings for the subsequent classification. Because a portion of the labeling results are inaccurate in incomplete or overly complex scenes, a Markov Random Field (MRF) optimization model is constructed for postprocessing and refinement of the segmentation results. Finally, the buildings are extracted from the labeled points. Experimental verification was performed on three datasets in different scenes, and our results were compared with state-of-the-art methods. These evaluations demonstrate the feasibility and effectiveness of the proposed method for extracting buildings from LiDAR point clouds in multiple environments.
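For intuition only, the effect of the MRF postprocessing described in this abstract can be approximated by a much simpler neighborhood smoothing, where each point adopts the majority label among its neighbours. This is a hedged stand-in, not the authors' optimization model; the radius, iteration count, and brute-force neighbour search are all illustrative.

```python
import numpy as np

def refine_labels(points, labels, radius=1.0, iters=2):
    """Toy stand-in for MRF label smoothing: each point takes the majority
    label among its neighbours within `radius` (brute-force distances)."""
    pts = np.asarray(points, float)
    lab = np.asarray(labels).copy()
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    for _ in range(iters):
        new = lab.copy()
        for i in range(len(pts)):
            nb = lab[d2[i] <= radius ** 2]          # neighbourhood, self included
            vals, counts = np.unique(nb, return_counts=True)
            new[i] = vals[np.argmax(counts)]
        lab = new
    return lab
```

A stray "non-building" label inside a dense cluster of "building" points flips to the cluster label, while an isolated point keeps its own.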
(This article belongs to the Special Issue LiDAR-Based Creation of Virtual Cities)

Article
Automated Method of Extracting Urban Roads Based on Region Growing from Mobile Laser Scanning Data
Sensors 2019, 19(23), 5262; https://doi.org/10.3390/s19235262 - 29 Nov 2019
Viewed by 933
Abstract
With the rapid development of three-dimensional point cloud acquisition from mobile laser scanning systems, the extraction of urban roads has become a major research focus. Although it has great potential in digital image processing, the extraction of roads using the region growing approach is still in its infancy. We propose an automated method of urban road extraction based on region growing. First, an initial seed is chosen under constraints on Gaussian curvature, height, and the number of neighboring points, which ensures that the initial seed is located on a road. The growing condition is then determined by an angle threshold on the tangent plane of the seed point, and new seeds are selected based on the identified road points and their curvature. The method also includes a strategy for dealing with multiple discontinuous roads in a dataset. The results show that the method not only achieves high accuracy in urban road extraction but is also stable and robust.
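The growing step described above can be sketched in a few lines: starting from a seed, accept any neighbour whose surface normal stays within an angle threshold of the current seed's normal, and keep growing from accepted points. This is a minimal illustration; the paper's seed-selection constraints (Gaussian curvature, height, neighbour count) and its handling of discontinuous roads are omitted, and all names and thresholds are assumptions of this sketch.

```python
import numpy as np
from collections import deque

def grow_region(points, normals, seed, radius=0.5, angle_deg=10.0):
    """Minimal region growing: breadth-first expansion from `seed`, accepting
    neighbours whose normals are within `angle_deg` of the growing point's."""
    points = np.asarray(points, float)
    normals = np.asarray(normals, float)
    cos_t = np.cos(np.radians(angle_deg))
    in_region = np.zeros(len(points), bool)
    in_region[seed] = True
    queue = deque([seed])
    while queue:
        s = queue.popleft()
        d = np.linalg.norm(points - points[s], axis=1)
        for j in np.where((d <= radius) & ~in_region)[0]:
            if abs(normals[j] @ normals[s]) >= cos_t:  # near-parallel normals
                in_region[j] = True
                queue.append(j)
    return np.where(in_region)[0]
```

Seeding on a flat road patch, the region spreads along co-planar points but stops at a nearby point whose normal is vertical to the road's (e.g. a curb face or wall).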

Article
Asymmetric Encoder-Decoder Structured FCN Based LiDAR to Color Image Generation
Sensors 2019, 19(21), 4818; https://doi.org/10.3390/s19214818 - 5 Nov 2019
Cited by 4 | Viewed by 1589
Abstract
In this paper, we propose a method of generating a color image from light detection and ranging (LiDAR) 3D reflection intensity. The proposed method is composed of two steps: projection of LiDAR 3D reflection intensity into 2D intensity, and color image generation from the projected intensity by using a fully convolutional network (FCN). The color image must be generated from a very sparse projected intensity image. For this reason, the FCN is designed to have an asymmetric network structure, i.e., the layer depth of the decoder in the FCN is deeper than that of the encoder. The well-known KITTI dataset, covering various scenarios, is used for FCN training and performance evaluation. Performance of the asymmetric network structure is empirically analyzed for various depth combinations of the encoder and decoder. Through simulations, it is shown that the proposed method generates images of fairly good visual quality while maintaining almost the same color as the ground truth image. Moreover, the proposed FCN has much higher performance than conventional interpolation methods and the generative adversarial network-based Pix2Pix. One interesting result is that the proposed FCN produces shadow-free, daylight color images. This is because the LiDAR sensor data are produced by light reflection and are therefore not affected by sunlight and shadow.
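The first step described in this abstract, projecting 3D LiDAR returns into a sparse 2D intensity image, can be sketched with a simple pinhole camera model. Note the intrinsic matrix here is a hypothetical stand-in for the actual KITTI calibration the paper uses, and the function is an illustrative assumption, not the authors' code.

```python
import numpy as np

def project_to_image(points, intensity, K, shape):
    """Project 3-D points (camera coordinates, z forward) into a sparse 2-D
    intensity image with pinhole intrinsics K. Pixels receiving no point
    stay zero -- this sparsity is what the decoder must fill in."""
    front = points[:, 2] > 0                 # keep points in front of the camera
    pts, inten = points[front], intensity[front]
    uvw = K @ pts.T                          # homogeneous image coordinates
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    ok = (u >= 0) & (u < shape[1]) & (v >= 0) & (v < shape[0])
    img = np.zeros(shape, np.float32)
    img[v[ok], u[ok]] = inten[ok]
    return img
```

A point on the optical axis lands at the principal point; points behind the camera or outside the image bounds are dropped, leaving the zero-filled gaps the asymmetric FCN is trained to complete.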

Article
Two-Layered Graph-Cuts-Based Classification of LiDAR Data in Urban Areas
Sensors 2019, 19(21), 4685; https://doi.org/10.3390/s19214685 - 28 Oct 2019
Cited by 1 | Viewed by 1114
Abstract
Classifying LiDAR (Light Detection and Ranging) point clouds in the urban environment is a challenging task. Due to the complicated structures of urban objects, it is difficult to find suitable features and classifiers to efficiently categorize the points. A two-layered graph-cuts-based classification framework is proposed in this study. The hierarchical framework includes a bottom layer that defines features and classifies the point cloud at the point level, and a top layer that defines features and classifies the point cloud at the object level. A novel adaptive local modification method is employed to model the interactions between these two layers. The iterative graph cuts algorithm alternates between the bottom and top layers to optimize the classification. In this way, the framework benefits from the integration of point features and object features to improve the classification. The experiments demonstrate that the proposed method is capable of producing classification results with high accuracy and efficiency.
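To make the optimization target concrete: graph-cuts classification minimizes an energy that combines a per-point data term with a smoothness term penalizing label disagreement between neighbours. The snippet below only evaluates such an energy for a given labeling; the graph construction, the min-cut solver, and the paper's two-layer interaction model are not reproduced, and all names are illustrative.

```python
def labeling_energy(unary, labels, edges, lam=1.0):
    """Energy of a labeling as used by graph-cuts methods: sum of per-point
    data costs plus a Potts penalty for each neighbouring pair that disagrees."""
    data = sum(unary[i][l] for i, l in enumerate(labels))          # data term
    smooth = sum(labels[i] != labels[j] for i, j in edges)         # Potts term
    return data + lam * smooth
```

The trade-off is visible directly: a labeling that fits the data but cuts an edge, and a smooth labeling that misfits one point, can have equal energy at `lam=1`, while raising `lam` favors the smoother solution.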

Article
Hierarchical Classification of Urban ALS Data by Using Geometry and Intensity Information
Sensors 2019, 19(20), 4583; https://doi.org/10.3390/s19204583 - 21 Oct 2019
Cited by 4 | Viewed by 983
Abstract
Airborne laser scanning (ALS) can acquire both geometry and intensity information of geo-objects, which is important for mapping large-scale three-dimensional (3D) urban environments. However, the intensity recorded by ALS varies with flight height and atmospheric attenuation, which decreases the robustness of a trained supervised classifier. This paper proposes a hierarchical classification method that uses the geometry and intensity information of urban ALS data separately: supervised learning for the stable geometry information and unsupervised learning for the fluctuating intensity information. The experimental results show that the proposed method utilizes the intensity information effectively, in three respects: (1) it improves the accuracy of the classification result by using intensity; (2) when the ALS data to be classified are acquired under the same conditions as the training data, its performance is as good as the supervised learning method; and (3) when the ALS data to be classified are acquired under different conditions from the training data, its performance is better than the supervised learning method. Therefore, the classification model derived from the proposed method can be transferred to other ALS data whose intensity is inconsistent with the training data. Furthermore, the proposed method can contribute to the hierarchical use of other ALS information, such as multi-spectral information.
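The motivation for the unsupervised half of this design can be illustrated with a toy 1-D k-means over intensity values: because the clustering adapts to each flight's own intensity range rather than to absolute values learned from training data, it is insensitive to the flight-height and attenuation shifts the abstract describes. This sketch is an assumption for illustration; the paper's actual clustering and its supervised geometry classifier are not reproduced here.

```python
import numpy as np

def kmeans_1d(x, k=2, iters=20):
    """Tiny 1-D k-means: cluster intensity values without labels, so the
    split adapts to whatever range this particular flight produced."""
    centers = np.linspace(x.min(), x.max(), k)   # spread initial centers
    assign = np.zeros(len(x), int)
    for _ in range(iters):
        assign = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = x[assign == c].mean()
    return assign, centers
```

Scaling every intensity by a constant (as a different flight height roughly would) leaves the cluster assignments unchanged, which is exactly the robustness a fixed supervised threshold lacks.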

Article
Automatic Indoor Reconstruction from Point Clouds in Multi-room Environments with Curved Walls
Sensors 2019, 19(17), 3798; https://doi.org/10.3390/s19173798 - 2 Sep 2019
Cited by 14 | Viewed by 1337
Abstract
Recent developments in laser scanning systems have inspired substantial interest in indoor modeling, and semantically rich indoor models are required in many fields. Despite the rapid development of 3D indoor reconstruction methods for building interiors from point clouds, the indoor reconstruction of multi-room environments with curved walls remains unresolved. This study proposes a novel straight and curved line tracking method, followed by a straight-line test with robust parameters and a novel straight-line regularization achieved using constrained least squares. The method constructs a cell complex with both straight and curved lines, and the indoor reconstruction is transformed into a labeling problem that is solved based on a novel Markov Random Field formulation; the optimal labeling is found by minimizing an energy function with a minimum graph cut. Detailed experiments indicate that the proposed method is well suited for 3D indoor modeling of multi-room indoor environments with curved walls.
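The constrained least-squares line regularization mentioned in this abstract can be illustrated in its simplest form: fix a line's direction (say, to a dominant wall orientation) and solve only for its offset, which reduces to averaging the signed distances of the points along the fixed normal. This is a single-line sketch under that assumption, not the paper's full joint formulation over many lines.

```python
import numpy as np

def fit_line_fixed_direction(points, theta):
    """Fit a 2-D line n . p = d whose direction is fixed at angle `theta`;
    with the unit normal n fixed, the least-squares offset d is the mean
    of the points' signed distances along n."""
    n = np.array([-np.sin(theta), np.cos(theta)])  # unit normal to the line
    d = points @ n                                  # signed distances
    return n, d.mean()
```

For wall points scattered about a horizontal line, fixing theta = 0 recovers the line's height exactly, regardless of how the points are spread along it.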

Article
ARTS, an AR Tourism System, for the Integration of 3D Scanning and Smartphone AR in Cultural Heritage Tourism and Pedagogy
Sensors 2019, 19(17), 3725; https://doi.org/10.3390/s19173725 - 28 Aug 2019
Cited by 3 | Viewed by 1715
Abstract
Interactions between cultural heritage, tourism, and pedagogy deserve investigation in an as-built environment under a macro- or micro-perspective of the urban fabric. The heritage site of Shih Yih Hall, Lukang, was explored, and an Augmented Reality Tourism System (ARTS) was developed on a smartphone-based platform for a novel application scenario using 3D scans converted from a point cloud to a portable interaction size. ARTS comprises a real-time environment viewing module, a space-switching module, and an Augmented Reality (AR) guide graphic module. The system facilitates scenario initiation, projection and superimposition, annotation, and interface customization, with software tools developed using ARKit® on the iPhone XS Max®. The three-way interaction between urban fabric, cultural heritage tourism, and pedagogy was made possible through background block-outs and an additive or selective display. The full-scale experience offered by the smartphone app makes it feasible to relate the cultural dependence of the urban fabric to tourism. The high fidelity of the 3D scans and AR scenes acts as a pedagogical aid for students and tourists. A Post-Study System Usability Questionnaire (PSSUQ) evaluation verified the usefulness of ARTS.
