3D Urban Modeling by Fusion of Lidar Point Clouds and Optical Imagery

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (15 November 2022) | Viewed by 31106

Special Issue Editor


Dr. Csaba Benedek
Guest Editor
Institute for Computer Science and Control, Hungarian Academy of Sciences, Budapest, H-1111 Kende utca 13/17, Hungary
Interests: lidar–camera fusion; point clouds; reconstruction; scene analysis; registration

Special Issue Information

Dear Colleagues,

The journal Remote Sensing (ISSN 2072-4292) is currently running a Special Issue entitled “3D Urban Modeling by Fusion of Lidar Point Clouds and Optical Imagery”. Dr. Csaba Benedek is serving as Guest Editor for this issue.

Over the last decade, lidar (light detection and ranging) sensors have become indispensable tools in many application fields of 3D urban scene modeling, such as virtual city reconstruction, road quality assessment, traffic analysis and control, cultural heritage documentation, and the evaluation of the energy management of buildings. On the positive side, lidars are able to rapidly collect very accurate 3D data from large areas, and they can be integrated into various platforms, including aerial mapping systems (ALS), static terrestrial scanners (TLS), and mobile laser scanning (MLS) devices. However, with current technologies one must deal with a trade-off between the spatial and temporal resolution of the recorded data: while real-time scanners, such as rotating multi-beam lidar sensors, provide high frame-rate measurement sequences, the individual point cloud frames are notably sparse, making high-level scene analysis particularly challenging. On the other hand, lidar-based aerial or terrestrial mapping systems perform a time-demanding sequential scanning of the environment while the sensor is continuously moving. The resulting high-density point clouds are therefore affected by registration artifacts, and all dynamic scene objects appear as phantom-like, elongated, distorted structures in the recorded scene map. In addition, since laser measurements do not provide color information, the visualization of lidar point clouds or lidar-based surface models requires auxiliary data sources.
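As a minimal illustration of this complementarity, the following sketch colorizes a lidar point cloud by projecting each 3D point into a calibrated camera image and sampling the pixel underneath. It assumes the intrinsic matrix K and the extrinsic rotation R and translation t are already known; estimating these parameters automatically is itself one of the research problems solicited below.

    import numpy as np

    def colorize_points(points, image, K, R, t):
        """points: (N, 3) lidar coordinates; image: (H, W, 3) uint8 RGB."""
        cam = points @ R.T + t                    # world -> camera frame
        in_front = cam[:, 2] > 0                  # keep points ahead of the camera
        uv = cam[in_front] @ K.T                  # pinhole projection
        uv = uv[:, :2] / uv[:, 2:3]               # perspective division
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = image.shape[:2]
        visible = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        colors = np.zeros((len(points), 3), dtype=np.uint8)   # black = unseen
        colors[np.flatnonzero(in_front)[visible]] = image[v[visible], u[visible]]
        return colors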

Fusing lidar data with high-resolution optical images offers various ways to cope with the limitations of purely lidar-based or purely image-based methods. Both early and late fusion approaches, applying geometric, probabilistic, or machine learning techniques, are frequently considered and often lead to significantly improved performance (a toy late-fusion sketch is given after the topic list below). Since the underlying technology is improving rapidly, the development of new, efficient fusion algorithms is both timely and necessary, and it is the focus of this Special Issue. High-quality, unpublished submissions that address one or more of the following topics are solicited:

  • Lidar–camera registration;
  • Virtual city model generation;
  • 3D building reconstruction;
  • Cultural heritage scene reconstruction;
  • Dynamic urban scene analysis, event monitoring, and unusual event detection;
  • Urban traffic analysis and control;
  • Road quality assessment, surveys of road marks and traffic signs, urban green area estimation;
  • Fusion of aerial and terrestrial lidar and image data.
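
A toy illustration of the late-fusion idea mentioned above: per-point class probabilities produced independently by an image-based and a lidar-based classifier are combined by log-linear pooling. The weight w and both probability arrays are illustrative assumptions rather than a specific published method.

    import numpy as np

    def late_fusion(p_image, p_lidar, w=0.5):
        """p_image, p_lidar: (N, C) per-point class probabilities."""
        log_p = w * np.log(p_image + 1e-12) + (1 - w) * np.log(p_lidar + 1e-12)
        fused = np.exp(log_p)
        return fused / fused.sum(axis=1, keepdims=True)   # renormalize per point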

Dr. Csaba Benedek
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D urban scene modeling
  • Lidar
  • Point clouds
  • Optical imagery
  • Image fusion

Published Papers (6 papers)


Research


31 pages, 20395 KiB  
Article
3D Instance Segmentation and Object Detection Framework Based on the Fusion of Lidar Remote Sensing and Optical Image Sensing
by Ling Bai, Yinguo Li, Ming Cen and Fangchao Hu
Remote Sens. 2021, 13(16), 3288; https://doi.org/10.3390/rs13163288 - 19 Aug 2021
Cited by 14 | Viewed by 3404
Abstract
Since single-sensor and high-density point cloud data processing have certain direct processing limitations in urban traffic scenarios, this paper proposes a 3D instance segmentation and object detection framework for urban transportation scenes based on the fusion of Lidar remote sensing technology and optical image sensing technology. Firstly, multi-source and multi-mode data pre-fusion and alignment of Lidar and camera sensor data are carried out. Then, a unique and innovative network of stereo regional proposal selective-search-driven DAGNN is constructed. Next, using multi-dimensional information interaction, three-dimensional point clouds with multiple features and distinctive concave-convex geometric characteristics are instance over-segmented and clustered by hypervoxel storage in an octree with growing voxels. Finally, the positioning and semantic information of the detected 3D objects are visualized by multi-dimensional mapping of the bounding box. The experimental results validate the effectiveness of the proposed framework, with good feedback for small objects, object stacking, and object occlusion. It can serve as a remedy for or alternative to a single sensor, and it provides an essential theoretical and application basis for remote sensing, autonomous driving, environment modeling, autonomous navigation, and path planning under future V2X intelligent network space–ground integration.
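The paper's hypervoxel and octree machinery is not reproduced here, but the basic idea of voxel-based instance over-segmentation can be sketched with generic tools: occupy a regular voxel grid with the point cloud and label its connected components as object candidates.

    import numpy as np
    from scipy import ndimage

    def voxel_instances(points, voxel_size=0.3):
        """Label face-connected clusters of occupied voxels as instances."""
        origin = points.min(axis=0)
        idx = np.floor((points - origin) / voxel_size).astype(int)
        grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
        grid[tuple(idx.T)] = True                      # mark occupied voxels
        labels, n_instances = ndimage.label(grid)      # 6-connectivity by default
        return labels[tuple(idx.T)], n_instances       # per-point instance ids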

25 pages, 9596 KiB  
Article
Effective Selection of Variable Point Neighbourhood for Feature Point Extraction from Aerial Building Point Cloud Data
by Emon Kumar Dey, Fayez Tarsha Kurdi, Mohammad Awrangjeb and Bela Stantic
Remote Sens. 2021, 13(8), 1520; https://doi.org/10.3390/rs13081520 - 15 Apr 2021
Cited by 17 | Viewed by 3082
Abstract
Existing approaches that extract buildings from point cloud data do not select the appropriate neighbourhood for the estimation of normals at individual points, even though their success depends on a correct estimation of the normal vector. In most cases, a fixed neighbourhood is selected without considering the geometric structure of the object and the distribution of the input point cloud. Thus, considering the object structure and the heterogeneous distribution of the point cloud, this paper proposes a new, effective approach for selecting a minimal neighbourhood, which can vary for each input point. For each point, a minimal number of neighbouring points is iteratively selected. At each iteration, based on the standard deviation from a 3D line fitted to the selected points, a decision is made adaptively about the neighbourhood. The selected minimal neighbouring points make the calculation of the normal vector accurate. The direction of the normal vector is then used to calculate the inside fold feature points. In addition, the Euclidean distance from a point to the calculated mean of its neighbouring points is used to make a decision about boundary points. In terms of accuracy, the experimental results confirm the competitive performance of the proposed neighbourhood selection approach compared with state-of-the-art methods. Based on our generated ground truth data, the proposed fold and boundary point extraction techniques show F1-scores above 90%.
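A hedged sketch of this adaptive strategy: the neighbourhood of a point is grown step by step, a 3D line is fitted to the selected neighbours via SVD, and growth stops once the residual standard deviation exceeds a tolerance. The step size and threshold are illustrative assumptions, not the authors' exact parameters.

    import numpy as np
    from scipy.spatial import cKDTree

    def line_fit_std(pts):
        """Std of residual distances from pts to their best-fit 3D line."""
        centered = pts - pts.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        direction = vt[0]                              # principal direction
        residuals = centered - np.outer(centered @ direction, direction)
        return np.linalg.norm(residuals, axis=1).std()

    def adaptive_neighbourhood(cloud, idx, k_min=5, k_max=50, step=5, tol=0.05):
        """Grow the neighbourhood of point idx while it remains line-like."""
        tree = cKDTree(cloud)
        _, nn = tree.query(cloud[idx], k=k_min)
        for k in range(k_min + step, k_max + 1, step):
            _, candidate = tree.query(cloud[idx], k=k)
            if line_fit_std(cloud[candidate]) > tol:
                break                                  # linearity lost: stop
            nn = candidate
        return nn                                      # minimal neighbourhood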

22 pages, 29493 KiB  
Article
IM2ELEVATION: Building Height Estimation from Single-View Aerial Imagery
by Chao-Jung Liu, Vladimir A. Krylov, Paul Kane, Geraldine Kavanagh and Rozenn Dahyot
Remote Sens. 2020, 12(17), 2719; https://doi.org/10.3390/rs12172719 - 22 Aug 2020
Cited by 46 | Viewed by 10577
Abstract
Estimation of the Digital Surface Model (DSM) and building heights from single-view aerial imagery is a challenging, inherently ill-posed problem that we address in this paper by resorting to machine learning. We propose an end-to-end trainable convolutional-deconvolutional deep neural network architecture that enables learning a mapping from a single aerial image to a DSM for the analysis of urban scenes. We perform multisensor fusion of aerial optical and aerial light detection and ranging (Lidar) data to prepare the training data for our pipeline. The dataset quality is key to successful estimation performance; typically, a substantial amount of misregistration artifacts are present due to georeferencing/projection errors, sensor calibration inaccuracies, and scene changes between acquisitions. To overcome these issues, we propose a registration procedure that improves Lidar and optical data alignment based on Mutual Information, followed by a Hough transform-based validation step to adjust misregistered image patches. We validate our building height estimation model on a high-resolution dataset captured over central Dublin, Ireland: a Lidar point cloud from 2015 and optical aerial images from 2017. These data allow us to validate the proposed registration procedure and perform 3D model reconstruction from single-view aerial imagery. We also report state-of-the-art performance of our proposed architecture on several popular DSM estimation datasets.
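The Mutual Information criterion at the heart of the registration step can be sketched as follows: MI is computed from the joint histogram of a lidar-derived image (e.g., rendered depth or intensity) and the optical image, and a brute-force integer-pixel search keeps the shift that maximizes it. The exhaustive search is a deliberately simplified stand-in for the full registration and patch-validation pipeline.

    import numpy as np

    def mutual_information(a, b, bins=32):
        """MI between two equally sized grayscale images."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                                   # avoid log(0)
        return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

    def best_shift(lidar_img, optical_img, max_shift=10):
        """Integer-pixel (dy, dx) offset of optical_img maximizing MI."""
        best, arg = -np.inf, (0, 0)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(optical_img, (dy, dx), axis=(0, 1))
                mi = mutual_information(lidar_img, shifted)
                if mi > best:
                    best, arg = mi, (dy, dx)
        return arg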

25 pages, 7178 KiB  
Article
Improving the Accuracy of Automatic Reconstruction of 3D Complex Buildings Models from Airborne Lidar Point Clouds
by Marek Kulawiak and Zbigniew Lubniewski
Remote Sens. 2020, 12(10), 1643; https://doi.org/10.3390/rs12101643 - 20 May 2020
Cited by 14 | Viewed by 3449
Abstract
Due to the high requirements of a variety of 3D spatial data applications with respect to data amount and quality, automated, efficient, and reliable data acquisition and preprocessing methods are needed. The use of photogrammetry techniques—as well as light detection and ranging (LiDAR) automatic scanners—is among the attractive solutions. However, measurement data come in the form of unorganized point clouds, usually requiring transformation to higher-order 3D models based on polygons or polyhedral surfaces, which is not a trivial process. The study presents a newly developed algorithm for correcting 3D point cloud data from airborne LiDAR surveys of regular 3D buildings. The proposed approach applies a sequence of operations resulting in 3D rasterization, i.e., the creation and processing of a 3D regular grid representation of an object, prior to applying a regular Poisson surface reconstruction method. In order to verify the accuracy and quality of the reconstructed objects, high-quality ground truth models were used for quantitative comparison with the obtained 3D models, in the form of meshes constructed from photogrammetric measurements and built manually from the buildings' architectural plans. The presented results show that applying the proposed algorithm positively influences the quality of the results, and it can be used in combination with existing surface reconstruction methods to generate more detailed 3D models from LiDAR scanning.
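A simplified sketch of the rasterize-then-reconstruct idea, assuming Open3D as an off-the-shelf Poisson implementation: the cloud is quantized to a regular voxel grid and the occupied voxel centres are meshed. The correction operations the authors apply to the 3D grid are not reproduced here.

    import numpy as np
    import open3d as o3d

    def voxelize(points, voxel_size=0.5):
        """Return the centres of the occupied voxels of a regular 3D grid."""
        origin = points.min(axis=0)
        idx = np.floor((points - origin) / voxel_size).astype(int)
        occupied = np.unique(idx, axis=0)              # one entry per voxel
        return origin + (occupied + 0.5) * voxel_size

    def poisson_mesh(points, depth=8):
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(voxelize(points))
        pcd.estimate_normals()                         # Poisson needs normals
        pcd.orient_normals_consistent_tangent_plane(30)
        mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=depth)
        return mesh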

22 pages, 16672 KiB  
Article
On-the-Fly Camera and Lidar Calibration
by Balázs Nagy and Csaba Benedek
Remote Sens. 2020, 12(7), 1137; https://doi.org/10.3390/rs12071137 - 02 Apr 2020
Cited by 12 | Viewed by 5315
Abstract
Sensor fusion is one of the main challenges in self-driving and robotics applications. In this paper, we propose an automatic, online, and target-less camera-Lidar extrinsic calibration approach. We adopt a structure from motion (SfM) method to generate 3D point clouds from the camera data which can be matched to the Lidar point clouds; thus, we address the extrinsic calibration problem as a registration task in the 3D domain. The core step of the approach is a two-stage transformation estimation: first, we introduce an object-level coarse alignment algorithm operating in the Hough space to transform the SfM-based and the Lidar point clouds into a common coordinate system. Thereafter, we apply a control-point-based nonrigid transformation refinement step to register the point clouds more precisely. Finally, we calculate the correspondences between the 3D Lidar points and the pixels in the 2D camera domain. We evaluated the method in various real-life traffic scenarios in Budapest, Hungary. The results show that our proposed extrinsic calibration approach is able to provide accurate and robust parameter settings on-the-fly.
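The underlying estimation problem can be illustrated in isolation: given matched 3D point pairs from the SfM cloud and the Lidar cloud, a rigid transform between the two follows in closed form from the SVD-based Kabsch method. This generic solver is only a stand-in; the paper's Hough-space coarse alignment and control-point-based nonrigid refinement are not reproduced.

    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares R, t such that dst ~ src @ R.T + t (Kabsch)."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        u, _, vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against reflection
        R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
        return R, t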

Review


29 pages, 15239 KiB  
Review
Detailed Three-Dimensional Building Façade Reconstruction: A Review on Applications, Data and Technologies
by Anna Klimkowska, Stefano Cavazzi, Richard Leach and Stephen Grebby
Remote Sens. 2022, 14(11), 2579; https://doi.org/10.3390/rs14112579 - 27 May 2022
Cited by 11 | Viewed by 3638
Abstract
Urban environments are regions of complex and diverse architecture. Their reconstruction and representation as three-dimensional city models have attracted the attention of many researchers and industry specialists, as they increasingly recognise the potential for new applications requiring detailed building models. Nevertheless, despite being investigated for a few decades, the comprehensive reconstruction of buildings remains a challenging task. While there is a considerable body of literature on this topic, including several systematic reviews summarising ways of acquiring and reconstructing coarse building structures, there is a paucity of in-depth research on the detection and reconstruction of façade openings (i.e., windows and doors). In this review, we provide an overview of emerging applications, data acquisition, and processing techniques for building façade reconstruction, emphasising building opening detection. The use of traditional technologies from terrestrial and aerial platforms, along with emerging approaches such as mobile phones and volunteered geographic information, is discussed. The current status of approaches for opening detection is then examined in detail, separately for three-dimensional and two-dimensional data. Based on the review, it is clear that a key limitation associated with façade reconstruction is process automation and the need for user intervention. Another limitation is the incompleteness of the data due to occlusion, which can be reduced by data fusion. In addition, the lack of diverse benchmark datasets and the need for further investigation into deep-learning methods for façade opening extraction present crucial opportunities for future research.
