LiDAR and Time-of-flight Imaging

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Optics and Lasers".

Deadline for manuscript submissions: closed (30 August 2019) | Viewed by 126815

Special Issue Editors


Guest Editor
1. Centre for Sensors, Instrumentation and Systems (CD6), Universitat Politècnica de Catalunya (UPC), Rambla Sant Nebridi 10, E08222 Terrassa, Barcelona, Spain
2. Beamagine S.L., C/Bellesguard 16 E08755 Castellbisbal, Barcelona, Spain
Interests: photonics; optics; sensors; imaging; perception; machine vision; optical engineering; optical metrology; ladar; lidar; time of flight; biomedical sensors; biophotonics; data fusion; innovation; spin-off companies; technology transfer; knowledge exchange

Guest Editor
Director of Visual Enhancement and Cognitive Systems, Veoneer, Inc., SE-103 02 Stockholm, Sweden
Interests: lidar and gated imaging; imaging performance in inclement weather; functional specification; cognitive systems

Special Issue Information

Dear Colleagues,

Time-of-flight and lidar imaging are currently among the main drivers of applied development in optomechanics and electronics. There is a compelling need for robust and cost-effective lidar sensors in the autonomous vehicle industry, and in particular in automotive applications. This has resulted in a number of different radiometric modelling approaches and in intense activity in the development of novel components, including sources, detectors, and optics. Further, several sensing strategies are being proposed beyond the classical pulsed and modulated approaches; these may involve sophisticated components such as optical phased arrays or MEMS scanners to solve a problem that pushes the state of the art of current technology. Advances in lidar, however, also require progress in the behavior of lidar imaging units in inclement weather, on the software side (for instance, strategies for managing dense point clouds in real time), and in miniaturization for mobile phone applications. Progress beyond the state of the art in such a range of fields of applied science is required to bring lidar imagers closer to becoming the next step in optical imaging and to changing our perception of the world.

Prof. Dr. Santiago Royo Royo
Dr. Jan-Erik Kallhammer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Lidar imaging
  • Sources and detectors for lidar imagers
  • Novel measurement schemes
  • Point cloud sensing and management

Published Papers (18 papers)


Research

37 pages, 3024 KiB  
Article
An Overview of Lidar Imaging Systems for Autonomous Vehicles
by Santiago Royo and Maria Ballesta-Garcia
Appl. Sci. 2019, 9(19), 4093; https://doi.org/10.3390/app9194093 - 30 Sep 2019
Cited by 230 | Viewed by 41246
Abstract
Lidar imaging systems are one of the hottest topics in the optronics industry. The need to sense the surroundings of every autonomous vehicle has pushed forward a race dedicated to deciding the final solution to be implemented. However, the diversity of state-of-the-art approaches to the solution brings a large uncertainty on the decision of the dominant final solution. Furthermore, the performance data of each approach often arise from different manufacturers and developers, which usually have some interest in the dispute. Within this paper, we intend to overcome the situation by providing an introductory, neutral overview of the technology linked to lidar imaging systems for autonomous vehicles, and its current state of development. We start with the main single-point measurement principles utilized, which then are combined with different imaging strategies, also described in the paper. An overview of the features of the light sources and photodetectors specific to lidar imaging systems most frequently used in practice is also presented. Finally, a brief section on pending issues for lidar development in autonomous vehicles has been included, in order to present some of the problems which still need to be solved before implementation may be considered as final. The reader is provided with a detailed bibliography containing both relevant books and state-of-the-art papers for further progress in the subject. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)
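
The single-point pulsed measurement principle the overview starts from reduces to one line of arithmetic: range is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name is illustrative, not from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def pulse_range(round_trip_time_s: float) -> float:
    """Range from a pulsed time-of-flight measurement: the pulse travels
    to the target and back, so the one-way range is half the path."""
    return C * round_trip_time_s / 2.0

# A round trip of about 667 ns corresponds to roughly 100 m of range.
r = pulse_range(667e-9)
```

The same arithmetic explains the ranging ambiguity limit: the pulse repetition period bounds the maximum unambiguous range.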

24 pages, 7486 KiB  
Article
Real-Time RGB-D Simultaneous Localization and Mapping Guided by Terrestrial LiDAR Point Cloud for Indoor 3-D Reconstruction and Camera Pose Estimation
by Xujie Kang, Jing Li, Xiangtao Fan and Wenhui Wan
Appl. Sci. 2019, 9(16), 3264; https://doi.org/10.3390/app9163264 - 09 Aug 2019
Cited by 14 | Viewed by 5552
Abstract
In recent years, low-cost and lightweight RGB and depth (RGB-D) sensors, such as Microsoft Kinect, have made available rich image and depth data, making them very popular in the field of simultaneous localization and mapping (SLAM), which has been increasingly used in robotics, self-driving vehicles, and augmented reality. RGB-D SLAM constructs 3D environmental models of natural landscapes while simultaneously estimating camera poses. However, in highly variable illumination and motion blur environments, long-distance tracking can result in large cumulative errors and scale shifts. To address this problem in actual applications, in this study, we propose a novel multithreaded RGB-D SLAM framework that incorporates a highly accurate prior terrestrial Light Detection and Ranging (LiDAR) point cloud, which can mitigate cumulative errors and improve the system's robustness in large-scale and challenging scenarios. First, we employed deep learning to achieve automatic system initialization and motion recovery when tracking is lost. Next, we used a terrestrial LiDAR point cloud to obtain prior data of the landscape, and then applied the point-to-surface iterative closest point (ICP) algorithm to realize accurate camera pose control from the previously obtained LiDAR point cloud data, and finally expanded its control range in the local map construction. Furthermore, an innovative double-window segment-based map optimization method is proposed to ensure consistency, better real-time performance, and high accuracy of map construction. The proposed method was tested for long-distance tracking and closed loops in two different large indoor scenarios. The experimental results indicated that the standard deviation of the 3D map construction is 10 cm over a mapping distance of 100 m, compared with the LiDAR ground truth.
Further, the relative cumulative error of the camera in closed-loop experiments is 0.09%, far lower than that of a typical SLAM algorithm (3.4%). Therefore, the proposed method was demonstrated to be more robust than the ORB-SLAM2 algorithm in complex indoor environments. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)
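
The point-to-surface ICP step described above alternates between matching points and updating the pose. A toy, translation-only point-to-point variant in 2D conveys the loop (the real method also estimates rotation and matches points to surfaces; all names here are illustrative):

```python
def icp_translation(src, dst, iters=10):
    """Toy point-to-point ICP restricted to 2-D translation: match each
    source point to its nearest destination point, then shift the source
    by the mean residual; repeat until the estimate settles."""
    pts = [list(p) for p in src]
    tx = ty = 0.0
    for _ in range(iters):
        dx_sum = dy_sum = 0.0
        for x, y in pts:
            # nearest-neighbour correspondence (brute force)
            nx, ny = min(dst, key=lambda q: (q[0] - x) ** 2 + (q[1] - y) ** 2)
            dx_sum += nx - x
            dy_sum += ny - y
        dx, dy = dx_sum / len(pts), dy_sum / len(pts)
        for p in pts:
            p[0] += dx
            p[1] += dy
        tx += dx
        ty += dy
    return tx, ty

dst = [(0, 0), (1, 0), (0, 1), (1, 1)]
src = [(x + 0.3, y - 0.2) for x, y in dst]   # same square, shifted
t = icp_translation(src, dst)                # recovers ≈ (-0.3, 0.2)
```

Because the initial offset is smaller than the point spacing, every nearest-neighbour match is correct and the toy example converges in one iteration; real ICP needs a good initial guess for the same reason.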

27 pages, 19655 KiB  
Article
A Layer-Wise Strategy for Indoor As-Built Modeling Using Point Clouds
by Lei Xie, Ruisheng Wang, Zutao Ming and Dong Chen
Appl. Sci. 2019, 9(14), 2904; https://doi.org/10.3390/app9142904 - 19 Jul 2019
Cited by 14 | Viewed by 3129
Abstract
The automatic modeling of as-built building interiors, known as indoor building reconstruction, is gaining increasing attention because of its widespread applications. With the development of sensors to acquire high-quality point clouds, a new modeling scheme called scan-to-BIM (building information modeling) emerged as well. However, the traditional scan-to-BIM process is time-consuming and labor-intensive. Most existing automatic indoor building reconstruction solutions can only fit specific types of data or lack detailed model representation. In this paper, we propose a layer-wise method, on the basis of 3D planar primitives, to create 2D floor plans and 3D building models. It can deal with different types of point clouds and retain many structural details with respect to protruding structures, complicated ceilings, and fine corners. The experimental results indicate the effectiveness of the proposed method and its robustness against noise and sparse data. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)

18 pages, 2111 KiB  
Article
Light Transmission in Fog: The Influence of Wavelength on the Extinction Coefficient
by Pierre Duthon, Michèle Colomb and Frédéric Bernardin
Appl. Sci. 2019, 9(14), 2843; https://doi.org/10.3390/app9142843 - 16 Jul 2019
Cited by 30 | Viewed by 7535
Abstract
Autonomous driving is based on innovative technologies that have to ensure that vehicles are driven safely. LiDARs are one of the reference sensors for obstacle detection. However, this technology is affected by adverse weather conditions, especially fog. Different wavelengths are investigated to meet this challenge (905 nm vs. 1550 nm). The influence of wavelength on light transmission in fog is then examined and the results reported. A theoretical approach, calculating the extinction coefficient for different wavelengths, is presented in comparison to measurements with a spectroradiometer in the range of 350 nm–2450 nm. The experiment took place in the French Cerema PAVIN BP platform for intelligent vehicles, which makes it possible to reproduce controlled fogs of different densities for two types of droplet size distribution. Direct spectroradiometer extinction measurements vary in the same way as the models. Finally, the wavelengths for LiDARs should not be chosen on the basis of fog conditions: there is a small difference (<10%) between the extinction coefficients at 905 nm and 1550 nm for the same emitted power in fog. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)
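
The extinction comparison rests on the Beer–Lambert law: over a homogeneous path of length d, transmitted power falls as exp(-βd). A minimal sketch, with Koschmieder's rule of thumb linking β to meteorological visibility (a standard fog approximation, assumed here rather than taken from the paper):

```python
import math

def transmission(beta_per_m: float, distance_m: float) -> float:
    """Beer-Lambert single-pass transmission: exp(-beta * d)."""
    return math.exp(-beta_per_m * distance_m)

def extinction_from_visibility(visibility_m: float) -> float:
    """Koschmieder relation: beta ≈ -ln(0.05) / V ≈ 3 / V."""
    return -math.log(0.05) / visibility_m

beta = extinction_from_visibility(50.0)   # dense fog: 50 m visibility
t_100m = transmission(beta, 100.0)        # two visibility lengths: ~0.25% left
```

A lidar return suffers this attenuation twice (out and back), which is why dense fog collapses the effective range so quickly regardless of the 905 nm vs. 1550 nm choice.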

16 pages, 4328 KiB  
Article
Airborne Waveform Lidar Simulator Using the Radiative Transfer of a Laser Pulse
by Minsu Kim
Appl. Sci. 2019, 9(12), 2452; https://doi.org/10.3390/app9122452 - 15 Jun 2019
Cited by 6 | Viewed by 3100
Abstract
An airborne lidar simulator creates a lidar point cloud from a simulated lidar system, flight parameters, and the terrain digital elevation model (DEM). At the basic level, the lidar simulator computes the range from a lidar system to the surface of a terrain using the geomatics lidar equation. This simple computation effectively assumes that the beam divergence is zero. If the beam spot is meaningfully large, due to large beam divergence combined with high sensor altitude, then the beam plane with a finite size interacts with a ground target in a realistic and complex manner. The irradiance distribution of a delta-pulse beam plane is defined based on laser pulse radiative transfer. The airborne lidar simulator in this research simulates the interaction between the delta-pulse and a three-dimensional (3D) object, resulting in a waveform. The waveform is then convolved with a system response function. The lidar simulator also computes the total propagated uncertainty (TPU). All sources of the uncertainties associated with the position of the lidar point, and the detailed geomatics equations to compute the TPU, are described. A boresighting error analysis and a 3D accuracy assessment are provided as example applications of the simulator. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)
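
The convolution step is plain discrete convolution of the scene's delta-pulse response with the system response function. A minimal pure-Python sketch (the sample values are made up for illustration):

```python
def convolve(signal, kernel):
    """Discrete convolution: the recorded waveform is the delta-pulse
    scene response smeared by the combined laser/detector response."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

scene = [0, 0, 1.0, 0, 0, 0.5, 0]     # two returns, e.g. canopy + ground
system = [0.25, 0.5, 0.25]            # normalized system response function
waveform = convolve(scene, system)    # smoothed two-peak waveform
```

Because the system response integrates to one here, total energy is preserved; only the temporal shape of the returns is broadened.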

14 pages, 7461 KiB  
Article
Testing and Validation of Automotive Point-Cloud Sensors in Adverse Weather Conditions
by Maria Jokela, Matti Kutila and Pasi Pyykönen
Appl. Sci. 2019, 9(11), 2341; https://doi.org/10.3390/app9112341 - 07 Jun 2019
Cited by 73 | Viewed by 6793
Abstract
Light detection and ranging sensors (LiDARs) are the most promising devices for range sensing in automated cars and have therefore been under intensive development for the last five years. Even though various types of resolutions and scanning principles have been proposed, adverse weather conditions are still challenging for optical sensing principles. This paper investigates methods proposed in the literature and adopts a common validation method to perform both indoor and outdoor tests examining how fog and snow affect the performance of different LiDARs. As suspected, the performance degraded with all tested sensors, but their behavior was not identical. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)

18 pages, 2878 KiB  
Article
Calibration of a Rotating or Revolving Platform with a LiDAR Sensor
by Mario Claer, Alexander Ferrein and Stefan Schiffer
Appl. Sci. 2019, 9(11), 2238; https://doi.org/10.3390/app9112238 - 30 May 2019
Cited by 6 | Viewed by 3022
Abstract
Perceiving its environment in 3D is an important ability for a modern robot. Today, this is often done using LiDARs, which, however, come with a strongly limited field of view (FOV). To extend their FOV, the sensors are mounted on driving vehicles in several different ways. This allows 3D perception even with 2D LiDARs if a corresponding localization system or technique is available. Another popular way to gain the most information from the scanners is to mount them on a rotating carrier platform. In this way, their measurements in different directions can be collected and transformed into a common frame, in order to achieve a nearly full spherical perception. However, this is only possible if the kinematic chains of the platforms are known exactly, that is, if the LiDAR pose w.r.t. its rotation center is well known. The manual measurement of these chains is often very cumbersome or sometimes even impossible to do with the necessary precision. Our paper proposes a method to calibrate the extrinsic LiDAR parameters by decoupling the rotation from the full six-degrees-of-freedom transform and optimizing both separately. Thus, one error measure for the orientation and one for the translation with known orientation are minimized subsequently with a combination of a consecutive grid search and a gradient descent. Both error measures are inferred from spherical calibration targets. Our experiments with the method suggest that the main influences on the calibration results come from the distance to the calibration targets, the accuracy of their center point estimation, and the search grid resolution. However, our proposed calibration method improves the extrinsic parameters even with unfavourable configurations and from inaccurate initial pose guesses. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)
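
The two-stage optimization described above (coarse grid search, then gradient refinement) can be illustrated on a one-dimensional toy error curve; the paper applies the same pattern separately to orientation and translation errors. Names and values here are illustrative:

```python
def grid_then_descend(err, grid, lr=0.1, steps=200, h=1e-5):
    """Coarse grid search picks a starting point; numerical gradient
    descent then refines it."""
    x = min(grid, key=err)                     # grid search stage
    for _ in range(steps):                     # gradient descent stage
        grad = (err(x + h) - err(x - h)) / (2 * h)   # central difference
        x -= lr * grad
    return x

err = lambda x: (x - 0.321) ** 2 + 1.0         # toy error, minimum at 0.321
grid = [i * 0.5 - 2.0 for i in range(9)]       # -2.0, -1.5, ..., 2.0
best = grid_then_descend(err, grid)            # converges to ≈ 0.321
```

The grid stage guards against landing in a distant basin; the descent stage recovers precision the grid resolution cannot provide, which mirrors the paper's observation that grid resolution is one of the main influences on the result.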

15 pages, 7465 KiB  
Article
An Optical Interference Suppression Scheme for TCSPC Flash LiDAR Imagers
by Lucio Carrara and Adrian Fiergolski
Appl. Sci. 2019, 9(11), 2206; https://doi.org/10.3390/app9112206 - 29 May 2019
Cited by 23 | Viewed by 5637
Abstract
This paper describes an optical interference suppression scheme that allows flash light detection and ranging (LiDAR) imagers to run safely and reliably in uncontrolled environments where multiple LiDARs are expected to operate concurrently. The issue of optical interference is a potential show-stopper for the adoption of flash LiDAR as a technology of choice in multi-user application fields such as automotive sensing and autonomous vehicle navigation. The relatively large emission angle and field of view of flash LiDAR imagers make them especially vulnerable to optical interference. This work illustrates how a time-correlated single-photon counting LiDAR can control the timing of its laser emission to reduce its statistical correlation to other modulated or pulsed light sources. The method is based on a variable random delay applied both to the laser pulse generated by the LiDAR and to the internal circuitry measuring the time-of-flight. The statistical properties of the pseudorandom sequence of delays determine the effectiveness of the LiDAR's resilience against unintentional and intentional optical interference. For basic multi-camera operation, a linear feedback shift register (LFSR) was used as a random delay generator, and the performance of the interference suppression was evaluated as a function of sequence length and integration time. Direct interference from an identical LiDAR emitter pointed at the same object was reduced by up to 50 dB. Changing the integration time between 10 ms and 100 ms showed a marginal impact on the performance of the suppression (less than 3 dB deviation). LiDAR signal integrity was characterized during suppression, with a maximum relative deviation of the measured time-of-flight of 0.1% and a maximum deviation of the measurement spread (full-width half-maximum) of 3%. The LiDAR signal presented an expected worst-case reduction in intensity of 25%. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)
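
The random delay generator named above is a linear feedback shift register. A 16-bit Fibonacci LFSR with the common maximal-length taps (16, 14, 13, 11) sketches the idea; mapping the state to a delay slot is an illustrative choice, not the paper's exact circuit:

```python
def lfsr16(state: int):
    """16-bit Fibonacci LFSR with taps at bits 16, 14, 13, 11 (a
    maximal-length polynomial: the state sequence only repeats after
    2**16 - 1 steps), yielding a pseudo-random state stream."""
    while True:
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state

gen = lfsr16(0xACE1)                          # any nonzero seed works
delays = [next(gen) % 64 for _ in range(8)]   # pseudo-random delay slots
```

A longer register (or a cryptographic generator) lengthens the sequence before it repeats, which is what the paper's sequence-length study varies: correlation with a rival emitter only averages out if the delay pattern does not repeat within the integration time.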

13 pages, 5726 KiB  
Article
High Spatial Resolution Three-Dimensional Imaging Static Unitary Detector-Based Laser Detection and Ranging Sensor
by Hyeon-June Kim, Eun-Gyu Lee, Han-Woong Choi and Choul-Young Kim
Appl. Sci. 2019, 9(10), 2145; https://doi.org/10.3390/app9102145 - 26 May 2019
Viewed by 2981
Abstract
This paper presents a static unitary detector (STUD)-based laser detection and ranging (LADAR) sensor with a 16-to-1 transimpedance-combining amplifier (TICA) for high spatial resolution three-dimensional (3-D) applications. In order to read out a large photodetector area for better 3-D information without any reduction in bandwidth, the partitioning photosensitive cell method is embedded in the 16-to-1 TICA. The effective number of partitioning photosensitive cells and signal-combining stages is selected based on an analysis of the partitioning photosensitive cell method for optimum TICA performance. A prototype chip was fabricated in a 0.18-μm CMOS technology. The input-referred noise is 41.9 pA/√Hz with a bandwidth of 230 MHz and a transimpedance gain of 70.4 dB·Ω. The total power consumption of the prototype chip is approximately 86 mW from a 1.8-V supply, of which the TICA consumes approximately 15.4 mW. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)
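
Transimpedance gain quoted in dB·Ω converts back to ohms with the usual 20·log10 rule, so the 70.4 dB·Ω above corresponds to a few kilohms of current-to-voltage gain:

```python
def dbohm_to_ohms(gain_dbohm: float) -> float:
    """Convert transimpedance gain from dB·Ω to ohms: Z = 10**(G/20)."""
    return 10.0 ** (gain_dbohm / 20.0)

z = dbohm_to_ohms(70.4)   # ≈ 3.3 kΩ: 1 µA of photocurrent gives ≈ 3.3 mV out
```

This is why the input-referred noise is quoted in pA/√Hz: dividing the output noise voltage density by the transimpedance brings it back to an equivalent photocurrent.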

27 pages, 10985 KiB  
Article
Feature-Preserved Point Cloud Simplification Based on Natural Quadric Shape Models
by Kun Zhang, Shiquan Qiao, Xiaohong Wang, Yongtao Yang and Yongqiang Zhang
Appl. Sci. 2019, 9(10), 2130; https://doi.org/10.3390/app9102130 - 24 May 2019
Cited by 32 | Viewed by 3830
Abstract
With the development of 3D scanning technology, huge volumes of point cloud data can be collected at low cost. The huge data set is the main burden during the data processing of point clouds, so point cloud simplification is critical. The main aim of point cloud simplification is to reduce data volume while preserving the data features. Therefore, this paper provides a new method for point cloud simplification, named FPPS (feature-preserved point cloud simplification). In FPPS, a point cloud simplification entropy is defined, which quantifies features hidden in point clouds. According to the simplification entropy, key points including the majority of the geometric features are selected. Then, based on the natural quadric shape, we introduce a point cloud matching model (PCMM), by which the simplification rules are set. Additionally, the similarity between the PCMM and the neighbors of the key points is measured by the shape operator. This represents the criteria for the adaptive simplification parameters in FPPS. Finally, the experiment verifies the feasibility of FPPS and compares FPPS with four other point cloud simplification algorithms. The results show that FPPS is superior to the other simplification algorithms. In addition, FPPS can partially recognize noise. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)

17 pages, 4018 KiB  
Article
A Simultaneous Localization and Mapping (SLAM) Framework for 2.5D Map Building Based on Low-Cost LiDAR and Vision Fusion
by Guolai Jiang, Lei Yin, Shaokun Jin, Chaoran Tian, Xinbo Ma and Yongsheng Ou
Appl. Sci. 2019, 9(10), 2105; https://doi.org/10.3390/app9102105 - 22 May 2019
Cited by 57 | Viewed by 10336
Abstract
The method of simultaneous localization and mapping (SLAM) using a light detection and ranging (LiDAR) sensor is commonly adopted for robot navigation. However, consumer robots are price sensitive and often have to use low-cost sensors. Due to the poor performance of a low-cost LiDAR, error accumulates rapidly during SLAM and may cause large errors when building a larger map. To cope with this problem, this paper proposes a new graph optimization-based SLAM framework combining a low-cost LiDAR sensor and a vision sensor. In the SLAM framework, a new cost function considering both scan and image data is proposed, and the Bag of Words (BoW) model with visual features is applied for loop-closure detection. A 2.5D map presenting both obstacles and vision features is also proposed, as well as a fast relocation method using the map. Experiments were carried out on a service robot equipped with a 360° low-cost LiDAR and a front-view RGB-D camera in a real indoor scene. The results show that the proposed method performs better than using LiDAR or camera only, while relocation with our 2.5D map is much faster than with a traditional grid map. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)

15 pages, 5283 KiB  
Article
Evaluation and Improvement of Lidar Performance Based on Temporal and Spatial Variance Calculation
by Fei Gao, Xinxin Xu, Qingsong Zhu, Li Wang, Tingyao He, Longlong Wang, Samo Stanič and Dengxin Hua
Appl. Sci. 2019, 9(9), 1786; https://doi.org/10.3390/app9091786 - 29 Apr 2019
Cited by 2 | Viewed by 2646
Abstract
Poisson distributions have the characteristic of equality between their variance and mean values. By constructing a calculation model of the temporal variance and spatial variance, the relationship between the variance and mean values of lidar analog data and photon-counting data can be analyzed. The calculation results show that the photon-counting data from the far field have the distribution property of equality between the variances and the corresponding mean values, while the analog data for the whole probing traces do not. In this paper, by analyzing the distribution properties of the spatial variance and temporal variance of lidar data, the dead time of the photon-counting data was estimated, and the threshold voltage of the photon-counting system and the linear working range of the photomultiplier tube were evaluated. The results show that the linear working range of the high voltage for the photomultiplier tube in the ultraviolet elastic scanning lidar is between −500 V and −1000 V, and the dead time and threshold voltage of the photon-counting system in the Licel transient recorder are 3.488 ns and 1.20 mV, respectively. Meanwhile, a novel gluing method between analog data and photon-counting data is presented, based on the calculation results of the variance distribution of lidar data. The linear transfer coefficients were determined by minimizing the differences between the variance and mean of the transformed photon-counting data in the near field with a high signal-to-noise ratio. The glued data express the atmospheric conditions uniformly. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)
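
Two pieces of the analysis above are easy to reproduce: the standard non-paralyzable dead-time correction N_true = N_meas / (1 − N_meas·τ), and the variance-equals-mean Poisson signature. A sketch (the 3.488 ns figure is the value reported in the abstract; everything else is illustrative):

```python
import math
import random

def deadtime_correct(measured_mhz: float, dead_time_ns: float) -> float:
    """Non-paralyzable dead-time correction: N_true = N / (1 - N * tau)."""
    tau_us = dead_time_ns * 1e-3            # MHz is counts per microsecond
    return measured_mhz / (1.0 - measured_mhz * tau_us)

corrected = deadtime_correct(10.0, 3.488)   # ~10.36 MHz true rate

def poisson_sample(lam: float) -> int:
    """Knuth's method: count uniform draws until their product drops
    below exp(-lambda)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

# Variance ≈ mean is the Poisson property the paper's evaluation relies on.
random.seed(1)
counts = [poisson_sample(10.0) for _ in range(5000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
```

Dead time suppresses the variance-to-mean ratio below one at high count rates, which is exactly the deviation from Poisson statistics the paper exploits to estimate τ.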

28 pages, 17336 KiB  
Article
The Influence of the Cartographic Transformation of TLS Data on the Quality of the Automatic Registration
by Jakub Markiewicz and Dorota Zawieska
Appl. Sci. 2019, 9(3), 509; https://doi.org/10.3390/app9030509 - 01 Feb 2019
Cited by 7 | Viewed by 3645
Abstract
This paper discusses the influence of the cartographic conversion of Terrestrial Laser Scanning (TLS) data on feature-based automatic registration. Automatic registration of the data is a multi-stage process based on original software tools and consists of: (1) conversion of the data to raster form, (2) registration of TLS data in pairs in all possible combinations using the SURF (Speeded Up Robust Features) and FAST (Features from Accelerated Segment Test) algorithms, (3) quality analysis of the relative orientation of the processed pairs, and (4) the final bundle adjustment. The following two problems, related to the influence of the spherical image, the orthoimage, and the Mercator representation of the point cloud, are discussed: the correctness of the automatic tie-point detection and distribution, and the influence of the TLS position on the completeness of the registration process and the quality assessment. The majority of popular software applications use manually or semi-automatically determined corresponding points. To address the first issue, however, the authors propose an original software tool which automatically detects and matches corresponding points on each TLS raster representation, utilizing different algorithms (SURF and FAST). To address the second task, the authors present a series of analyses: the time of detection of characteristic points, the percentage of incorrectly detected and adjusted characteristic points, the number of detected control and check points, the orientation accuracy of control and check points, and the distribution of control and check points. The selection of an appropriate method for converting the TLS point cloud to raster form, and of an appropriate algorithm, considerably influences the completeness of the entire process and the accuracy of data orientation.
The results of the performed experiments show that fully automatic registration of TLS point clouds in raster form is possible; however, it is not possible to propose one universal form of the point cloud, because a priori knowledge concerning the scanner positions is required. If scanner stations are located close to one another in raster images or in spherical images, Mercator projections are recommended. In cases where fragments of the surface are measured under different angles, from different distances and heights of the TLS, orthoimages are suggested. Full article
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)

15 pages, 5236 KiB  
Article
An Improved Skewness Balancing Filtering Algorithm Based on Thin Plate Spline Interpolation
by Penggen Cheng, Zhenyang Hui, Yuanping Xia, Yao Yevenyo Ziggah, Youjian Hu and Jing Wu
Appl. Sci. 2019, 9(1), 203; https://0-doi-org.brum.beds.ac.uk/10.3390/app9010203 - 08 Jan 2019
Cited by 14 | Viewed by 3708
Abstract
Most filtering algorithms suffer from complex parameter settings or threshold adjustment. To address this problem, this paper proposes an improved skewness balancing filtering algorithm based on thin plate spline (TPS) interpolation. The proposed algorithm filters the nonground points in an iterative manner. A reference surface that reflects the fluctuation of the terrain is generated using the TPS interpolation method, and the elevation difference from each point to this surface is calculated. By applying the skewness balancing principle to these elevation differences, nonground points can be removed automatically. To verify the validity and robustness of the proposed method, the datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) were adopted. The experimental results show that the presented method can adapt to complex environments and achieves a higher filtering accuracy than the traditional skewness balancing algorithm. Moreover, in comparison with the eight other filtering methods tested by the ISPRS and four recently proposed improved filtering methods, the proposed method achieved an average total error of 5.39%, smaller than that of most of the other methods. Full article
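The skewness balancing principle itself is simple to sketch: assuming ground residuals are roughly symmetrically distributed, points are removed from the top until the sample skewness of the elevation differences is no longer positive. The one-dimensional NumPy sketch below uses hypothetical names; the paper computes the differences against a TPS reference surface and iterates the whole procedure.

```python
import numpy as np

def skewness(values):
    """Sample skewness: E[(v - mean)^3] / std^3."""
    v = np.asarray(values, dtype=float)
    mu, sigma = v.mean(), v.std()
    if sigma == 0:
        return 0.0
    return np.mean((v - mu) ** 3) / sigma ** 3

def skewness_balance(dz):
    """Iteratively drop the largest elevation difference until the
    remaining sample is no longer positively skewed. Returns a boolean
    ground mask. Simplified sketch of the balancing step only."""
    dz = np.asarray(dz, dtype=float)
    ground = np.ones(len(dz), dtype=bool)
    while skewness(dz[ground]) > 0 and ground.sum() > 3:
        # Remove the point farthest above the reference surface.
        idx = np.flatnonzero(ground)
        ground[idx[np.argmax(dz[idx])]] = False
    return ground

# Flat-ground residuals near zero plus two tall nonground points.
dz = np.array([0.0, 0.1, -0.1, 0.05, 12.0, 9.5])
mask = skewness_balance(dz)
```

On this toy input the two large residuals (buildings or vegetation, say) are flagged as nonground while the near-zero residuals survive as ground.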
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)

13 pages, 2653 KiB  
Article
Semi-Analytic Monte Carlo Model for Oceanographic Lidar Systems: Lookup Table Method Used for Randomly Choosing Scattering Angles
by Peng Chen, Delu Pan, Zhihua Mao and Hang Liu
Appl. Sci. 2019, 9(1), 48; https://0-doi-org.brum.beds.ac.uk/10.3390/app9010048 - 24 Dec 2018
Cited by 14 | Viewed by 2846
Abstract
The Monte Carlo (MC) method is an important technique for solving the radiative transfer equation (RTE). The Henyey-Greenstein (HG) scattering phase function (spf) is widely used in the core procedure of randomly choosing scattering angles in oceanographic lidar MC simulations; however, the HG phase function does not work well at small or large scattering angles. Other spfs, e.g., the Fournier-Forand (FF) phase function, perform better, but solving the cumulative distribution function (cdf) of such a scattering phase function (even when possible) results in a complicated formula. To avoid these problems, this paper presents a semi-analytic MC radiative transfer model that uses the cdf equation to build a lookup table (LUT) of ψ vs. P_Ψ(ψ) to determine scattering angles for various spfs (e.g., FF, the Petzold measured particle phase function, and so on). Moreover, a lidar geometric model for analytically estimating the probability of a photon scattering back to a remote receiver was developed; in particular, inhomogeneous layers are divided into voxels with different optical properties, making the model applicable to inhomogeneous water. First, simulations using the inverse-function method for the HG cdf were compared with the LUT method for the FF cdf. Then, multiple scattering and wind-driven sea surface effects were studied. Finally, the simulation results were compared with airborne lidar measurements. The mean relative errors between simulation and measurement in inhomogeneous water are within 14% for the LUT method and within 22% for the inverse cdf (ICDF) method. These results suggest the feasibility and effectiveness of the proposed simulation model. Full article
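The LUT idea can be sketched as follows: tabulate the cdf of the spf numerically, then invert it by interpolation to draw scattering angles. The NumPy sketch below uses the HG function only because it is compact to write down; the LUT route exists precisely for spfs whose cdf has no convenient closed form. Names are hypothetical, not the authors' code.

```python
import numpy as np

def cdf_lut(phase_function, n=2048):
    """Tabulate the cdf P(psi) of a scattering phase function over
    [0, pi] by trapezoidal integration of p(psi) * sin(psi)."""
    psi = np.linspace(0.0, np.pi, n)
    pdf = phase_function(psi) * np.sin(psi)
    cdf = np.concatenate(
        ([0.0], np.cumsum((pdf[1:] + pdf[:-1]) / 2 * np.diff(psi))))
    return psi, cdf / cdf[-1]              # normalise so P(pi) = 1

def sample_angles(psi, cdf, n_photons, rng):
    """Draw scattering angles by inverting the tabulated cdf with
    linear interpolation -- works for any tabulated spf (FF, Petzold, ...)."""
    return np.interp(rng.random(n_photons), cdf, psi)

def hg(psi, g=0.9):
    """Henyey-Greenstein spf; stands in here for any tabulated spf."""
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * np.cos(psi)) ** 1.5)

psi, cdf = cdf_lut(hg)
angles = sample_angles(psi, cdf, 10000, np.random.default_rng(0))
```

With g = 0.9 the sampled distribution is strongly forward-peaked, as expected for oceanic particles; swapping in a measured Petzold table requires no change to the sampling code, which is the point of the LUT method.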
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)

12 pages, 820 KiB  
Article
Time Resolution Improvement Using Dual Delay Lines for Field-Programmable-Gate-Array-Based Time-to-Digital Converters with Real-Time Calibration
by Yuan-Ho Chen
Appl. Sci. 2019, 9(1), 20; https://0-doi-org.brum.beds.ac.uk/10.3390/app9010020 - 21 Dec 2018
Cited by 13 | Viewed by 6031
Abstract
This paper presents a time-to-digital converter (TDC) based on a field-programmable gate array (FPGA) with a tapped delay line (TDL) architecture. The converter employs dual delay lines (DDLs) to enable real-time calibration: the proposed DDL-TDC measures the statistical distribution of delays to calibrate the nonuniform delay cells typical of FPGA-based TDC designs. The DDLs are also used to set up alternating calibration, so that environmental effects are immediately accounted for. Experimental results revealed that, relative to a conventional TDL-TDC, the proposed DDL-TDC reduced the maximum differential nonlinearity by 26% and the integral nonlinearity by 30%. A root-mean-squared value of 32 ps was measured by inputting a constant delay source into the proposed DDL-TDC. The proposed scheme also maintained excellent linearity across a range of temperatures. Full article
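The statistical calibration underlying such designs (often called code-density calibration) can be sketched as follows: with uniformly distributed random hits, each delay cell's hit count is proportional to its delay width, so cumulative widths yield calibrated timestamps. A minimal NumPy sketch with hypothetical names; the paper's contribution is performing this on-line on a second delay line while the first one keeps measuring.

```python
import numpy as np

def code_density_calibration(hit_counts, clock_period):
    """Statistical (code-density) calibration of a tapped delay line.

    hit_counts[i] is the number of uniformly distributed random hits
    that landed in delay cell i. Each cell's count is then proportional
    to its delay width, so the calibrated timestamp for a code is the
    cumulative width up to that cell plus half the cell's own width."""
    counts = np.asarray(hit_counts, dtype=float)
    widths = counts / counts.sum() * clock_period    # per-cell delay width
    edges = np.concatenate(([0.0], np.cumsum(widths)))
    return edges[:-1] + widths / 2                   # bin-centre timestamps

# Four nonuniform cells measured within a 4 ns clock period.
stamps = code_density_calibration([100, 300, 100, 500], clock_period=4.0)
```

A raw code of, say, 3 is thus mapped to 3.0 ns rather than the 3.5 ns a naive uniform-bin assumption would give, which is exactly the nonlinearity the calibration removes.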
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)

19 pages, 7612 KiB  
Article
Development of Image Processing for Crack Detection on Concrete Structures through Terrestrial Laser Scanning Associated with the Octree Structure
by Soojin Cho, Seunghee Park, Gichun Cha and Taekeun Oh
Appl. Sci. 2018, 8(12), 2373; https://0-doi-org.brum.beds.ac.uk/10.3390/app8122373 - 23 Nov 2018
Cited by 30 | Viewed by 7920
Abstract
Terrestrial laser scanning (TLS) provides a rapid remote sensing technique to model 3D objects but can also be used to assess the surface condition of structures. In this study, an effective image processing technique is proposed for crack detection on images extracted from the octree structure of TLS data. To efficiently utilize TLS for the surface condition assessment of large structures, a process was constructed to compress the original scanned data based on the octree structure. The point cloud data obtained by TLS were converted into voxel data and further into an octree data structure, which significantly reduced the data size while minimizing the loss of resolution needed to detect cracks on the surface. The compressed data were then used to detect cracks on the surface using a combination of image processing algorithms. The crack detection procedure involved the following main steps: (1) classification of an image into three categories (background; structural joints and sediments; surface) using K-means clustering according to color similarity, (2) deletion of non-crack parts on the surface using improved subtraction combined with median filtering and the K-means clustering results, (3) detection of major crack objects on the surface based on Otsu's binarization method, and (4) highlighting of crack objects by morphological operations. The proposed technique was validated on a spillway wall of a concrete dam structure in South Korea. The scanned data were compressed to 50% of their original size, and the technique showed good performance in detecting cracks of various shapes. Full article
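Step (3), Otsu's binarization, can be sketched in a few lines of NumPy: the threshold is the gray level that maximizes the between-class variance of the image histogram. Names are hypothetical, and the paper applies this only after the K-means clustering and subtraction steps have isolated the surface.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method for an 8-bit grayscale image: choose the threshold
    that maximises the between-class variance of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    omega = np.cumsum(prob)            # class-0 (dark) probability
    mu = np.cumsum(prob * levels)      # class-0 cumulative mean mass
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold; the
    # endpoints divide by zero, so suppress and zero them out.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Dark crack pixels (~20) on a bright concrete surface (~200).
img = np.array([[20] * 5 + [200] * 95] * 10, dtype=np.uint8)
t = otsu_threshold(img)
crack_mask = img <= t
```

On this synthetic patch the threshold falls between the two gray-level clusters, so the mask isolates exactly the dark crack pixels, which the morphological operations of step (4) would then clean up and connect.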
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)

13 pages, 5901 KiB  
Article
LIDAR Point Cloud Registration for Sensing and Reconstruction of Unstructured Terrain
by Qingyuan Zhu, Jinjin Wu, Huosheng Hu, Chunsheng Xiao and Wei Chen
Appl. Sci. 2018, 8(11), 2318; https://0-doi-org.brum.beds.ac.uk/10.3390/app8112318 - 21 Nov 2018
Cited by 11 | Viewed by 5015
Abstract
When 3D laser scanning (LIDAR) is used for the navigation of autonomous vehicles on unstructured terrain, the acquired point clouds must be registered and the terrain accurately reconstructed in time. This paper proposes a novel registration method to deal with the uneven density and high noise of unstructured terrain point clouds. It operates in two steps, namely initial registration and accurate registration. Multisensor data are first used for the initial registration. An improved Iterative Closest Point (ICP) algorithm is then deployed for accurate registration. This algorithm extracts key points and builds feature descriptors based on the neighborhood normal vector, point cloud density, and curvature. An adaptive threshold is introduced to accelerate iterative convergence. Experimental results show that the two-step registration method can effectively handle the uneven density and high noise of unstructured terrain point clouds, thereby improving the accuracy of terrain point cloud reconstruction. Full article
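The core of one ICP iteration, nearest-neighbour matching followed by a closed-form rigid transform, can be sketched with NumPy. This is a bare-bones illustration with hypothetical names; the paper's improved ICP additionally uses key points, feature descriptors, and an adaptive convergence threshold.

```python
import numpy as np

def icp_step(source, target):
    """One rigid ICP iteration: match each source point to its nearest
    target point (brute force), then solve the least-squares rigid
    transform with the SVD (Kabsch) method."""
    # Nearest-neighbour correspondences (O(N^2); fine for a sketch).
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    matched = target[np.argmin(d2, axis=1)]

    # Kabsch: optimal rotation and translation between matched sets.
    mu_s, mu_t = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, sign])        # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return source @ R.T + t

# A 3x3x3 grid shifted by a small translation: one step recovers the
# alignment exactly because every nearest-neighbour match is correct.
g = np.arange(3, dtype=float)
src = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
tgt = src + np.array([0.1, 0.2, 0.05])
aligned = icp_step(src, tgt)
```

With a larger initial misalignment the nearest-neighbour matches are partly wrong and several iterations are needed, which is why a good initial registration (here provided by the multisensor step) matters so much for convergence.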
(This article belongs to the Special Issue LiDAR and Time-of-flight Imaging)
