Indoor LiDAR/Vision Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (31 December 2017) | Viewed by 58210

Special Issue Editors


Prof. Dr. Zhizhong Kang
Guest Editor
Department of Remote Sensing and Geo-Information Engineering, School of Land Science and Technology, China University of Geosciences (Beijing), Xueyuan Road 29, Haidian District, Beijing 100083, China
Interests: digital photogrammetry and computer vision; processing of indoor, terrestrial and air-borne LiDAR data; indoor 3D modeling

Prof. Dr. Jonathan Li
Guest Editor
Geospatial Sensing and Data Intelligence Lab, Faculty of Environment, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada
Interests: LiDAR remote sensing; point cloud understanding; deep learning; 3D vision; HD maps for smart cities and autonomous vehicles

Prof. Dr. Cheng Wang
Guest Editor
Department of Computer Science, School of Informatics, Xiamen University, 422 Siming Road South, Xiamen 361005, Fujian, China
Interests: 3D vision; LiDAR; mobile mapping; geospatial big data analysis

Special Issue Information

Dear Colleagues,

With the rising urban population and the increasing complexity of cities as conglomerates of enclosed spaces, there is a growing demand for indoor navigation, positioning, mapping, and modeling. However, the systems commonly used outdoors, e.g., GNSS and airborne, vehicle-based, and terrestrial LiDAR, become unavailable or expensive in indoor environments. Therefore, compact and low-cost sensors such as 2D LiDAR, RGB-D cameras, and vision systems are playing important roles in 3D indoor spatial applications. The move of geospatial sensors from outdoor environments to indoor spaces requires new sensor models, methods, algorithms, and techniques for multi-sensor integration and data fusion.

The aim of this Special Issue is to present current, state-of-the-art research on the development of indoor LiDAR/vision systems, with a focus on multi-sensor integration and data fusion. Contributions are invited on the following topics, but are not limited to them:

  • LiDAR/vision sensor calibration
  • Multi-sensor fusion for indoor mapping
  • Indoor sensing solutions with low-cost sensors in mobile devices
  • Low-cost sensor integration and fusion for indoor positioning and navigation
  • Quality control and evaluation of indoor LiDAR/vision systems
  • SLAM methods for indoor LiDAR/vision systems

Prof. Dr. Zhizhong Kang
Prof. Dr. Jonathan Li
Prof. Dr. Cheng Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • LiDAR/vision system
  • RGB-D camera
  • multi-sensor
  • sensor calibration
  • multi-sensor fusion
  • data quality control/evaluation
  • indoor positioning and navigation
  • indoor mapping

Published Papers (11 papers)


Research

20 pages, 9577 KiB  
Article
Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis
by Yi Zheng, Michael Peter, Ruofei Zhong, Sander Oude Elberink and Quan Zhou
Sensors 2018, 18(6), 1838; https://0-doi-org.brum.beds.ac.uk/10.3390/s18061838 - 05 Jun 2018
Cited by 17 | Viewed by 4013
Abstract
Indoor space subdivision is an important aspect of scene analysis that provides essential information for many applications, such as indoor navigation and evacuation route planning. Until now, most proposed scene understanding algorithms have been based on whole point clouds, which has led to complicated operations, high computational loads and low processing speed. This paper presents novel methods to efficiently extract the location of openings (e.g., doors and windows) and to subdivide space by analyzing scanlines. An opening detection method is demonstrated that analyses the local geometric regularity in scanlines to refine the extracted openings. Moreover, a space subdivision method based on the extracted openings and the scanning system trajectory is described. Finally, the opening detection and space subdivision results are saved as point cloud labels which will be used for further investigations. The method has been tested on a real dataset collected by ZEB-REVO. The experimental results validate the completeness and correctness of the proposed method for different indoor environments and scanning paths. Full article
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
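
As a rough illustration of the scanline idea described above (not the authors' algorithm), the sketch below flags candidate openings in a single scanline as range jumps away from the wall that later return; the function name and thresholds are hypothetical.

```python
import numpy as np

def detect_openings(scanline_ranges, jump_threshold=0.5, min_width=3):
    """Flag candidate openings (e.g., doors, windows) in one scanline.

    scanline_ranges : 1-D array of range measurements ordered along the scan.
    A sudden increase in range followed by a return to the wall distance is
    treated as a candidate opening; this is a simplification for illustration,
    not the paper's exact criterion.
    """
    ranges = np.asarray(scanline_ranges, dtype=float)
    jumps = np.diff(ranges)
    candidates, start = [], None
    for i, d in enumerate(jumps):
        if d > jump_threshold and start is None:          # range jumps away: possible opening start
            start = i + 1
        elif d < -jump_threshold and start is not None:   # range comes back: opening ends
            if (i + 1) - start >= min_width:
                candidates.append((start, i + 1))
            start = None
    return candidates

# Example: a wall at ~2 m with a gap (door) seen through at ~5 m
scan = np.r_[np.full(20, 2.0), np.full(8, 5.0), np.full(20, 2.0)]
print(detect_openings(scan))   # -> [(20, 28)]
```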

16 pages, 25235 KiB  
Article
Behavior Analysis of Novel Wearable Indoor Mapping System Based on 3D-SLAM
by Susana Lagüela, Iago Dorado, Manuel Gesto, Pedro Arias, Diego González-Aguilera and Henrique Lorenzo
Sensors 2018, 18(3), 766; https://0-doi-org.brum.beds.ac.uk/10.3390/s18030766 - 02 Mar 2018
Cited by 35 | Viewed by 5765
Abstract
This paper presents a wearable prototype for indoor mapping developed by the University of Vigo. The system is based on a Velodyne LiDAR that acquires points with 16 rays for a simple, low-density 3D representation of reality. On this basis, a Simultaneous Localization and Mapping (3D-SLAM) method is developed for the mapping and generation of 3D point clouds of scenarios deprived of GNSS signal. The quality of the presented system is validated through comparison with a commercial indoor mapping system, the Zeb-Revo from GeoSLAM, and with a terrestrial LiDAR, the Faro Focus3D X330. The first is considered a relative reference among mobile systems and is chosen because it uses the same mapping principle: SLAM techniques based on the Robot Operating System (ROS). The second is taken as ground truth for determining the final accuracy of the system with respect to reality. Results show that the accuracy of the system is mainly determined by the accuracy of the sensor, with little additional error introduced by the mapping algorithm. Full article
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
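
For readers who want to reproduce this kind of accuracy check, the sketch below computes cloud-to-cloud nearest-neighbour distances against a terrestrial-LiDAR ground truth. It is a generic evaluation sketch, not the paper's exact protocol, and it assumes both clouds are already registered in a common frame; the variable names in the commented usage are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(test_cloud, reference_cloud):
    """Nearest-neighbour distance from every test point to the reference cloud.

    test_cloud, reference_cloud : (N, 3) arrays of XYZ coordinates, metres,
    assumed to be already registered in a common frame.
    """
    tree = cKDTree(reference_cloud)
    dist, _ = tree.query(test_cloud, k=1)
    return dist

# Hypothetical usage with two registered clouds loaded elsewhere:
# errors = cloud_to_cloud_error(wearable_cloud, tls_cloud)
# print(errors.mean(), np.percentile(errors, 95))
```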

16 pages, 6821 KiB  
Article
Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System
by Xufu Mu, Jing Chen, Zixiang Zhou, Zhen Leng and Lei Fan
Sensors 2018, 18(2), 506; https://0-doi-org.brum.beds.ac.uk/10.3390/s18020506 - 08 Feb 2018
Cited by 18 | Viewed by 5016
Abstract
The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to be distinguished under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. Full article
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
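
A minimal sketch of the gravity-refinement idea, assuming the gravity magnitude is known and a residual function (e.g., accumulated pre-integration error) is supplied by the caller; the paper's actual cost function and solver are not reproduced here, and all names are illustrative.

```python
import numpy as np

GRAVITY_MAG = 9.81  # known gravity magnitude

def tangent_basis(g):
    """Two unit vectors spanning the plane orthogonal to the gravity direction g."""
    g = g / np.linalg.norm(g)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(g[0]) > 0.9:                      # avoid a near-parallel helper axis
        helper = np.array([0.0, 1.0, 0.0])
    b1 = np.cross(g, helper); b1 /= np.linalg.norm(b1)
    b2 = np.cross(g, b1)
    return b1, b2

def refine_gravity(g_init, residual_fn, n_iter=10, eps=1e-6):
    """Refine a gravity estimate of known magnitude by optimizing a 2-D
    perturbation in its tangent space (Gauss-Newton with a numeric Jacobian).

    residual_fn(g) must return a residual vector; it is a placeholder for
    whatever cost the initializer actually uses.
    """
    g = GRAVITY_MAG * g_init / np.linalg.norm(g_init)
    for _ in range(n_iter):
        b1, b2 = tangent_basis(g)
        r0 = residual_fn(g)
        # numeric Jacobian of the residual w.r.t. the 2-D tangent perturbation
        J = np.column_stack([
            (residual_fn(g + GRAVITY_MAG * eps * b) - r0) / (GRAVITY_MAG * eps)
            for b in (b1, b2)
        ])
        delta, *_ = np.linalg.lstsq(J, -r0, rcond=None)
        g = g + GRAVITY_MAG * (delta[0] * b1 + delta[1] * b2)
        g = GRAVITY_MAG * g / np.linalg.norm(g)   # re-project onto the gravity sphere
    return g
```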

16 pages, 3933 KiB  
Article
Study of the Integration of the CNU-TS-1 Mobile Tunnel Monitoring System
by Liming Du, Ruofei Zhong, Haili Sun, Qiang Zhu and Zhen Zhang
Sensors 2018, 18(2), 420; https://0-doi-org.brum.beds.ac.uk/10.3390/s18020420 - 01 Feb 2018
Cited by 22 | Viewed by 4219
Abstract
A rapid, precise and automated means for the regular inspection and maintenance of a large number of tunnels is needed. Based on an in-depth study of tunnel monitoring methods, the CNU-TS-1 mobile tunnel monitoring system (TS1) is developed and presented. It can efficiently obtain cross-sections that are orthogonal to the tunnel axis in a dynamic way, and control measurements that depend on design data are eliminated. By using odometers to locate the cross-sections and correcting the data based on the longitudinal joints of the tunnel segment lining, the cost of the system has been significantly reduced, and the interval between adjacent cross-sections can reach 1–2 cm when the system is pushed at a normal walking speed. Meanwhile, the relative deformation of the tunnel can be analyzed by selecting cross-sections from the original data. Through measurement of an actual tunnel, the applicability of the system for tunnel deformation detection is verified, and the system is shown to be 15 times more efficient than a total station. A simulation experiment on tunnel deformation indicates that the measurement accuracy of TS1 for cross-sections is 1.1 mm. Compared with the traditional method, TS1 improves efficiency as well as increasing the density of the obtained points. Full article
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
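
A hypothetical sketch of two processing steps implied above: grouping profile points into cross-sections by odometer chainage, and measuring radial deviation against a circular design profile. This is not the TS1 processing chain; the names, the grouping rule and the circular design model are illustrative assumptions.

```python
import numpy as np

def split_by_chainage(chainage, points, interval=0.02):
    """Group profile points into cross-sections every `interval` metres of
    odometer chainage (the paper reports 1-2 cm spacing at walking speed).

    chainage : (N,) odometer reading for each measured point, metres.
    points   : (N, 2) point coordinates in the cross-section plane, metres.
    """
    chainage, points = np.asarray(chainage), np.asarray(points)
    bins = np.floor(chainage / interval).astype(int)
    return {b: points[bins == b] for b in np.unique(bins)}

def radial_deviation(section_points, design_radius, center=(0.0, 0.0)):
    """Signed deviation of one cross-section from a circular design profile;
    a crude stand-in for a full deformation analysis."""
    r = np.linalg.norm(np.asarray(section_points) - np.asarray(center), axis=1)
    return r - design_radius
```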

14 pages, 8457 KiB  
Article
A New Localization System for Indoor Service Robots in Low Luminance and Slippery Indoor Environment Using Afocal Optical Flow Sensor Based Sensor Fusion
by Dong-Hoon Yi, Tae-Jae Lee and Dong-Il “Dan” Cho
Sensors 2018, 18(1), 171; https://0-doi-org.brum.beds.ac.uk/10.3390/s18010171 - 10 Jan 2018
Cited by 14 | Viewed by 4633
Abstract
In this paper, a new localization system utilizing afocal optical flow sensor (AOFS)-based sensor fusion is proposed for indoor service robots in low-luminance and slippery environments, where conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low-luminance environment, it is hard to conduct conventional feature extraction and matching for localization. Instead, the interior space structure was assessed from an image together with the robot orientation. To enhance the appearance of image boundaries, a rolling guidance filter was applied after histogram equalization. The proposed system was developed to be operable on a low-cost processor and was implemented on a consumer robot. Experiments were conducted under a low illumination condition of 0.1 lx in a carpeted environment. The robot traversed a 1.5 × 2.0 m rectangular trajectory 20 times. When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error was 0.8 m and the maximum orientation error was within 1.0°. Full article
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
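
The image-enhancement step described above can be sketched with OpenCV as follows; the rolling guidance filter lives in the contrib module cv2.ximgproc (opencv-contrib-python), and the parameter values are placeholders rather than those tuned in the paper.

```python
import cv2

def enhance_low_light_frame(gray_frame):
    """Histogram equalization followed by a rolling guidance filter to make
    region boundaries stand out in a very dark frame.

    gray_frame must be an 8-bit single-channel image; parameter values are
    illustrative only.
    """
    eq = cv2.equalizeHist(gray_frame)
    return cv2.ximgproc.rollingGuidanceFilter(eq, d=9, sigmaColor=25,
                                              sigmaSpace=3, numOfIter=4)

# Hypothetical usage on a 0.1 lx frame loaded as grayscale:
# frame = cv2.imread("dark_corridor.png", cv2.IMREAD_GRAYSCALE)
# enhanced = enhance_low_light_frame(frame)
```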

2772 KiB  
Article
Real-Time Indoor Scene Description for the Visually Impaired Using Autoencoder Fusion Strategies with Visible Cameras
by Salim Malek, Farid Melgani, Mohamed Lamine Mekhalfi and Yakoub Bazi
Sensors 2017, 17(11), 2641; https://0-doi-org.brum.beds.ac.uk/10.3390/s17112641 - 16 Nov 2017
Cited by 12 | Viewed by 3775
Abstract
This paper describes three coarse image description strategies, which are meant to promote a rough perception of surrounding objects for visually impaired individuals, with application to indoor spaces. The described algorithms operate on images grabbed by the user by means of a chest-mounted camera, and provide as output a list of objects that likely exist in the user's surroundings across the indoor scene. In this regard, first, different colour, texture, and shape-based feature extractors are generated, followed by a feature learning step by means of AutoEncoder (AE) models. Second, the produced features are fused and fed into a multilabel classifier in order to list the potential objects. The conducted experiments point out that fusing a set of AE-learned features yields higher classification rates than using the features individually. Furthermore, compared with reference works, our method (i) yields higher classification accuracies and (ii) runs at least four times faster, which enables a potential full real-time application. Full article
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
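
A compact sketch of the fusion wiring, assuming PyTorch and arbitrary feature dimensions: one small autoencoder per feature type, concatenated latent codes, and a sigmoid multilabel head. The training loops, the exact architectures and the paper's fusion strategy are omitted; every dimension and name here is an assumption.

```python
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    """One small autoencoder per hand-crafted feature type (colour, texture, shape)."""
    def __init__(self, in_dim, code_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
        self.dec = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def fuse_and_classify(feature_blocks, n_objects):
    """Encode each feature block with its own AE, concatenate the latent codes
    and feed them to a multilabel (sigmoid) classifier. Only the wiring is
    shown; all modules are untrained."""
    aes = [TinyAE(f.shape[1]) for f in feature_blocks]
    codes = [ae(f)[1] for ae, f in zip(aes, feature_blocks)]
    fused = torch.cat(codes, dim=1)
    classifier = nn.Sequential(nn.Linear(fused.shape[1], n_objects), nn.Sigmoid())
    return classifier(fused)   # per-object presence probabilities

# Hypothetical usage with random stand-ins for colour/texture/shape features:
# probs = fuse_and_classify([torch.randn(8, 96), torch.randn(8, 59), torch.randn(8, 64)], n_objects=15)
```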

4508 KiB  
Article
Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices
by Jin-Chun Piao and Shin-Dug Kim
Sensors 2017, 17(11), 2567; https://0-doi-org.brum.beds.ac.uk/10.3390/s17112567 - 07 Nov 2017
Cited by 26 | Viewed by 8214
Abstract
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and as a next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications on mobile devices. First, the SLAM system is implemented based on a visual–inertial odometry method that combines data from a mobile device camera and an inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of the keyframe trajectory is approximately 0.0617 m on the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when the different levels of the adaptive policy are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of the performance improvement achieved by the proposed method. Full article
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
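
The adaptive execution module can be pictured as a simple dispatcher. The sketch below is schematic only: the `vio` and `fast_vo` objects, their `process` method and the thresholds are hypothetical, and the paper's actual decision policy is not reproduced.

```python
class AdaptiveTracker:
    """Switch between full visual-inertial odometry and a cheaper optical-flow
    visual odometry depending on the current motion dynamics."""

    def __init__(self, vio, fast_vo, gyro_thresh=0.3, accel_thresh=0.5):
        self.vio, self.fast_vo = vio, fast_vo          # placeholder estimator objects
        self.gyro_thresh, self.accel_thresh = gyro_thresh, accel_thresh

    def track(self, frame, gyro_norm, accel_norm):
        # High angular rate or acceleration: spend the budget on full VIO to keep
        # scale and bias estimates consistent; otherwise use the fast optical-flow path.
        if gyro_norm > self.gyro_thresh or accel_norm > self.accel_thresh:
            return self.vio.process(frame)
        return self.fast_vo.process(frame)
```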

5455 KiB  
Article
Line-Constrained Camera Location Estimation in Multi-Image Stereomatching
by Simon Donné, Bart Goossens and Wilfried Philips
Sensors 2017, 17(9), 1939; https://0-doi-org.brum.beds.ac.uk/10.3390/s17091939 - 23 Aug 2017
Viewed by 3750
Abstract
Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid—we assume a linear trajectory here) and use this information to compute accurate depth estimates. However, they require the locations of each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for the two images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach is a valid alternative to sparse techniques, while still executing in a reasonable time on a graphics card due to its highly parallelizable nature. Full article
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
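
To make the disparity-baseline relation concrete, the toy sketch below recovers camera offsets along a line from dense disparities by alternating least squares on the model d ≈ f · t · ρ (ρ being inverse depth). It is only an illustration of the idea of estimating locations from dense correspondences and ignores noise handling, the scale-ambiguity resolution and the GPU pipeline of the paper.

```python
import numpy as np

def estimate_positions(disparities, focal=1.0, n_iter=20):
    """Alternating least squares for camera offsets along a line.

    disparities : (J, I) array; disparities[j, i] is the disparity of pixel i
    between the reference view and view j. Under a linear trajectory the toy
    model is d[j, i] ~= focal * t[j] * rho[i], with t the camera offsets and
    rho the inverse depths. The offsets are recovered only up to scale.
    """
    D = np.asarray(disparities, dtype=float)
    J, _ = D.shape
    t = np.linspace(0.0, 1.0, J)                         # initial guess for offsets
    for _ in range(n_iter):
        rho = (t @ D) / (focal * (t @ t) + 1e-12)        # fix t, solve rho per pixel
        t = (D @ rho) / (focal * (rho @ rho) + 1e-12)    # fix rho, solve t per view
    t -= t[0]                                            # anchor the reference camera at 0
    return t, rho

# Synthetic check: 5 cameras, 100 pixels (noise-free)
rng = np.random.default_rng(0)
true_t, true_rho = np.array([0, .1, .25, .4, .6]), rng.uniform(0.2, 1.0, 100)
t_est, _ = estimate_positions(np.outer(true_t, true_rho))
print(np.allclose(t_est / t_est[-1] * true_t[-1], true_t, atol=1e-6))   # True (up to scale)
```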

4591 KiB  
Article
A Visual-Based Approach for Indoor Radio Map Construction Using Smartphones
by Tao Liu, Xing Zhang, Qingquan Li and Zhixiang Fang
Sensors 2017, 17(8), 1790; https://0-doi-org.brum.beds.ac.uk/10.3390/s17081790 - 04 Aug 2017
Cited by 21 | Viewed by 4927
Abstract
Localization of users in indoor spaces is a common issue in many applications. Among various technologies, Wi-Fi fingerprinting based localization has attracted much attention, since it can be easily deployed using existing off-the-shelf mobile devices and wireless networks. However, the collection of the Wi-Fi radio map is quite labor-intensive, which limits its potential for large-scale application. In this paper, a visual-based approach is proposed for the construction of a radio map in anonymous indoor environments. This approach collects multi-sensor data (e.g., Wi-Fi signals, video frames, inertial readings) while people are walking in indoor environments with smartphones in their hands. Then, it spatially recovers the trajectories of the people by using both visual and inertial information. Finally, it estimates the location of fingerprints from the trajectories and constructs a Wi-Fi radio map. Experimental results show that the average location error of the fingerprints is about 0.53 m. A weighted k-nearest neighbor method is also used to evaluate the constructed radio map. The average localization error is about 3.2 m, indicating that the quality of the constructed radio map is at the same level as those constructed by site surveying. However, this approach greatly reduces the human labor cost, which increases the potential for applying it to large indoor environments. Full article
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
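
The weighted k-nearest-neighbour evaluation mentioned above is a standard estimator; a minimal sketch, with made-up RSSI values and no missing-AP handling, looks like this.

```python
import numpy as np

def wknn_locate(query_rssi, fingerprint_rssi, fingerprint_xy, k=3):
    """Weighted k-nearest-neighbour localization on a Wi-Fi radio map.

    query_rssi       : (A,) RSSI vector observed online (A access points).
    fingerprint_rssi : (F, A) RSSI vectors stored in the radio map.
    fingerprint_xy   : (F, 2) positions of the stored fingerprints.
    """
    dist = np.linalg.norm(fingerprint_rssi - query_rssi, axis=1)
    nearest = np.argsort(dist)[:k]
    weights = 1.0 / (dist[nearest] + 1e-6)       # closer fingerprints weigh more
    weights /= weights.sum()
    return weights @ fingerprint_xy[nearest]

# Hypothetical example: 4 fingerprints over 3 access points
radio_map = np.array([[-40, -60, -70], [-45, -55, -72], [-60, -50, -65], [-65, -48, -60]], float)
positions = np.array([[0, 0], [0, 2], [2, 0], [2, 2]], float)
print(wknn_locate(np.array([-44, -56, -71.0]), radio_map, positions, k=2))
```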

12109 KiB  
Article
Build a Robust Learning Feature Descriptor by Using a New Image Visualization Method for Indoor Scenario Recognition
by Jichao Jiao, Xin Wang and Zhongliang Deng
Sensors 2017, 17(7), 1569; https://0-doi-org.brum.beds.ac.uk/10.3390/s17071569 - 04 Jul 2017
Cited by 6 | Viewed by 3856
Abstract
In order to recognize indoor scenarios, we extract image features for detecting objects; however, computers can make unexpected mistakes. After visualizing the histogram of oriented gradients (HOG) features, we find that the world through the eyes of a computer is indeed different from human eyes, which helps researchers see the reasons that cause a computer to make errors. Additionally, according to the visualization, we notice that the HOG features can capture rich texture information. However, a large amount of background interference is also introduced. In order to enhance the robustness of the HOG feature, we propose an improved method for suppressing the background interference. On the basis of the original HOG feature, we introduce principal component analysis (PCA) to extract the principal components of the image colour information. Then, a new hybrid feature descriptor, named HOG–PCA (HOGP), is constructed by deeply fusing these two features. Finally, the HOGP is compared to the state-of-the-art HOG feature descriptor in four scenes under different illumination. In the simulation and experimental tests, the qualitative and quantitative assessments indicate that the visualized images of the HOGP feature are close to the observations obtained by human eyes, which is better than the original HOG feature for object detection. Furthermore, the runtime of our proposed algorithm is hardly increased in comparison to the classic HOG feature. Full article
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
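
A loose sketch of a HOG-plus-colour-PCA descriptor, assuming scikit-image and scikit-learn. The colour part here uses the principal axes and variances of the pixel colour distribution as a stand-in for "principal components of the image colour information", and the plain concatenation is not the paper's deep fusion.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA

def hogp_like_descriptor(rgb_image):
    """HOG vector from the grey image concatenated with a compact colour
    statistic (principal axes and variances of the pixel colours).
    A rough illustration, not the paper's HOGP descriptor.
    """
    grey = rgb_image.mean(axis=2)
    hog_part = hog(grey, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    colour = rgb_image.reshape(-1, 3).astype(float)
    pca = PCA(n_components=3).fit(colour)
    colour_part = np.concatenate([pca.explained_variance_, pca.components_.ravel()])
    # normalise both parts before concatenation so neither dominates
    hog_part = hog_part / (np.linalg.norm(hog_part) + 1e-12)
    colour_part = colour_part / (np.linalg.norm(colour_part) + 1e-12)
    return np.concatenate([hog_part, colour_part])

# Hypothetical usage on a 64x64 RGB image:
# descriptor = hogp_like_descriptor(np.random.randint(0, 255, (64, 64, 3)))
```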

3208 KiB  
Article
A New Calibration Method for Commercial RGB-D Sensors
by Walid Darwish, Shenjun Tang, Wenbin Li and Wu Chen
Sensors 2017, 17(6), 1204; https://0-doi-org.brum.beds.ac.uk/10.3390/s17061204 - 24 May 2017
Cited by 34 | Viewed by 8398
Abstract
Commercial RGB-D sensors such as the Kinect and Structure Sensor have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high-quality 3D is required, e.g., 3D building models of centimeter-level accuracy, accurate and reliable calibration of these sensors is required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges. Full article
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)
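
Independently of the paper's structured-light-based model, a generic depth error model can be fitted as a polynomial correction from calibration samples. The sketch below uses hypothetical numbers purely for illustration and is not the calibration method proposed in the paper.

```python
import numpy as np

def fit_depth_error_model(measured_depth, reference_depth, degree=2):
    """Fit a simple polynomial depth-correction model
        d_corrected = c0 + c1 * d + c2 * d**2 + ...
    from sensor depths against reference depths (e.g., a target at surveyed
    distances). A generic error model for illustration only.
    """
    coeffs = np.polyfit(measured_depth, reference_depth, degree)
    return np.poly1d(coeffs)

# Hypothetical calibration samples (metres): the sensor over-estimates far range
measured  = np.array([0.8, 1.2, 1.6, 2.1, 2.7, 3.4, 4.2])
reference = np.array([0.8, 1.19, 1.57, 2.04, 2.58, 3.20, 3.88])
correct = fit_depth_error_model(measured, reference)
print(correct(3.0))   # corrected depth for a raw 3.0 m reading
```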
