
Mobile Multi-Sensors in Positioning, Navigation, and Mapping Applications (Volume II)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Navigation and Positioning".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 21282

Special Issue Editor


Dr. Sameh Nassar
Guest Editor
Mobile Multi-Sensor Systems Research Group, University of Calgary, Calgary, AB T2N 1N4, Canada
Interests: multisensor systems; signal processing; error modeling and optimal estimation

Special Issue Information

Dear Colleagues,

Following the success of the previous Special Issue “Mobile Multi-Sensors in Positioning, Navigation, and Mapping Applications” (https://0-www-mdpi-com.brum.beds.ac.uk/journal/sensors/special_issues/MMSensors), we are pleased to announce the next in the series, entitled “Mobile Multi-Sensors in Positioning, Navigation, and Mapping Applications II”.

Sensors have always been at the core of any system used in positioning, navigation, and mapping. Mobile sensing in particular is the main component of such systems in land, airborne, and marine applications. In recent decades, it has become standard practice in mobile systems to integrate different sensors that complement each other, thereby adding capabilities to the overall system. These sensors include GNSS receivers, inertial sensors (accelerometers and gyroscopes), magnetometers, compasses, odometers, vision-based sensors, LiDAR, scanners, etc. Although sensor integration has improved overall system performance, it has also introduced multiple challenges due to the added system complexity. This has led researchers to investigate aspects such as sensor synchronization, data fusion, signal processing, sensor error models, integration schemes, and optimal estimation techniques. Moreover, with advances in sensor technology, sensor costs and sizes have decreased. This has come at the price of larger sensor errors, which has again motivated researchers to investigate approaches to overcome this issue.

Therefore, the main objective of this Special Issue is to feature the current advances related to mobile multi-sensors in positioning, navigation, and mapping applications. Invited original research contributions can cover a wide range of topics, including but not limited to:

  • Sensor calibration and evaluation;
  • Signal processing techniques;
  • Sensor data fusion;
  • Land, airborne, and marine applications;
  • Sensor stochastic error models;
  • Robotics;
  • Autonomous driving;
  • Optimal estimation techniques;
  • Autonomous underwater vehicles (AUV);
  • Micro-electromechanical systems (MEMS);
  • Indoor positioning and navigation;
  • Unmanned aerial vehicle (UAV) applications;
  • Multi-sensor systems in challenging environments;
  • Remote sensing applications;
  • Pipeline surveying and monitoring;
  • Vision-aided navigation.

Dr. Sameh Nassar
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (10 papers)


Research

15 pages, 2952 KiB  
Article
Estimating BDS-3 Satellite Differential Code Biases with the Single-Frequency Uncombined PPP Model
by Jizhong Wu, Shan Gao and Dongchen Li
Sensors 2023, 23(18), 7900; https://0-doi-org.brum.beds.ac.uk/10.3390/s23187900 - 15 Sep 2023
Viewed by 568
Abstract
Differential Code Bias (DCB) is a crucial systematic error in satellite positioning and ionospheric modeling. This study aims to estimate the BeiDou-3 global navigation satellite system (BDS-3) satellite DCBs by using the single-frequency (SF) uncombined Precise Point Positioning (PPP) model. The experiment utilized BDS-3 B1 observations collected from 25 International GNSS Service (IGS) stations located at various latitudes during March 2023. The results reveal that the accuracy of the B1I-B3I DCB estimates derived from a single receiver exhibits latitude dependence. Stations in low-latitude regions show considerable variability in the root mean square (RMS) of absolute offsets for satellite DCB estimation, covering a wide range of values. In contrast, mid- to high-latitude stations demonstrate a more consistent pattern with relatively stable RMS values. Moreover, stations situated in the Northern Hemisphere display a higher level of consistency in the RMS values than those in the Southern Hemisphere. When incorporating estimates from all 25 stations, the RMS of the absolute offsets in satellite DCB estimation consistently remained below 0.8 ns. Notably, after excluding 8 low-latitude stations and utilizing data from the remaining 17 stations, the RMS of absolute offsets decreased to below 0.63 ns. These results underscore the importance of incorporating a sufficient number of mid- and high-latitude stations to mitigate the effects of ionospheric variability when utilizing SF observations for satellite DCB estimation. Full article
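The paper's headline metric is the RMS of the absolute offsets between estimated and reference satellite DCBs. A minimal sketch of that metric (the PRNs and DCB values below are made up for illustration, not data from the study):

```python
import math

def rms_absolute_offset(estimated_dcbs, reference_dcbs):
    """RMS of offsets between estimated and reference satellite DCBs (ns).

    Both arguments are dicts mapping satellite PRN to a DCB value in
    nanoseconds (illustrative data, not from the paper)."""
    offsets = [estimated_dcbs[prn] - reference_dcbs[prn] for prn in estimated_dcbs]
    return math.sqrt(sum(o * o for o in offsets) / len(offsets))

# Hypothetical B1I-B3I DCB estimates vs. a reference product, in ns
est = {"C19": 4.12, "C20": -1.35, "C21": 2.08}
ref = {"C19": 4.50, "C20": -1.10, "C21": 2.30}
print(round(rms_absolute_offset(est, ref), 3))  # ~0.292 ns
```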

18 pages, 5165 KiB  
Article
A Simultaneous Localization and Mapping System Using the Iterative Error State Kalman Filter Judgment Algorithm for Global Navigation Satellite System
by Bo You, Guangjin Zhong, Chen Chen, Jiayu Li and Ersi Ma
Sensors 2023, 23(13), 6000; https://0-doi-org.brum.beds.ac.uk/10.3390/s23136000 - 28 Jun 2023
Viewed by 1549
Abstract
Outdoor autonomous mobile robots heavily rely on GPS data for localization. However, GPS data can be erroneous and signals can be interrupted in highly urbanized areas or areas with incomplete satellite coverage, leading to localization deviations. In this paper, we propose a SLAM (Simultaneous Localization and Mapping) system that combines the IESKF (Iterated Extended Kalman Filter) and a factor graph to address these issues. We perform IESKF filtering on LiDAR and inertial measurement unit (IMU) data at the front-end to achieve a more accurate estimation of local pose and incorporate the resulting laser inertial odometry into the back-end factor graph. Furthermore, we introduce a GPS signal filtering method based on GPS state and confidence to ensure that abnormal GPS data is not used in the back-end processing. In the back-end factor graph, we incorporate loop closure factors, IMU preintegration factors, and processed GPS factors. We conducted comparative experiments using the publicly available KITTI dataset and our own experimental platform to compare the proposed SLAM system with two commonly used SLAM systems: the filter-based SLAM system (FAST-LIO) and the graph optimization-based SLAM system (LIO-SAM). The experimental results demonstrate that the proposed SLAM system outperforms the other systems in terms of localization accuracy, especially in cases of GPS signal interruption. Full article
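The abstract describes filtering GPS data by state and confidence before it enters the back-end factor graph. A minimal sketch of such a gate; the status strings, thresholds, and position-jump check are hypothetical stand-ins, since the paper's actual criteria are not given here:

```python
def accept_gps_fix(status, covariance, position, last_position,
                   max_cov=4.0, max_jump=15.0):
    """Gate a GPS fix before adding it as a graph factor (illustrative thresholds).

    status: receiver fix status; covariance: reported horizontal variance (m^2);
    positions: (x, y) tuples in a local frame."""
    if status not in ("FIX", "RTK"):           # reject invalid/no-fix solutions
        return False
    if covariance > max_cov:                   # reject low-confidence fixes
        return False
    dx = position[0] - last_position[0]
    dy = position[1] - last_position[1]
    if (dx * dx + dy * dy) ** 0.5 > max_jump:  # reject implausible jumps
        return False
    return True

print(accept_gps_fix("FIX", 1.2, (10.0, 5.0), (9.5, 5.1)))  # True
print(accept_gps_fix("FIX", 9.0, (10.0, 5.0), (9.5, 5.1)))  # False
```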

26 pages, 18816 KiB  
Article
Navigation of an Autonomous Spraying Robot for Orchard Operations Using LiDAR for Tree Trunk Detection
by Ailian Jiang and Tofael Ahamed
Sensors 2023, 23(10), 4808; https://0-doi-org.brum.beds.ac.uk/10.3390/s23104808 - 16 May 2023
Cited by 6 | Viewed by 2613
Abstract
Traditional Japanese orchards control the growth height of fruit trees for the convenience of farmers, which is unfavorable to the operation of medium- and large-sized machinery. A compact, safe, and stable spraying system could offer a solution for orchard automation. Due to the complex orchard environment, the dense tree canopy not only obstructs the GNSS signal but also creates low-light conditions that may impair object recognition by ordinary RGB cameras. To overcome these disadvantages, this study selected LiDAR as a single sensor to achieve a prototype robot navigation system. In this study, density-based spatial clustering of applications with noise (DBSCAN), K-means, and random sample consensus (RANSAC) machine learning algorithms were used to plan the robot navigation path in a facilitated artificial-tree-based orchard system. Pure pursuit tracking and an incremental proportional–integral–derivative (PID) strategy were used to calculate the vehicle steering angle. In field tests on a concrete road, a grass field, and a facilitated artificial-tree-based orchard, with several formations of left and right turns evaluated separately, the position root mean square error (RMSE) of the vehicle was as follows: on the concrete road, 12.0 cm for right turns and 11.6 cm for left turns; on grass, 12.6 cm for right turns and 15.5 cm for left turns; and in the orchard, 13.8 cm for right turns and 11.4 cm for left turns. The vehicle was able to calculate the path in real time based on the positions of the objects, operate safely, and complete the task of pesticide spraying. Full article
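Pure pursuit tracking and incremental PID are both standard controllers, so a compact sketch may help; the wheelbase, lookahead distance, and gains below are illustrative, not the paper's values:

```python
import math

def pure_pursuit_steering(alpha, lookahead, wheelbase):
    """Pure pursuit steering angle (rad). alpha is the heading error to the
    lookahead point, lookahead the distance to it, wheelbase the vehicle's."""
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

class IncrementalPID:
    """Incremental (velocity-form) PID: each step adds a delta to the last command."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0  # e[k-1]
        self.e2 = 0.0  # e[k-2]
        self.u = 0.0   # accumulated command

    def step(self, e):
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        self.u += du
        return self.u

# Hypothetical numbers: 1.0 m wheelbase, 2.5 m lookahead, 0.2 rad heading error
print(round(pure_pursuit_steering(0.2, 2.5, 1.0), 4))  # ~0.1576 rad
```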

17 pages, 3594 KiB  
Article
UnVELO: Unsupervised Vision-Enhanced LiDAR Odometry with Online Correction
by Bin Li, Haifeng Ye, Sihan Fu, Xiaojin Gong and Zhiyu Xiang
Sensors 2023, 23(8), 3967; https://0-doi-org.brum.beds.ac.uk/10.3390/s23083967 - 13 Apr 2023
Viewed by 1525
Abstract
Due to the complementary characteristics of visual and LiDAR information, these two modalities have been fused to facilitate many vision tasks. However, current studies of learning-based odometries mainly focus on either the visual or LiDAR modality, leaving visual–LiDAR odometries (VLOs) under-explored. This work proposes a new method to implement an unsupervised VLO, which adopts a LiDAR-dominant scheme to fuse the two modalities. We, therefore, refer to it as unsupervised vision-enhanced LiDAR odometry (UnVELO). It converts 3D LiDAR points into a dense vertex map via spherical projection and generates a vertex color map by colorizing each vertex with visual information. Further, a point-to-plane distance-based geometric loss and a photometric-error-based visual loss are, respectively, placed on locally planar regions and cluttered regions. Last, but not least, we designed an online pose-correction module to refine the pose predicted by the trained UnVELO during test time. In contrast to the vision-dominant fusion scheme adopted in most previous VLOs, our LiDAR-dominant method adopts the dense representations for both modalities, which facilitates the visual–LiDAR fusion. Besides, our method uses the accurate LiDAR measurements instead of the predicted noisy dense depth maps, which significantly improves the robustness to illumination variations, as well as the efficiency of the online pose correction. The experiments on the KITTI and DSEC datasets showed that our method outperformed previous two-frame-based learning methods. It was also competitive with hybrid methods that integrate a global optimization on multiple or all frames. Full article
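The spherical projection that turns a LiDAR sweep into a dense vertex map can be sketched as follows; the image size and vertical field of view are typical 64-beam scanner values chosen for illustration, not necessarily those used by UnVELO:

```python
import math

def spherical_project(x, y, z, width=1024, height=64, fov_up=3.0, fov_down=-25.0):
    """Project a 3D LiDAR point onto a (row, col) cell of a dense vertex map.

    fov_up/fov_down are the sensor's vertical field-of-view bounds in degrees
    (illustrative values for a 64-beam scanner)."""
    fov_r = math.radians(fov_up) - math.radians(fov_down)  # total vertical FOV
    depth = math.sqrt(x * x + y * y + z * z)
    yaw = math.atan2(y, x)                                 # azimuth
    pitch = math.asin(z / depth)                           # elevation
    col = int((0.5 * (1.0 - yaw / math.pi)) * width) % width
    row = int((1.0 - (pitch - math.radians(fov_down)) / fov_r) * height)
    row = min(max(row, 0), height - 1)                     # clamp to image
    return row, col, depth

# A point straight ahead on the sensor's horizontal plane
print(spherical_project(1.0, 0.0, 0.0))
```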

19 pages, 9815 KiB  
Article
SLAM and 3D Semantic Reconstruction Based on the Fusion of Lidar and Monocular Vision
by Lu Lou, Yitian Li, Qi Zhang and Hanbing Wei
Sensors 2023, 23(3), 1502; https://0-doi-org.brum.beds.ac.uk/10.3390/s23031502 - 29 Jan 2023
Cited by 7 | Viewed by 4415
Abstract
Monocular cameras and Lidar are the two most commonly used sensors in unmanned vehicles. Combining the advantages of the two is the current research focus of SLAM and semantic analysis. In this paper, we propose an improved SLAM and semantic reconstruction method based on the fusion of Lidar and monocular vision. We fuse the semantic image with the low-resolution 3D Lidar point clouds and generate dense semantic depth maps. Through visual odometry, ORB feature points with depth information are selected to improve positioning accuracy. Our method uses parallel threads to aggregate 3D semantic point clouds while positioning the unmanned vehicle. Experiments are conducted on the public CityScapes and KITTI Visual Odometry datasets, and the results show that, compared with ORB-SLAM2 and DynaSLAM, our positioning error is reduced by approximately 87%; compared with DEMO and DVL-SLAM, our positioning accuracy improves in most sequences. Our 3D reconstruction quality is better than that of DynaSLAM and contains semantic information. The proposed method has engineering application value in the field of unmanned vehicles. Full article
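The first fusion step, pairing each projected LiDAR depth with the semantic label at the same pixel, can be illustrated with a toy example; the zero-means-no-return convention and the tiny grids are purely illustrative:

```python
def colorize_depth_with_semantics(depth, labels):
    """Pair each pixel's (sparse) LiDAR depth with its semantic label,
    skipping pixels with no depth return. depth and labels are equal-sized
    2D lists; a depth of 0 means no LiDAR return (illustrative convention)."""
    points = []
    for r, row in enumerate(depth):
        for c, d in enumerate(row):
            if d > 0:
                points.append((r, c, d, labels[r][c]))
    return points

# Toy 2x2 sparse depth map (m) and matching semantic label image
depth = [[0.0, 5.2], [7.1, 0.0]]
labels = [["road", "car"], ["road", "sky"]]
print(colorize_depth_with_semantics(depth, labels))
```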

27 pages, 7382 KiB  
Article
An Indoor Location-Based Augmented Reality Framework
by Jehn-Ruey Jiang and Hanas Subakti
Sensors 2023, 23(3), 1370; https://0-doi-org.brum.beds.ac.uk/10.3390/s23031370 - 26 Jan 2023
Cited by 4 | Viewed by 2791
Abstract
This paper proposes an indoor location-based augmented reality framework (ILARF) for the development of indoor augmented-reality (AR) systems. ILARF integrates an indoor localization unit (ILU), a secure context-aware message exchange unit (SCAMEU), and an AR visualization and interaction unit (ARVIU). The ILU runs on a mobile device such as a smartphone and utilizes visible markers (e.g., images and text), invisible markers (e.g., Wi-Fi, Bluetooth Low Energy, and NFC signals), and device sensors (e.g., accelerometers, gyroscopes, and magnetometers) to determine the device location and direction. The SCAMEU utilizes a message queuing telemetry transport (MQTT) server to exchange ambient sensor data (e.g., temperature, light, and humidity readings) and user data (e.g., user location and user speed) for context-awareness. The unit also employs a web server to manage user profiles and settings. The ARVIU uses AR creation tools to handle user interaction and display context-aware information in appropriate areas of the device’s screen. One prototype AR app for use in gyms, Gym Augmented Reality (GAR), was developed based on ILARF. Users can register their profiles and configure settings when using GAR to visit a gym. Then, GAR can help users locate appropriate gym equipment based on their workout programs or favorite exercise specified in their profiles. GAR provides instructions on how to properly use the gym equipment and also makes it possible for gym users to socialize with each other, which may motivate them to go to the gym regularly. GAR is compared with other related AR systems. The comparison shows that GAR is superior to others by virtue of its use of ILARF; specifically, it provides more information, such as user location and direction, and has more desirable properties, such as secure communication and a 3D graphical user interface. Full article

17 pages, 5629 KiB  
Article
Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation
by Haileleol Tibebu, Varuna De-Silva, Corentin Artaud, Rafael Pina and Xiyu Shi
Sensors 2022, 22(20), 8021; https://0-doi-org.brum.beds.ac.uk/10.3390/s22208021 - 20 Oct 2022
Cited by 5 | Viewed by 2400
Abstract
Recent deep learning frameworks have drawn strong research interest for ego-motion estimation, as they demonstrate superior results compared to geometric approaches. However, due to the lack of multimodal datasets, most of these studies have primarily focused on single-sensor-based estimation. To overcome this challenge, we collect a unique multimodal dataset named LboroAV2 using multiple sensors, including a camera, light detection and ranging (LiDAR), ultrasound, an e-compass, and a rotary encoder. We also propose an end-to-end deep learning architecture for the fusion of RGB images and LiDAR laser scan data for odometry applications. The proposed method contains a convolutional encoder, a compressed representation, and a recurrent neural network. Besides feature extraction and outlier rejection, the convolutional encoder produces a compressed representation, which is used to visualise the network’s learning process and to pass useful sequential information. The recurrent neural network uses this compressed sequential data to learn the relationship between consecutive time steps. We use the Loughborough autonomous vehicle (LboroAV2) and the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Visual Odometry (VO) datasets to experiment and evaluate our results. In addition to visualising the network’s learning process, our approach provides superior results compared to other similar methods. The code for the proposed architecture is released on GitHub and is publicly accessible. Full article

16 pages, 1415 KiB  
Article
Expectation–Maximization-Based Simultaneous Localization and Mapping for Millimeter-Wave Communication Systems
by Lu Chen, Zhigang Chen and Zhi Ji
Sensors 2022, 22(18), 6941; https://0-doi-org.brum.beds.ac.uk/10.3390/s22186941 - 14 Sep 2022
Cited by 3 | Viewed by 1163
Abstract
In this paper, we propose a novel expectation–maximization-based simultaneous localization and mapping (SLAM) algorithm for millimeter-wave (mmW) communication systems. By fully exploiting the geometric relationship among the access point (AP) positions, the angle difference of arrival (ADOA) from the APs, and the mobile terminal (MT) position, and regarding the MT positions as the latent variable of the AP positions, the proposed algorithm first reformulates the SLAM problem as maximum likelihood joint estimation over both the AP positions and the MT positions in a latent variable model. Then, it employs a feasible stochastic approximation expectation–maximization (EM) method to estimate the AP positions. Specifically, stochastic Monte Carlo approximation is employed to obtain the intractable expectation of the MT positions’ posterior probability in the E-step, and gradient descent-based optimization is used as a viable substitute for estimating the high-dimensional AP positions in the M-step. Further, it estimates the MT positions and constructs the indoor map based on the estimated AP topology. Due to the efficient processing capability of the stochastic approximation EM method, and by taking full advantage of the abundant spatial information in the crowd-sourced ADOA data, the proposed method can achieve better positioning and mapping performance than the existing geometry-based mmW SLAM method, which usually has to compromise between computational complexity and estimation performance. The simulation results confirm the effectiveness of the proposed algorithm. Full article

24 pages, 15014 KiB  
Article
Adaptive Data Fusion Method of Multisensors Based on LSTM-GWFA Hybrid Model for Tracking Dynamic Targets
by Hao Yin, Dongguang Li, Yue Wang and Xiaotong Hong
Sensors 2022, 22(15), 5800; https://0-doi-org.brum.beds.ac.uk/10.3390/s22155800 - 3 Aug 2022
Cited by 2 | Viewed by 1894
Abstract
In preparation for the battlefields of the future, using unmanned aerial vehicles (UAVs) loaded with multisensors to track dynamic targets has become a research focus in recent years. Building on air combat tracking scenarios and traditional multisensor weighted fusion algorithms, this paper presents a new data fusion method that uses a global Kalman filter and an LSTM to predict measurement variance, with an adaptive truncation mechanism to determine the optimal weights. The method considers the temporal correlation of the measured data and introduces a detection mechanism for target maneuvering. Numerical simulation results show that the accuracy of the algorithm can be improved by about 66% after training on 871 flight data records. Based on a mature refitted civil wing UAV platform, field experiments verified that the data fusion method for tracking dynamic targets is effective and stable and has generalization ability. Full article
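Classical weighted multisensor fusion, which the paper's adaptive method builds on, weights each sensor by the inverse of its measurement variance. A minimal sketch with hypothetical range measurements:

```python
def fuse_measurements(values, variances):
    """Inverse-variance weighted fusion of redundant sensor measurements.

    Each sensor's weight is proportional to 1/variance, so more reliable
    sensors dominate. Returns the fused value and its (reduced) variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, values)) / total
    return fused, 1.0 / total

# Two hypothetical range measurements of the same target (m), with variances
fused, var = fuse_measurements([100.0, 104.0], [1.0, 4.0])
print(fused, var)  # the more certain sensor pulls the estimate toward 100
```

The fused variance is always smaller than the smallest input variance, which is why adding sensors helps even when some are noisy; the paper's contribution is making the weights adaptive instead of fixed.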

17 pages, 2824 KiB  
Article
An Adaptive Filtering Method for Cooperative Localization in Leader–Follower AUVs
by Lin Zhao, Hong-Yi Dai, Lin Lang and Ming Zhang
Sensors 2022, 22(13), 5016; https://0-doi-org.brum.beds.ac.uk/10.3390/s22135016 - 2 Jul 2022
Cited by 5 | Viewed by 1467
Abstract
In the complex and variable marine environment, the navigation and localization of autonomous underwater vehicles (AUVs) are very important and challenging. When the conventional Kalman filter (KF) is applied to the cooperative localization of leader–follower AUVs, outliers in the sensor observations have a substantial adverse effect on the localization accuracy of the AUVs. Meanwhile, inaccurate noise covariance matrices may result in significant estimation errors. In this paper, we propose an improved Sage–Husa adaptive extended Kalman filter (improved SHAEKF) for the cooperative localization of multiple AUVs. Firstly, measurement anomalies were evaluated by calculating Chi-square test statistics based on the innovation. The detection threshold was determined according to the confidence level of the Chi-square test, and Chi-square test statistics exceeding the threshold were regarded as measurement abnormalities. When measurement anomalies occurred, the Sage–Husa adaptive extended Kalman filter algorithm was improved by suboptimal maximum a posteriori estimation using weighted exponential fading memory, and the measurement noise covariance matrix was adjusted online. Numerical simulation of leader–follower multi-AUV cooperative localization verified the effectiveness of the improved SHAEKF and demonstrated that the average root mean square and the average standard deviation of the localization errors based on the improved SHAEKF were significantly reduced in the presence of measurement abnormalities. Full article
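The Chi-square innovation test described in the abstract can be sketched in a few lines; the innovation values, covariance, and the 2-DOF/95% threshold of 5.991 below are illustrative, not the paper's configuration:

```python
def innovation_chi_square(innovation, S_inv):
    """Chi-square statistic d = v^T S^{-1} v for an innovation vector v,
    with the inverse innovation covariance S_inv given as a nested list."""
    n = len(innovation)
    return sum(innovation[i] * S_inv[i][j] * innovation[j]
               for i in range(n) for j in range(n))

def is_outlier(innovation, S_inv, threshold):
    """Flag a measurement as abnormal when its statistic exceeds the
    Chi-square threshold for the chosen confidence level (e.g. 5.991
    for 2 degrees of freedom at 95%)."""
    return innovation_chi_square(innovation, S_inv) > threshold

# Hypothetical 2D innovations with an identity innovation covariance
print(is_outlier([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]], 5.991))  # False
print(is_outlier([3.0, 3.0], [[1.0, 0.0], [0.0, 1.0]], 5.991))  # True
```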
