Mobile Robots Navigation II

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Mechanical Engineering".

Deadline for manuscript submissions: closed (30 November 2021) | Viewed by 24781

Special Issue Editors


Guest Editor
System Engineering and Automation Department, Miguel Hernandez University, 03202 Elche (Alicante), Spain
Interests: visual appearance; mobile robots; parallel robots; mapping

Guest Editor
System Engineering and Automation Department, Miguel Hernandez University, 03202 Elche, Spain
Interests: computer vision; omnidirectional imaging; appearance descriptors; image processing; mobile robotics; environment modeling; visual localization

Special Issue Information

Dear Colleagues,

Navigation is one of the fundamental abilities that mobile robots must be endowed with, so that they can carry out high-level tasks autonomously in any a priori unknown environment. Traditionally, this problem has been addressed efficiently through a sequence of actions. First, the surroundings of the robot are perceived and interpreted. Second, the robot estimates its position and orientation within the environment. With this information, a trajectory towards the target points is planned, and the vehicle is guided along this trajectory, often including a reactive action that considers obstacles, possible changes or interactions with the environment or with the users.

The robot may be equipped with different kinds of sensors to perceive the environment, such as laser rangefinders, visual systems or RGB-D platforms, and each of them presents particularities that must be considered when designing the navigation algorithms. Additionally, broadly speaking, navigation can be addressed from two different perspectives, depending on whether an initial model or map of the environment is available. In cases where no map is initially available, an exploration step can be included to achieve complete knowledge of the environment. Integrated exploration systems consider all these issues jointly: they perform trajectory planning and control while a model of the environment is built and the robot estimates its position and orientation within it. Additionally, in some cases, robots must navigate through social environments, so it is necessary to implement interfaces that permit an easy and intuitive interaction between the robot and the users.

The aim of this Special Issue is to present current frameworks in these fields and, in general, approaches to any problem related to the navigation of mobile robots, whether they move in a plane (3 DOF) or in space (6 DOF). Accordingly, this Special Issue invites contributions on the following topics, although it is not limited to them:

  • Self-localization and odometry;
  • Environment modelling and interpretation;
  • Trajectory planning and motion control;
  • Algorithms and methods for navigation;
  • Data fusion for mobile robot navigation;
  • Deep learning in mobile robot navigation;
  • Vision-based mobile robot navigation;
  • Map-based navigation;
  • Reactive navigation;
  • Collaborative navigation of robots;
  • Interfaces for robot navigation and social interaction;
  • Applications of mobile robot navigation.

Prof. Dr. Oscar Reinoso
Prof. Dr. Luis Payá
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Exploration
  • Path planning
  • Mapping
  • Localization
  • Visual odometry
  • SLAM (simultaneous localization and mapping)
  • Navigation
  • Obstacle avoidance
  • Interaction

Published Papers (8 papers)

Editorial

Jump to: Research

2 pages, 160 KiB  
Editorial
Special Issue on Mobile Robots Navigation II
by Luis Payá and Oscar Reinoso
Appl. Sci. 2023, 13(3), 1567; https://0-doi-org.brum.beds.ac.uk/10.3390/app13031567 - 26 Jan 2023
Viewed by 876
Abstract
Navigation is one of the fundamental abilities that mobile robots must be endowed with, so that they can carry out high-level tasks autonomously in any a priori unknown environment [...] Full article
(This article belongs to the Special Issue Mobile Robots Navigation II)

Research

Jump to: Editorial

21 pages, 4076 KiB  
Article
A Deep Reinforcement Learning Approach for Active SLAM
by Julio A. Placed and José A. Castellanos
Appl. Sci. 2020, 10(23), 8386; https://0-doi-org.brum.beds.ac.uk/10.3390/app10238386 - 25 Nov 2020
Cited by 23 | Viewed by 5811
Abstract
In this paper, we formulate the active SLAM paradigm in terms of model-free Deep Reinforcement Learning, embedding the traditional utility functions based on the Theory of Optimal Experimental Design in rewards, and therefore relaxing the intensive computations of classical approaches. We validate this formulation in a complex simulation environment, using a state-of-the-art deep Q-learning architecture with laser measurements as network inputs. Trained agents become capable not only of learning a policy to navigate and explore in the absence of an environment model, but also of transferring their knowledge to previously unseen maps, which is a key requirement in robotic exploration.
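A minimal sketch of the kind of reward shaping the abstract describes, with an optimal-experimental-design utility (here the D-optimality criterion, the geometric mean of the covariance eigenvalues) turned into a reward. The function names, step cost and covariance values are illustrative, not taken from the paper:

```python
import numpy as np

def d_optimality(cov):
    """D-optimality criterion: exponential of the mean log-eigenvalue of the
    covariance, i.e., the geometric mean of its eigenvalues (a scale-aware
    scalar measure of total uncertainty)."""
    eigvals = np.linalg.eigvalsh(cov)
    return float(np.exp(np.mean(np.log(eigvals))))

def reward(cov_before, cov_after, step_cost=0.01):
    """Hypothetical reward: uncertainty reduction minus a small motion cost."""
    return d_optimality(cov_before) - d_optimality(cov_after) - step_cost

# An action that shrinks the pose covariance earns a positive reward.
c0 = np.diag([4.0, 4.0, 1.0])     # covariance before the action
c1 = np.diag([1.0, 1.0, 0.25])    # covariance after the action
r = reward(c0, c1)
```

In a full agent, a scalar like `r` would accompany the laser observations fed to a deep Q-network; here it only illustrates that reducing uncertainty yields a positive signal.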

28 pages, 3944 KiB  
Article
Creating Incremental Models of Indoor Environments through Omnidirectional Imaging
by Vicente Román, Luis Payá, Sergio Cebollada and Óscar Reinoso
Appl. Sci. 2020, 10(18), 6480; https://0-doi-org.brum.beds.ac.uk/10.3390/app10186480 - 17 Sep 2020
Cited by 6 | Viewed by 1948
Abstract
In this work, an incremental clustering approach to obtain compact hierarchical models of an environment is developed and evaluated. This process is performed using an omnidirectional vision sensor as the only source of information. The method is structured in two loop closure levels. First, the Node Level Loop Closure process selects the candidate nodes with which the new image can close the loop. Second, the Image Level Loop Closure process detects the most similar image and the node with which the current image closes the loop. The algorithm is based on an incremental clustering framework and leads to a topological model where the images of each zone tend to be clustered in different nodes. In addition, the method evaluates when two nodes are similar enough to be merged into a single node, and when a group of connected images is different enough from the others to constitute a new node. To perform the process, omnidirectional images are described with global appearance techniques in order to obtain robust descriptors. The use of such techniques in mapping and localization algorithms is less widespread than local feature description, so this work also evaluates their efficiency in clustering and mapping tasks. The proposed framework is tested with three different public datasets, captured by an omnidirectional vision system mounted on a robot while it traversed three different buildings. The framework is able to build the model incrementally, while the robot explores an unknown environment. Some relevant parameters of the algorithm adapt their values as the robot captures new visual information, to fully exploit the feature space, and the model is updated and/or modified as a consequence. The experimental section shows the robustness and efficiency of the method, comparing it with a batch spectral clustering algorithm.
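The node-level side of such incremental clustering can be pictured with this toy sketch; the distance threshold, descriptor dimensionality and running-mean update are assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

class IncrementalMap:
    """Minimal sketch of incremental topological clustering: each node keeps
    a running mean of the global-appearance descriptors assigned to it."""
    def __init__(self, new_node_dist=0.5):
        self.new_node_dist = new_node_dist   # hypothetical threshold
        self.centroids, self.counts = [], []

    def add(self, desc):
        desc = np.asarray(desc, float)
        if self.centroids:
            dists = [np.linalg.norm(desc - c) for c in self.centroids]
            i = int(np.argmin(dists))
            if dists[i] < self.new_node_dist:
                # Close enough: join node i and update its running mean.
                self.counts[i] += 1
                self.centroids[i] += (desc - self.centroids[i]) / self.counts[i]
                return i
        # Too different from every node: open a new one.
        self.centroids.append(desc.copy())
        self.counts.append(1)
        return len(self.centroids) - 1

m = IncrementalMap()
ids = [m.add(v) for v in ([0.0, 0.0], [0.1, 0.0], [3.0, 3.0])]
```

The first two (similar) descriptors land in node 0, while the third opens a new node; node merging, as the abstract notes, would be handled by a symmetric check between centroids.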

22 pages, 13628 KiB  
Article
An Adaptive Target Tracking Algorithm Based on EKF for AUV with Unknown Non-Gaussian Process Noise
by Lingyan Dong, Hongli Xu, Xisheng Feng, Xiaojun Han and Chuang Yu
Appl. Sci. 2020, 10(10), 3413; https://0-doi-org.brum.beds.ac.uk/10.3390/app10103413 - 15 May 2020
Cited by 9 | Viewed by 2110
Abstract
An adaptive target tracking method based on the extended Kalman filter (TT-EKF) is proposed to simultaneously estimate the state of an Autonomous Underwater Vehicle (AUV) and a mobile recovery system (MRS) with unknown non-Gaussian process noise during the homing process. In the application scenario of this article, the process noise includes the measurement noise of the AUV heading and forward speed and the estimation error of the MRS heading and forward speed. The accuracy of the process noise covariance matrix (PNCM) affects the state estimation performance of the TT-EKF, so a variational Bayesian algorithm is applied to estimate the process noise statistics. We use a Gaussian mixture distribution to model the non-Gaussian noisy forward speed of the AUV and MRS, and a von Mises distribution to model their noisy headings. The variational Bayesian algorithm estimates the parameters of these distributions, from which the PNCM can be calculated. The prediction error of the TT-EKF is compensated online by a multilayer neural network, which is trained online during the target tracking process. Matlab simulations and experimental data analysis verify the effectiveness of the proposed method.
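For readers unfamiliar with the filter at the core of the TT-EKF, the following is a generic predict/update cycle, shown linear for brevity; the paper's variational-Bayes noise estimation and neural-network compensation are omitted, and all matrices below are illustrative:

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a (here linear) Kalman filter. In the
    TT-EKF, the PNCM Q would come from the variational-Bayes estimate of
    the process noise statistics."""
    # Predict: propagate state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fuse the measurement z via the Kalman gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# 1-D constant-position example: state [pos], direct position measurement.
x, P = np.array([0.0]), np.array([[1.0]])
F = H = np.array([[1.0]])
Q, R = np.array([[0.01]]), np.array([[0.1]])
x, P = kf_step(x, P, np.array([1.0]), F, H, Q, R)
```

After one measurement of 1.0, the estimate moves most of the way toward it and the covariance shrinks; an EKF replaces `F` and `H` with Jacobians of nonlinear models.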

20 pages, 7167 KiB  
Article
Dynamic-DSO: Direct Sparse Odometry Using Objects Semantic Information for Dynamic Environments
by Chao Sheng, Shuguo Pan, Wang Gao, Yong Tan and Tao Zhao
Appl. Sci. 2020, 10(4), 1467; https://0-doi-org.brum.beds.ac.uk/10.3390/app10041467 - 21 Feb 2020
Cited by 21 | Viewed by 4521
Abstract
Traditional Simultaneous Localization and Mapping (SLAM) (with loop closure detection) and Visual Odometry (VO) (without loop closure detection) are based on the static environment assumption. When working in dynamic environments, they perform poorly whether using direct methods or indirect (feature-point) methods. In this paper, Dynamic-DSO, a semantic monocular direct visual odometry based on DSO (Direct Sparse Odometry), is proposed. The proposed system is implemented entirely with the direct method, unlike most current dynamic systems, which combine the indirect method with deep learning. Firstly, convolutional neural networks (CNNs) are applied to the original RGB image to generate pixel-wise semantic information of dynamic objects. Then, based on this semantic information, dynamic candidate points are filtered out during keyframe candidate point extraction; only static candidate points are retained in the tracking and optimization module, to achieve accurate camera pose estimation in dynamic environments. The photometric errors contributed by projection points in the dynamic regions of subsequent frames are removed from the overall photometric error in the pyramid motion tracking model. Finally, a sliding window optimization that neglects the photometric error in the dynamic regions of each keyframe is applied to obtain the precise camera pose. Experiments on the public TUM dynamic dataset and a modified EuRoC dataset show that the positioning accuracy and robustness of the proposed Dynamic-DSO are significantly higher than those of the state-of-the-art direct method in dynamic environments, and the semi-dense cloud map constructed by Dynamic-DSO is clearer and more detailed.
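The candidate-point filtering step can be sketched as follows; the (row, col) indexing convention, mask shape and array layout are assumptions for illustration, not taken from the paper's implementation:

```python
import numpy as np

def filter_candidates(points, dynamic_mask):
    """Keep only candidate points (row, col) that fall OUTSIDE the
    pixel-wise dynamic-object mask produced by the segmentation CNN."""
    pts = np.asarray(points)
    keep = ~dynamic_mask[pts[:, 0], pts[:, 1]]
    return pts[keep]

# Toy 4x4 image where the top-left 2x2 block is a "dynamic object".
mask = np.zeros((4, 4), dtype=bool)
mask[0:2, 0:2] = True
cands = [(0, 0), (1, 1), (3, 3)]
static = filter_candidates(cands, mask)
```

Only the point at (3, 3) survives; in Dynamic-DSO, the surviving static points would then feed the tracking and sliding-window optimization.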

13 pages, 7121 KiB  
Article
Augmenting GPS with Geolocated Fiducials to Improve Accuracy for Mobile Robot Applications
by Robert Ross and Rahinul Hoque
Appl. Sci. 2020, 10(1), 146; https://0-doi-org.brum.beds.ac.uk/10.3390/app10010146 - 23 Dec 2019
Cited by 18 | Viewed by 3295
Abstract
In recent decades, Global Positioning Systems (GPS) have become a ubiquitous tool to support navigation. Traditional GPS has an error on the order of 10–15 m, which is adequate for many applications (e.g., vehicle navigation) but lacks the accuracy required by many robotics applications. In this paper we describe a technique, FAGPS (Fiducial-Augmented Global Positioning System), which periodically uses fiducial markers to lower the GPS drift and hence, for a short period, obtain a more accurate GPS determination. We describe results from simulations and from field testing in open-sky environments, where horizontal GPS accuracy was improved from a twice-distance-root-mean-square (2DRMS) error of 5.5 m to 2.99 m for a period of up to 30 min.
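The core correction idea amounts to estimating a local GPS bias from a surveyed fiducial and subtracting it from subsequent fixes. This toy sketch (2-D ENU coordinates, a perfect relative observation of the marker) is an illustration, not the paper's FAGPS implementation:

```python
import numpy as np

def gps_offset(fiducial_enu, robot_gps_enu, fiducial_rel):
    """Estimate the local GPS bias: the fiducial's surveyed position minus
    its observed position relative to the robot gives the robot's true
    position; subtracting the raw GPS fix yields the bias to add to
    later fixes (while the drift remains roughly constant)."""
    true_robot = np.asarray(fiducial_enu, float) - np.asarray(fiducial_rel, float)
    return true_robot - np.asarray(robot_gps_enu, float)

# Surveyed marker at ENU (100, 50); the camera sees it 2 m east and
# 1 m north of the robot; the raw GPS fix reads (101, 52).
bias = gps_offset([100.0, 50.0], [101.0, 52.0], [2.0, 1.0])
corrected = np.array([101.0, 52.0]) + bias    # corrected fix: (98, 49)
```

The correction stays valid only while the slowly varying GPS drift persists, which matches the limited time window reported in the abstract.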

21 pages, 1800 KiB  
Article
A Research on the Simultaneous Localization Method in the Process of Autonomous Underwater Vehicle Homing with Unknown Varying Measurement Error
by Lingyan Dong, Hongli Xu, Xisheng Feng, Xiaojun Han and Chuang Yu
Appl. Sci. 2019, 9(21), 4614; https://0-doi-org.brum.beds.ac.uk/10.3390/app9214614 - 30 Oct 2019
Cited by 4 | Viewed by 2825
Abstract
We propose an acoustic-based framework for automatically homing an Autonomous Underwater Vehicle (AUV) to a fixed docking station (F-DS) and a mobile docking station (M-DS). The proposed framework contains a simultaneous localization method for the AUV and the docking station (DS) and a guidance method based on the position information. The standard Simultaneous Localization and Mapping (SLAM) algorithm cannot be applied directly, as the statistical characteristics of the measurement error of the observation system are unknown. To solve this problem, we propose a data pre-processing method. Firstly, the measurement error data of the acoustic sensor are collected. Then, we propose a Variational Auto-Encoder (VAE)-based Gaussian mixture model (GMM) to estimate the statistical characteristics of the measurement error. Finally, we propose a support vector regression (SVR) algorithm to fit the non-linear relationship between the statistical characteristics of the measurement error and the corresponding working distance. We adopt a guidance method based on line-of-sight (LOS) and a path tracking method for homing the AUV to the F-DS and M-DS. Lake experimental data are used to verify the localization performance with the estimated statistical characteristics of the measurement error.
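The last step, regressing error statistics against working distance, can be illustrated with synthetic data; here a quadratic least-squares fit stands in for the paper's SVR regressor, and all numbers are invented for illustration:

```python
import numpy as np

# Synthetic calibration data: the standard deviation of the acoustic
# range error grows with working distance (values are illustrative).
dist = np.array([10.0, 20.0, 40.0, 80.0])     # working distance (m)
sigma = np.array([0.05, 0.09, 0.20, 0.45])    # error std (m)

# Quadratic least-squares fit as a simple stand-in for SVR.
coef = np.polyfit(dist, sigma, deg=2)
predict_sigma = np.poly1d(coef)

# Interpolate the expected error std at an unseen distance.
s60 = float(predict_sigma(60.0))
```

A regressor like this lets the filter inflate its measurement covariance as the vehicle operates farther from the docking station, which is the role the SVR plays in the proposed pre-processing pipeline.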

23 pages, 4829 KiB  
Article
Intelligent Path Recognition against Image Noises for Vision Guidance of Automated Guided Vehicles in a Complex Workspace
by Xing Wu, Chao Sun, Ting Zou, Haining Xiao, Longjun Wang and Jingjing Zhai
Appl. Sci. 2019, 9(19), 4108; https://0-doi-org.brum.beds.ac.uk/10.3390/app9194108 - 01 Oct 2019
Cited by 16 | Viewed by 2408
Abstract
Applying computer vision to mobile robot navigation has been studied for more than two decades. The most challenging problems for a vision-based AGV running in a complex workspace involve non-uniform illumination, sight-line occlusion or stripe damage, which inevitably result in incomplete or deformed path images as well as many fake artifacts. Neither fixed-threshold methods nor iterative optimal-threshold methods can obtain a suitable threshold for path images acquired under all conditions. It remains an open question how to estimate the model parameters of guide paths accurately by distinguishing the actual path pixels from under- or over-segmentation error points. Hence, an intelligent path recognition approach based on KPCA–BPNN and IPSO–BTGWP is proposed here, in order to resist the interferences from the complex workspace. Firstly, curvilinear paths were recognized from their straight counterparts by means of a path classifier based on KPCA–BPNN. Secondly, an approximation method based on BTGWP was developed to replace the curve with a series of piecewise lines (a polyline path). Thirdly, a robust path estimation method based on IPSO was proposed to figure out the path parameters from a set of path pixels surrounded by noise points. Experimental results showed that our approach can effectively improve the accuracy and reliability of a low-cost vision-guidance system for AGVs in a complex workspace.
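The curve-to-polyline step can be illustrated with a simple recursive split at the point of maximum deviation from the chord; this is a generic stand-in with an invented tolerance, not the BTGWP procedure from the paper:

```python
import numpy as np

def polyline(points, tol=0.05):
    """Recursively split a sampled curve at its farthest point from the
    chord joining its endpoints, until every segment deviates by less
    than `tol`. Returns the polyline vertices."""
    pts = np.asarray(points, float)
    a, b = pts[0], pts[-1]
    ab = b - a
    n = np.linalg.norm(ab)
    rel = pts - a
    # Perpendicular distance of every sample to the chord a-b.
    d = np.abs(ab[0] * rel[:, 1] - ab[1] * rel[:, 0]) / n
    i = int(np.argmax(d))
    if d[i] <= tol:
        return [tuple(a), tuple(b)]
    # Split at the worst point and recurse on both halves.
    return polyline(pts[:i + 1], tol)[:-1] + polyline(pts[i:], tol)

# A shallow sine arc gets approximated by two line segments.
t = np.linspace(0, np.pi, 50)
arc = np.column_stack([t, 0.2 * np.sin(t)])
verts = polyline(arc, tol=0.05)
```

Here the arc splits once, near its apex, giving three vertices; a tighter tolerance would produce more segments, trading model size for fidelity.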
