Mobile Robots: Navigation, Control and Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (29 February 2024) | Viewed by 44810

Special Issue Editors


Dr. Stephen Monk
Guest Editor
Department of Engineering, Lancaster University, Lancaster LA1 4YW, UK
Interests: radiological instrumentation; robotics in nuclear decommissioning environments

Dr. David Cheneler
Guest Editor
Department of Engineering, Lancaster University, Lancaster LA1 4YW, UK
Interests: sensors; nuclear instrumentation; sub-aquatic robotics

Special Issue Information

Dear Colleagues,

Navigation is one of the main challenges in robotics. It draws on a wide range of technologies and strategies: sensing, positioning, mapping, approaching, tracking, formation, control, communication, human interfaces, learning, and more.

The aim of this Special Issue is to contribute to the state of the art and to present current applications of robot navigation. The Guest Editors invite papers related to the following topics; the list is not exhaustive:

  1. Development of robotics and sensors designed to be utilized within any radiological environment.
  2. Perception; stand-alone and cooperative approaches; simultaneous localization and mapping (SLAM).
  3. Map-based, landmark-based, and beacon-based navigation (2D and 3D).
  4. Data fusion for mobile robot navigation.
  5. Wireless sensor networks for mobile robot navigation.
  6. Network control systems.
  7. Robot formation and tracking.
  8. Adaptive robot navigation and control.
  9. Biologically inspired robot navigation.
  10. Path planning.
  11. Applications of mobile robot navigation.
  12. Genetic algorithms for mobile robot navigation.
  13. Tracking algorithms.

Dr. Stephen Monk
Dr. David Cheneler
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (23 papers)


Research

Jump to: Review

24 pages, 3750 KiB  
Article
Robust Tracking Control of Wheeled Mobile Robot Based on Differential Flatness and Sliding Active Disturbance Rejection Control: Simulations and Experiments
by Amine Abadi, Amani Ayeb, Moussa Labbadi, David Fofi, Toufik Bakir and Hassen Mekki
Sensors 2024, 24(9), 2849; https://0-doi-org.brum.beds.ac.uk/10.3390/s24092849 - 29 Apr 2024
Viewed by 191
Abstract
This paper proposes a robust tracking control method for wheeled mobile robot (WMR) against uncertainties, including wind disturbances and slipping. Through the application of the differential flatness methodology, the under-actuated WMR model is transformed into a linear canonical form, simplifying the design of a stabilizing feedback controller. To handle uncertainties from wheel slip and wind disturbances, the proposed feedback controller uses sliding mode control (SMC). However, increased uncertainties lead to chattering in the SMC approach due to higher control inputs. To mitigate this, a boundary layer around the switching surface is introduced, implementing a continuous control law to reduce chattering. Although increasing the boundary layer thickness reduces chattering, it may compromise the robustness achieved by SMC. To address this challenge, an active disturbance rejection control (ADRC) is integrated with boundary layer sliding mode control. ADRC estimates lumped uncertainties via an extended state observer and eliminates them within the feedback loop. This combined feedback control method aims to achieve practical control and robust tracking performance. Stability properties of the closed-loop system are established using the Lyapunov theory. Finally, simulations and experimental results are conducted to compare and evaluate the efficiency of the proposed robust tracking controller against other existing control methods. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
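To make the combination above concrete, here is a minimal Python sketch of a boundary-layer sliding mode law paired with a linear extended state observer, applied to a generic double integrator. It illustrates the general technique only, not the authors' controller: the plant, gains, boundary-layer width, observer poles, and disturbance are all assumptions chosen for the example.

```python
# Minimal sketch (not the paper's WMR controller): boundary-layer sliding mode control
# plus a linear extended state observer (ESO) on a double integrator x_ddot = u + d.
# All parameters and the disturbance below are assumptions made for illustration.
import numpy as np

def sat(s, phi):
    """Continuous replacement for sign(s) inside a boundary layer of width phi."""
    return np.clip(s / phi, -1.0, 1.0)

def simulate(T=10.0, dt=0.001):
    lam, k, phi = 2.0, 5.0, 0.05                 # surface slope, switching gain, boundary layer
    b1, b2, b3 = 60.0, 1200.0, 8000.0            # ESO gains (triple pole at -20, assumed)
    x, v = 1.0, 0.0                              # true tracking errors (position, velocity)
    z1, z2, z3 = 0.0, 0.0, 0.0                   # ESO estimates of x, v and the lumped disturbance
    for i in range(int(T / dt)):
        t = i * dt
        d = 0.5 * np.sin(2.0 * t)                # unknown lumped disturbance (slip/wind stand-in)
        s = v + lam * x                          # sliding variable
        u = -lam * v - k * sat(s, phi) - z3      # continuous switching term + disturbance rejection
        x += v * dt                              # plant update (forward Euler)
        v += (u + d) * dt
        e = z1 - x                               # ESO update: drives (z1, z2, z3) toward (x, v, d)
        z1 += (z2 - b1 * e) * dt
        z2 += (z3 + u - b2 * e) * dt
        z3 += (-b3 * e) * dt
    return x, v, z3

if __name__ == "__main__":
    print(simulate())   # position and velocity errors should end up near zero
```

Replacing the discontinuous sign term with sat(s/phi) is what suppresses chattering, while the observer's disturbance estimate z3 compensates for the robustness given up by widening the boundary layer, which is the trade-off the abstract describes.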
20 pages, 4228 KiB  
Article
Enhancing Robot Task Planning and Execution through Multi-Layer Large Language Models
by Zhirong Luan, Yujun Lai, Rundong Huang, Shuanghao Bai, Yuedi Zhang, Haoran Zhang and Qian Wang
Sensors 2024, 24(5), 1687; https://0-doi-org.brum.beds.ac.uk/10.3390/s24051687 - 06 Mar 2024
Viewed by 1075
Abstract
Large language models have found utility in the domain of robot task planning and task decomposition. Nevertheless, the direct application of these models for instructing robots in task execution is not without its challenges. Limitations arise in handling more intricate tasks, encountering difficulties in effective interaction with the environment, and facing constraints in the practical executability of machine control instructions directly generated by such models. In response to these challenges, this research advocates for the implementation of a multi-layer large language model to augment a robot’s proficiency in handling complex tasks. The proposed model facilitates a meticulous layer-by-layer decomposition of tasks through the integration of multiple large language models, with the overarching goal of enhancing the accuracy of task planning. Within the task decomposition process, a visual language model is introduced as a sensor for environment perception. The outcomes of this perception process are subsequently assimilated into the large language model, thereby amalgamating the task objectives with environmental information. This integration, in turn, results in the generation of robot motion planning tailored to the specific characteristics of the current environment. Furthermore, to enhance the executability of task planning outputs from the large language model, a semantic alignment method is introduced. This method aligns task planning descriptions with the functional requirements of robot motion, thereby refining the overall compatibility and coherence of the generated instructions. To validate the efficacy of the proposed approach, an experimental platform is established utilizing an intelligent unmanned vehicle. This platform serves as a means to empirically verify the proficiency of the multi-layer large language model in addressing the intricate challenges associated with both robot task planning and execution. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
21 pages, 9648 KiB  
Article
Kinematic Analysis and Application to Control Logic Development for RHex Robot Locomotion
by Piotr Burzyński, Ewa Pawłuszewicz, Leszek Ambroziak and Suryansh Sharma
Sensors 2024, 24(5), 1636; https://0-doi-org.brum.beds.ac.uk/10.3390/s24051636 - 02 Mar 2024
Viewed by 584
Abstract
This study explores the kinematic model of the popular RHex hexapod robots which have garnered considerable interest for their locomotion capabilities. We study the influence of tripod trajectory parameters on the RHex robot’s movement, aiming to craft a precise kinematic model that enhances walking mechanisms. This model serves as a cornerstone for refining robot control strategies, enabling tailored performance enhancements or specific motion patterns. Validation conducted on a bespoke test bed confirms the model’s efficacy in predicting spatial movements, albeit with minor deviations due to motor load variations and control system dynamics. In particular, the derived kinematic framework offers valuable insights for advancing control logic, particularly navigating in flat terrains, thereby broadening the RHex robot’s application spectrum. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
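As a flavour of the tripod trajectory parameters studied above, the sketch below implements a piecewise-linear leg angle profile of the kind commonly used on RHex-type hexapods: a slow stance sweep followed by a fast recirculation, with the two leg tripods half a cycle out of phase. The period, duty factor, and sweep angle are placeholder values, not parameters identified in the paper.

```python
# Minimal sketch (not the paper's model): a slow/fast leg rotation profile for a
# RHex-type hexapod; all gait parameters below are assumptions for illustration.
import numpy as np

def leg_angle(t, period=1.0, duty=0.55, sweep=np.deg2rad(50.0)):
    """Leg angle in radians at time t; 0 rad points straight down, one full turn per period."""
    tau = (t % period) / period                     # normalized phase in [0, 1)
    if tau < duty:                                  # slow (stance) phase: sweep through `sweep`
        return -sweep / 2.0 + (tau / duty) * sweep
    # fast (flight) phase: recirculate through the remaining 2*pi - sweep
    return sweep / 2.0 + ((tau - duty) / (1.0 - duty)) * (2.0 * np.pi - sweep)

def tripod_angles(t, period=1.0):
    """The two tripods run half a period out of phase, giving the alternating-tripod gait."""
    return leg_angle(t, period), leg_angle(t + period / 2.0, period)

if __name__ == "__main__":
    for t in np.linspace(0.0, 1.0, 5):
        a, b = tripod_angles(t)
        print(f"t={t:.2f}s  tripod A={np.rad2deg(a):7.1f} deg  tripod B={np.rad2deg(b):7.1f} deg")
```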
22 pages, 6443 KiB  
Article
Path Following for Autonomous Mobile Robots with Deep Reinforcement Learning
by Yu Cao, Kan Ni, Takahiro Kawaguchi and Seiji Hashimoto
Sensors 2024, 24(2), 561; https://0-doi-org.brum.beds.ac.uk/10.3390/s24020561 - 16 Jan 2024
Cited by 1 | Viewed by 1078
Abstract
Autonomous mobile robots have become integral to daily life, providing crucial services across diverse domains. This paper focuses on path following, a fundamental technology and critical element in achieving autonomous mobility. Existing methods predominantly address tracking through steering control, neglecting velocity control or relying on path-specific reference velocities, thereby constraining their generality. In this paper, we propose a novel approach that integrates the conventional pure pursuit algorithm with deep reinforcement learning for a nonholonomic mobile robot. Our methodology employs pure pursuit for steering control and utilizes the soft actor-critic algorithm to train a velocity control strategy within randomly generated path environments. Through simulation and experimental validation, our approach exhibits notable advancements in path convergence and adaptive velocity adjustments to accommodate paths with varying curvatures. Furthermore, this method holds the potential for broader applicability to vehicles adhering to nonholonomic constraints beyond the specific model examined in this paper. In summary, our study contributes to the progression of autonomous mobility by harmonizing conventional algorithms with cutting-edge deep reinforcement learning techniques, enhancing the robustness of path following. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
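The geometric half of the hybrid controller above is the classical pure pursuit law, sketched below for a unicycle-type robot with a fixed forward velocity. The lookahead distance, velocity, and example path are assumptions; in the paper the velocity is instead produced by a learned soft actor-critic policy.

```python
# Minimal sketch (not the paper's implementation): classical pure pursuit steering.
# Lookahead distance, velocity, and path are assumptions for illustration.
import numpy as np

def pure_pursuit_omega(pose, path, lookahead=0.8, v=0.5):
    """Angular velocity command steering a unicycle toward a lookahead point on the path.

    pose: (x, y, yaw); path: (N, 2) array of waypoints ordered along the route.
    """
    x, y, yaw = pose
    d = np.hypot(path[:, 0] - x, path[:, 1] - y)
    idx = np.argmax(d >= lookahead) if np.any(d >= lookahead) else len(path) - 1
    dx, dy = path[idx, 0] - x, path[idx, 1] - y
    yl = -np.sin(yaw) * dx + np.cos(yaw) * dy      # lateral offset of the goal in the robot frame
    ld = max(np.hypot(dx, dy), 1e-6)
    kappa = 2.0 * yl / ld**2                       # pure pursuit curvature command
    return v * kappa                               # omega = v * kappa for a unicycle

if __name__ == "__main__":
    s = np.linspace(0.0, 5.0, 100)
    path = np.column_stack((s, np.sin(s)))         # a wavy example path
    print(pure_pursuit_omega((0.0, -0.5, 0.0), path))
```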
19 pages, 2886 KiB  
Article
Dynamic Output Feedback and Neural Network Control of a Non-Holonomic Mobile Robot
by Manuel Cardona and Fernando E. Serrano
Sensors 2023, 23(15), 6875; https://0-doi-org.brum.beds.ac.uk/10.3390/s23156875 - 03 Aug 2023
Viewed by 1458
Abstract
This paper presents the design and synthesis of a dynamic output feedback neural network controller for a non-holonomic mobile robot. First, the dynamic model of a non-holonomic mobile robot is presented, in which these constraints are considered for the mathematical derivation of a feasible representation of this kind of robot. Then, two control strategies are provided based on kinematic control for this kind of robot. The first control strategy is based on driftless control; this means that considering that the velocity vector of the mobile robot is orthogonal to its restriction, a dynamic output feedback and neural network controller is designed so that the control action would be zero only when the velocity of the mobile robot is zero. The Lyapunov stability theorem is implemented in order to find a suitable control law. Then, another control strategy is designed for trajectory-tracking purposes, in which similar to the driftless controller, a kinematic control scheme is provided that is suitable to implement in more sophisticated hardware. In both control strategies, a dynamic control law is provided along with a feedforward neural network controller, so in this way, by the Lyapunov theory, the stability and convergence to the origin of the mobile robot position coordinates are ensured. Finally, two numerical experiments are presented in order to validate the theoretical results synthesized in this research study. Discussions and conclusions are provided in order to analyze the results found in this research study. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
20 pages, 19409 KiB  
Article
A LiDAR-Camera-Inertial-GNSS Apparatus for 3D Multimodal Dataset Collection in Woodland Scenarios
by Mário P. Cristóvão, David Portugal, Afonso E. Carvalho and João Filipe Ferreira
Sensors 2023, 23(15), 6676; https://0-doi-org.brum.beds.ac.uk/10.3390/s23156676 - 26 Jul 2023
Cited by 1 | Viewed by 1489
Abstract
Forestry operations have become of great importance for a sustainable environment in the past few decades due to the increasing toll induced by rural abandonment and climate change. Robotics presents a promising solution to this problem; however, gathering the necessary data for developing and testing algorithms can be challenging. This work proposes a portable multi-sensor apparatus to collect relevant data generated by several onboard sensors. The system incorporates Laser Imaging, Detection and Ranging (LiDAR), two stereo depth cameras and a dedicated inertial measurement unit (IMU) to obtain environmental data, which are coupled with an Android app that extracts Global Navigation Satellite System (GNSS) information from a cell phone. Acquired data can then be used for a myriad of perception-based applications, such as localization and mapping, flammable material identification, traversability analysis, path planning and/or semantic segmentation toward (semi-)automated forestry actuation. The modular architecture proposed is built on Robot Operating System (ROS) and Docker to facilitate data collection and the upgradability of the system. We validate the apparatus’ effectiveness in collecting datasets and its flexibility by carrying out a case study for Simultaneous Localization and Mapping (SLAM) in a challenging woodland environment, thus allowing us to compare fundamentally different methods with the multimodal system proposed. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
23 pages, 12102 KiB  
Article
Commercial Optical and Acoustic Sensor Performances under Varying Turbidity, Illumination, and Target Distances
by Fredrik Fogh Sørensen, Christian Mai, Ole Marius Olsen, Jesper Liniger and Simon Pedersen
Sensors 2023, 23(14), 6575; https://0-doi-org.brum.beds.ac.uk/10.3390/s23146575 - 21 Jul 2023
Cited by 4 | Viewed by 1007
Abstract
Acoustic and optical sensing modalities represent two of the primary sensing methods within underwater environments, and both have been researched extensively in previous works. Acoustic sensing is the premier method due to its high transmissivity in water and its relative immunity to environmental factors such as water clarity. Optical sensing is, however, valuable for many operational and inspection tasks and is readily understood by human operators. In this work, we quantify and compare the operational characteristics and environmental effects of turbidity and illumination on two commercial-off-the-shelf sensors and an additional augmented optical method, including: a high-frequency, forward-looking inspection sonar, a stereo camera with built-in stereo depth estimation, and color imaging, where a laser has been added for distance triangulation. The sensors have been compared in a controlled underwater environment with known target objects to ascertain quantitative operation performance, and it is shown that optical stereo depth estimation and laser triangulation operate satisfactorily at low and medium turbidites up to a distance of approximately one meter, with an error below 2 cm and 12 cm, respectively; acoustic measurements are almost completely unaffected up to two meters under high turbidity, with an error below 5 cm. Moreover, the stereo vision algorithm is slightly more robust than laser-line triangulation across turbidity and lighting conditions. Future work will concern the improvement of the stereo reconstruction and laser triangulation by algorithm enhancement and the fusion of the two sensing modalities. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
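The laser-augmented optical method evaluated above rests on simple triangulation geometry. The sketch below shows the textbook parallel-axis case, where depth follows from similar triangles; the focal length, baseline, and pixel offset are example numbers, not the calibration used in the study.

```python
# Minimal sketch (not the study's calibration): single-point laser triangulation with a
# pinhole camera whose optical axis is parallel to the laser beam at a known baseline.
def laser_depth(pixel_offset_px, focal_px, baseline_m):
    """Depth in metres from the laser spot's pixel offset: similar triangles give z = f*b/u."""
    if pixel_offset_px <= 0:
        raise ValueError("the spot must be offset from the principal point")
    return focal_px * baseline_m / pixel_offset_px

if __name__ == "__main__":
    # assumed example: f = 1400 px, baseline = 5 cm, spot 70 px from the principal point -> 1.0 m
    print(laser_depth(70.0, 1400.0, 0.05))
```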
26 pages, 6234 KiB  
Article
RUDE-AL: Roped UGV Deployment Algorithm of an MCDPR for Sinkhole Exploration
by David Orbea, Christyan Cruz Ulloa, Jaime Del Cerro and Antonio Barrientos
Sensors 2023, 23(14), 6487; https://0-doi-org.brum.beds.ac.uk/10.3390/s23146487 - 18 Jul 2023
Viewed by 869
Abstract
The presence of sinkholes has been widely studied due to their potential risk to infrastructure and to the lives of inhabitants and rescuers in urban disaster areas, which is generally addressed in geotechnics and geophysics. In recent years, robotics has gained importance for the inspection and assessment of areas of potential risk for sinkhole formation, as well as for environmental exploration and post-disaster assistance. From the mobile robotics approach, this paper proposes RUDE-AL (Roped UGV DEployment ALgorithm), a methodology for deploying a Mobile Cable-Driven Parallel Robot (MCDPR) composed of four mobile robots and a cable-driven parallel robot (CDPR) for sinkhole exploration tasks and assistance to potential trapped victims. The deployment of the fleet is organized with node-edge formation during the mission’s first stage, positioning itself around the area of interest and acting as anchors for the subsequent release of the cable robot. One of the relevant issues considered in this work is the selection of target points for mobile robots (anchors) considering the constraints of a roped fleet, avoiding the collision of the cables with positive obstacles through a fitting function that maximizes the area covered of the zone to explore and minimizes the cost of the route distance performed by the fleet using genetic algorithms, generating feasible target routes for each mobile robot with a configurable balance between the parameters of the fitness function. The main results show a robust method whose adjustment function is affected by the number of positive obstacles near the area of interest and the shape characteristics of the sinkhole. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
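The fitness function described above balances covered area against route cost. The sketch below illustrates that trade-off with a toy disk-coverage model and, for brevity, plain random search over candidate anchor sets rather than the genetic algorithm; the coverage radius, weights, and sampled zone are all assumptions.

```python
# Minimal sketch (not RUDE-AL): a weighted coverage-versus-distance fitness for placing
# four anchor robots, optimized here by plain random search instead of a GA.
# The coverage model, weights, and zone samples are assumptions for illustration.
import numpy as np

def fitness(anchors, start, zone_pts, radius=3.0, w_cov=1.0, w_dist=0.05):
    """anchors: (k, 2) candidate positions; zone_pts: (M, 2) sample points of the zone to cover."""
    d = np.linalg.norm(zone_pts[:, None, :] - anchors[None, :, :], axis=2)
    coverage = np.mean(d.min(axis=1) <= radius)               # fraction of the zone within range
    route = np.sum(np.linalg.norm(anchors - start, axis=1))   # crude travel cost: radial trips
    return w_cov * coverage - w_dist * route                  # configurable balance of the two terms

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    zone = rng.uniform(0.0, 10.0, size=(400, 2))              # sampled area around the sinkhole
    start = np.array([0.0, 0.0])
    best = max((rng.uniform(0.0, 10.0, size=(4, 2)) for _ in range(500)),
               key=lambda a: fitness(a, start, zone))
    print("best fitness:", fitness(best, start, zone))
```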
37 pages, 9293 KiB  
Article
Gaze Point Tracking Based on a Robotic Body–Head–Eye Coordination Method
by Xingyang Feng, Qingbin Wang, Hua Cong, Yu Zhang and Mianhao Qiu
Sensors 2023, 23(14), 6299; https://0-doi-org.brum.beds.ac.uk/10.3390/s23146299 - 11 Jul 2023
Viewed by 1609
Abstract
When the magnitude of a gaze is too large, human beings change the orientation of their head or body to assist their eyes in tracking targets because saccade alone is insufficient to keep a target at the center region of the retina. To make a robot gaze at targets rapidly and stably (as a human does), it is necessary to design a body–head–eye coordinated motion control strategy. A robot system equipped with eyes and a head is designed in this paper. Gaze point tracking problems are divided into two sub-problems: in situ gaze point tracking and approaching gaze point tracking. In the in situ gaze tracking state, the desired positions of the eye, head and body are calculated on the basis of minimizing resource consumption and maximizing stability. In the approaching gaze point tracking state, the robot is expected to approach the object at a zero angle. In the process of tracking, the three-dimensional (3D) coordinates of the object are obtained by the bionic eye and then converted to the head coordinate system and the mobile robot coordinate system. The desired positions of the head, eyes and body are obtained according to the object’s 3D coordinates. Then, using sophisticated motor control methods, the head, eyes and body are controlled to the desired position. This method avoids the complex process of adjusting control parameters and does not require the design of complex control algorithms. Based on this strategy, in situ gaze point tracking and approaching gaze point tracking experiments are performed by the robot. The experimental results show that body–head–eye coordination gaze point tracking based on the 3D coordinates of an object is feasible. This paper provides a new method that differs from the traditional two-dimensional image-based method for robotic body–head–eye gaze point tracking. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
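Converting the bionic eye's 3D measurement into the head and mobile-base frames, as described above, amounts to chaining homogeneous transforms. The short sketch below shows that chain for made-up mounting offsets and a made-up head pan angle; it is not the robot geometry used in the paper.

```python
# Minimal sketch (not the paper's robot geometry): expressing a point measured in the eye
# frame in the mobile-base frame by chaining 4x4 homogeneous transforms.
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def transform(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

if __name__ == "__main__":
    p_eye = np.array([0.2, 0.0, 1.5, 1.0])                    # target in the eye frame (homogeneous)
    T_head_eye = transform(np.eye(3), [0.05, 0.0, 0.0])       # eye 5 cm off the head centre (assumed)
    T_base_head = transform(rot_z(np.deg2rad(30.0)), [0.0, 0.0, 0.4])  # head panned 30 deg, 40 cm up (assumed)
    p_base = T_base_head @ T_head_eye @ p_eye                 # chain: base <- head <- eye
    print(p_base[:3])
```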
20 pages, 2806 KiB  
Article
A Convex Optimization Approach to Multi-Robot Task Allocation and Path Planning
by Tingjun Lei, Pradeep Chintam, Chaomin Luo, Lantao Liu and Gene Eu Jan
Sensors 2023, 23(11), 5103; https://0-doi-org.brum.beds.ac.uk/10.3390/s23115103 - 26 May 2023
Cited by 6 | Viewed by 1837
Abstract
In real-world applications, multiple robots need to be dynamically deployed to their appropriate locations as teams while the distance cost between robots and goals is minimized, which is known to be an NP-hard problem. In this paper, a new framework of team-based multi-robot task allocation and path planning is developed for robot exploration missions through a convex optimization-based distance optimal model. A new distance optimal model is proposed to minimize the traveled distance between robots and their goals. The proposed framework fuses task decomposition, allocation, local sub-task allocation, and path planning. To begin, multiple robots are firstly divided and clustered into a variety of teams considering interrelation and dependencies of robots, and task decomposition. Secondly, the teams with various arbitrary shape enclosing intercorrelative robots are approximated and relaxed into circles, which are mathematically formulated to convex optimization problems to minimize the distance between teams, as well as between a robot and their goals. Once the robot teams are deployed into their appropriate locations, the robot locations are further refined by a graph-based Delaunay triangulation method. Thirdly, in the team, a self-organizing map-based neural network (SOMNN) paradigm is developed to complete the dynamical sub-task allocation and path planning, in which the robots are dynamically assigned to their nearby goals locally. Simulation and comparison studies demonstrate the proposed hybrid multi-robot task allocation and path planning framework is effective and efficient. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
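At its core, deploying robots to goals while minimizing total travel distance is an assignment problem. The sketch below solves a tiny instance with the Hungarian algorithm as a simple stand-in for the team-based convex-optimization and SOMNN pipeline proposed in the paper; the positions are arbitrary example values.

```python
# Minimal sketch (not the paper's framework): distance-minimizing robot-to-goal assignment
# solved with the Hungarian algorithm; the positions below are arbitrary examples.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

robots = np.array([[0.0, 0.0], [1.0, 4.0], [5.0, 2.0]])
goals = np.array([[4.0, 4.0], [0.5, 3.5], [6.0, 0.0]])

cost = cdist(robots, goals)                  # pairwise Euclidean travel distances
rows, cols = linear_sum_assignment(cost)     # optimal one-to-one assignment
for r, g in zip(rows, cols):
    print(f"robot {r} -> goal {g} (distance {cost[r, g]:.2f})")
print("total distance:", round(cost[rows, cols].sum(), 2))
```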
31 pages, 4504 KiB  
Article
Robot Navigation in Complex Workspaces Employing Harmonic Maps and Adaptive Artificial Potential Fields
by Panagiotis Vlantis, Charalampos P. Bechlioulis and Kostas J. Kyriakopoulos
Sensors 2023, 23(9), 4464; https://0-doi-org.brum.beds.ac.uk/10.3390/s23094464 - 03 May 2023
Cited by 3 | Viewed by 1444
Abstract
In this work, we address the single robot navigation problem within a planar and arbitrarily connected workspace. In particular, we present an algorithm that transforms any static, compact, planar workspace of arbitrary connectedness and shape to a disk, where the navigation problem can be easily solved. Our solution benefits from the fact that it only requires a fine representation of the workspace boundary (i.e., a set of points), which is easily obtained in practice via SLAM. The proposed transformation, combined with a workspace decomposition strategy that reduces the computational complexity, has been exhaustively tested and has shown excellent performance in complex workspaces. A motion control scheme is also provided for the class of non-holonomic robots with unicycle kinematics, which are commonly used in most industrial applications. Moreover, the tuning of the underlying control parameters is rather straightforward as it affects only the shape of the resulted trajectories and not the critical specifications of collision avoidance and convergence to the goal position. Finally, we validate the efficacy of the proposed navigation strategy via extensive simulations and experimental studies. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
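For context, the sketch below shows gradient descent on the classical attractive/repulsive artificial potential field, the simple ancestor of the adaptive fields and workspace-to-disk transformation developed in the paper. The gains, influence radius, and obstacle layout are assumptions, and the well-known local-minimum weakness of this basic form is precisely what the paper's construction is designed to avoid.

```python
# Minimal sketch (not the paper's method): gradient descent on a classical attractive +
# repulsive artificial potential field in 2D. Gains and geometry are assumptions.
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0, step=0.02):
    """Return the next 2D position after one descent step on the combined potential."""
    grad = k_att * (q - goal)                       # gradient of 0.5*k_att*||q - goal||^2
    for obs in obstacles:
        diff = q - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                           # repulsion only inside the influence radius d0
            grad += k_rep * (1.0 / d0 - 1.0 / d) / d**3 * diff
    return q - step * grad

if __name__ == "__main__":
    q, goal = np.array([0.0, 0.0]), np.array([4.0, 3.0])
    obstacles = [np.array([2.0, 1.0])]
    for _ in range(400):
        q = apf_step(q, goal, obstacles)
    print(q)   # should end near the goal unless caught in a local minimum
```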
23 pages, 8743 KiB  
Article
Navigation with Polytopes: A Toolbox for Optimal Path Planning with Polytope Maps and B-spline Curves
by Ngoc Thinh Nguyen, Pranav Tej Gangavarapu, Niklas Fin Kompe, Georg Schildbach and Floris Ernst
Sensors 2023, 23(7), 3532; https://0-doi-org.brum.beds.ac.uk/10.3390/s23073532 - 28 Mar 2023
Cited by 3 | Viewed by 1695
Abstract
To deal with the problem of optimal path planning in 2D space, this paper introduces a new toolbox named “Navigation with Polytopes” and explains the algorithms behind it. The toolbox allows one to create a polytopic map from a standard grid map, search for an optimal corridor, and plan a safe B-spline reference path used for mobile robot navigation. Specifically, the B-spline path is converted into its equivalent Bézier representation via a novel calculation method in order to reduce the conservativeness of the constrained path planning problem. The conversion can handle the differences between the curve intervals and allows for efficient computation. Furthermore, two different constraint formulations used for enforcing a B-spline path to stay within the sequence of connected polytopes are proposed, one with a guaranteed solution. The toolbox was extensively validated through simulations and experiments. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
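The B-spline-to-Bézier conversion mentioned above is convenient because a Bézier segment stays inside the convex hull of its control points, so constraining the control points to a polytope constrains the whole segment. The sketch below evaluates a Bézier segment with De Casteljau's algorithm; the control points are arbitrary, and the toolbox's actual conversion and corridor constraints are not reproduced.

```python
# Minimal sketch (not the toolbox's code): evaluating a Bezier segment of arbitrary degree
# with De Casteljau's algorithm; the control points below are arbitrary examples.
import numpy as np

def de_casteljau(control_points, u):
    """Point on the Bezier curve at parameter u in [0, 1] via repeated linear interpolation."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - u) * pts[:-1] + u * pts[1:]
    return pts[0]

if __name__ == "__main__":
    ctrl = [[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [4.0, 0.5]]   # one cubic segment
    for u in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(u, de_casteljau(ctrl, u))
```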
18 pages, 3732 KiB  
Article
Improving Mobile Robot Maneuver Performance Using Fractional-Order Controller
by Daniel Acosta, Bibiana Fariña, Jonay Toledo and Leopoldo Acosta
Sensors 2023, 23(6), 3191; https://0-doi-org.brum.beds.ac.uk/10.3390/s23063191 - 16 Mar 2023
Viewed by 1098
Abstract
In this paper, the low-level velocity controller of an autonomous vehicle is studied. The performance of the traditional controller used in this kind of system, a PID, is analyzed. This kind of controller cannot follow ramp references without error, so when the reference implies a change in the speed, the vehicle cannot follow the proposed reference, and there is a significant difference between the actual and desired vehicle behaviors. A fractional controller is proposed which changes the ordinary dynamics allowing faster responses for small times, at the cost of slower responses for large times. The idea is to take advantage of this fact to follow fast setpoint changes with a smaller error than that obtained with a classic non-fractional PI controller. Using this controller, the vehicle can follow variable speed references with zero stationary error, significantly reducing the difference between reference and actual vehicle behavior. The paper presents the fractional controller, studies its stability in function of the fractional parameters, designs the controller, and tests its stability. The designed controller is tested on a real prototype, and its behavior is compared to a standard PID controller. The designed fractional PID controller overcomes the results of the standard PID controller. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
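A fractional-order controller replaces the integer-order integral (or derivative) with a fractional one. The sketch below approximates a fractional integral with truncated Grünwald–Letnikov weights inside a PI^λ loop around a toy first-order plant; the gains, the order λ, and the plant are assumptions, not the vehicle model or tuning from the paper.

```python
# Minimal sketch (not the paper's controller or vehicle): a discrete fractional-order PI
# controller using truncated Grunwald-Letnikov weights for the fractional integral.
# All gains, the order lam, and the toy plant are assumptions for illustration.
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights for D^alpha (alpha < 0 gives a fractional integral)."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def simulate(lam=0.8, kp=2.0, ki=3.0, h=0.01, T=5.0):
    n = int(T / h)
    w = gl_weights(-lam, n)                      # weights of the fractional integral I^lam = D^(-lam)
    e_hist = np.zeros(n)
    y = 0.0                                      # toy first-order plant: y_dot = -y + u
    for k in range(n):
        r = 1.0 if k * h < 2.5 else 2.0          # step reference, then a setpoint change
        e = r - y
        e_hist[k] = e
        frac_int = h**lam * np.dot(w[:k + 1], e_hist[k::-1])
        u = kp * e + ki * frac_int               # PI^lambda control law
        y += h * (-y + u)                        # forward-Euler plant update
    return y                                     # should settle near the last setpoint (2.0)

if __name__ == "__main__":
    print(simulate())
```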
18 pages, 7680 KiB  
Article
Self Calibration of a Sonar–Vision System for Underwater Vehicles: A New Method and a Dataset
by Nicolas Pecheux, Vincent Creuze, Frédéric Comby and Olivier Tempier
Sensors 2023, 23(3), 1700; https://0-doi-org.brum.beds.ac.uk/10.3390/s23031700 - 03 Feb 2023
Cited by 2 | Viewed by 1916
Abstract
Monocular cameras and multibeam imaging sonars are common sensors of Unmanned Underwater Vehicles (UUV). In this paper, we propose a new method for calibrating a hybrid sonar–vision system. This method is based on motion comparisons between both images and allows us to compute the transformation matrix between the camera and the sonar and to estimate the camera’s focal length. The main advantage of our method lies in performing the calibration without any specific calibration pattern, while most other existing methods use physical targets. In this paper, we also propose a new sonar–vision dataset and use it to prove the validity of our calibration method. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
17 pages, 6510 KiB  
Article
A Strategy for Controlling Motions Related to Sensory Information in a Walking Robot Big Foot
by Ivan Chavdarov, Kaloyan Yovchev, Lyubomira Miteva, Aleksander Stefanov and Dimitar Nedanovski
Sensors 2023, 23(3), 1506; https://0-doi-org.brum.beds.ac.uk/10.3390/s23031506 - 29 Jan 2023
Cited by 1 | Viewed by 1427
Abstract
Acquiring adequate sensory information and using it to provide motor control are important issues in the process of creating walking robots. The objective of this article is to present control algorithms for the optimization of the walking cycle of an innovative walking robot named “Big Foot”. The construction of the robot is based on minimalist design principles—only two motors are used, with which Big Foot can walk and even overcome obstacles. It is equipped with different types of sensors, with some of them providing information necessary for the realization of an optimized walk cycle. We examine two laws of motion—sinusoidal and polynomial—where we compare the results with constant angular velocity motion. Both proposed laws try to find balance between minimizing shock loads and maximizing walking speed for a given motor power. Experimental results are derived with the help of a 3D-printed working prototype of the robot, with the correct realization of the laws of motion being ensured by the use of a PD controller receiving data from motor encoders and tactile sensors. The experimental results validate the proposed laws of motion and the results can be applied to other walking robots with similar construction. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
17 pages, 2232 KiB  
Article
Semantic-Structure-Aware Multi-Level Information Fusion for Robust Global Orientation Optimization of Autonomous Mobile Robots
by Guofei Xiang, Songyi Dian, Ning Zhao and Guodong Wang
Sensors 2023, 23(3), 1125; https://0-doi-org.brum.beds.ac.uk/10.3390/s23031125 - 18 Jan 2023
Cited by 3 | Viewed by 1702
Abstract
Multi-camera-based simultaneous localization and mapping (SLAM) has been widely applied in various mobile robots under uncertain or unknown environments to accomplish tasks autonomously. However, the conventional purely data-driven feature extraction methods cannot utilize the rich semantic information in the environment, which leads to the performance of the SLAM system being susceptible to various interferences. In this work, we present a semantic-aware multi-level information fusion scheme for robust global orientation estimation. Specifically, a visual semantic perception system based on the synthesized surround view image is proposed for the multi-eye surround vision system widely used in mobile robots, which is used to obtain the visual semantic information required for SLAM tasks. The original multi-eye image was first transformed to the synthesized surround view image, and the passable space was extracted with the help of the semantic segmentation network model as a mask for feature extraction; moreover, the hybrid edge information was extracted to effectively eliminate the distorted edges by further using the distortion characteristics of the reverse perspective projection process. Then, the hybrid semantic information was used for robust global orientation estimation; thus, better localization performance was obtained. The experiments on an intelligent vehicle, which was used for automated valet parking both in indoor and outdoor scenes, showed that the proposed hybrid multi-level information fusion method achieved at least a 10-percent improvement in comparison with other edge segmentation methods, the average orientation estimation error being between 1 and 2 degrees, much smaller than other methods, and the trajectory drift value of the proposed method was much smaller than that of other methods. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
14 pages, 7298 KiB  
Article
Dynamic Balance Control of Double Gyros Unicycle Robot Based on Sliding Mode Controller
by Yang Zhang, Hongzhe Jin and Jie Zhao
Sensors 2023, 23(3), 1064; https://0-doi-org.brum.beds.ac.uk/10.3390/s23031064 - 17 Jan 2023
Cited by 4 | Viewed by 1626
Abstract
This paper presents a double-gyroscope unicycle robot, which is dynamically balanced by a sliding mode controller and a PD controller based on its dynamics. This double-gyroscope robot uses the precession effect of the double gyro system to achieve its lateral balance. The two gyroscopes spin at the same speed in opposite directions so as to ensure that the precession torque of the gyroscopes does not interfere with the longitudinal direction of the unicycle robot. The lateral controller of the unicycle robot is a sliding mode controller. It not only maintains the balance ability of the unicycle robot, but also improves its robustness. The longitudinal controller of the unicycle robot is a PD controller, and its input variables are the pitch angle and pitch angular velocity. In order to track the set speed, the speed of the unicycle robot is brought into the longitudinal controller to facilitate the speed control. The dynamic balance of the designed double gyro unicycle robot is verified by simulation and experiment results. At the same time, the anti-interference ability of the designed controller is verified by interference simulation and experiment. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
19 pages, 3421 KiB  
Article
Improving Tracking of Trajectories through Tracking Rate Regulation: Application to UAVs
by Fernando Diaz-del-Rio, Pablo Sanchez-Cuevas, Pablo Iñigo-Blasco and J. L. Sevillano-Ramos
Sensors 2022, 22(24), 9795; https://0-doi-org.brum.beds.ac.uk/10.3390/s22249795 - 13 Dec 2022
Cited by 3 | Viewed by 1251
Abstract
The tracking problem (that is, how to follow a previously memorized path) is one of the most important problems in mobile robots. Several methods can be formulated depending on the way the robot state is related to the path. “Trajectory tracking” is the most common method, with the controller aiming to move the robot toward a moving target point, like in a real-time servosystem. In the case of complex systems or systems under perturbations or unmodeled effects, such as UAVs (Unmanned Aerial Vehicles), other tracking methods can offer additional benefits. In this paper, methods that consider the dynamics of the path’s descriptor parameter (which can be called “error adaptive tracking”) are contrasted with trajectory tracking. A formal description of tracking methods is first presented, showing that two types of error adaptive tracking can be used with the same controller in any system. Then, it is shown that the selection of an appropriate tracking rate improves error convergence and robustness for a UAV system, which is illustrated by simulation experiments. It is concluded that error adaptive tracking methods outperform trajectory tracking ones, producing a faster and more robust convergence tracking, while preserving, if required, the same tracking rate when convergence is achieved. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
21 pages, 1801 KiB  
Article
OCTUNE: Optimal Control Tuning Using Real-Time Data with Algorithm and Experimental Results
by Mohamed Abdelkader, Mohamed Mabrok and Anis Koubaa
Sensors 2022, 22(23), 9240; https://0-doi-org.brum.beds.ac.uk/10.3390/s22239240 - 28 Nov 2022
Viewed by 1721
Abstract
Autonomous robots require control tuning to optimize their performance, such as optimal trajectory tracking. Controllers, such as the Proportional–Integral–Derivative (PID) controller, which are commonly used in robots, are usually tuned by a cumbersome manual process or offline data-driven methods. Both approaches must be repeated if the system configuration changes or becomes exposed to new environmental conditions. In this work, we propose a novel algorithm that can perform online optimal control tuning (OCTUNE) of a discrete linear time-invariant (LTI) controller in a classical feedback system without the knowledge of the plant dynamics. The OCTUNE algorithm uses the backpropagation optimization technique to optimize the controller parameters. Furthermore, convergence guarantees are derived using the Lyapunov stability theory to ensure stable iterative tuning using real-time data. We validate the algorithm in realistic simulations of a quadcopter model with PID controllers using the known Gazebo simulator and a real quadcopter platform. Simulations and actual experiment results show that OCTUNE can be effectively used to automatically tune the UAV PID controllers in real-time, with guaranteed convergence. Finally, we provide an open-source implementation of the OCTUNE algorithm, which can be adapted for different applications. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
22 pages, 8157 KiB  
Article
Neural Network-Based Autonomous Search Model with Undulatory Locomotion Inspired by Caenorhabditis Elegans
by Mohan Chen, Dazheng Feng, Hongtao Su, Meng Wang and Tingting Su
Sensors 2022, 22(22), 8825; https://0-doi-org.brum.beds.ac.uk/10.3390/s22228825 - 15 Nov 2022
Viewed by 1417
Abstract
Caenorhabditis elegans (C. elegans) exhibits sophisticated chemotaxis behavior with a unique locomotion pattern using a simple nervous system only and is, therefore, well suited to inspire simple, cost-effective robotic navigation schemes. Chemotaxis in C. elegans involves two complementary strategies: klinokinesis, which allows reorientation by sharp turns when moving away from targets; and klinotaxis, which gradually adjusts the direction of motion toward the preferred side throughout the movement. In this study, we developed an autonomous search model with undulatory locomotion that combines these two C. elegans chemotaxis strategies with its body undulatory locomotion. To search for peaks in environmental variables such as chemical concentrations and radiation in directions close to the steepest gradients, only one sensor is needed. To develop our model, we first evolved a central pattern generator and designed a minimal network unit with proprioceptive feedback to encode and propagate rhythmic signals; hence, we realized realistic undulatory locomotion. We then constructed adaptive sensory neuron models following real electrophysiological characteristics and incorporated a state-dependent gating mechanism, enabling the model to execute the two orientation strategies simultaneously according to information from a single sensor. Simulation results verified the effectiveness, superiority, and realness of the model. Our simply structured model exploits multiple biological mechanisms to search for the shortest-path concentration peak over a wide range of gradients and can serve as a theoretical prototype for worm-like navigation robots. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
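The undulatory locomotion referred to above can be pictured as a travelling wave of joint angles along the body. The sketch below generates such a wave purely kinematically, as a far simpler stand-in for the evolved central pattern generator, proprioceptive feedback, and sensory gating described in the paper; the segment count, frequency, phase lag, and amplitude are placeholder values.

```python
# Minimal sketch (not the paper's model): a kinematic travelling-wave undulation pattern
# for a segmented body; all parameters below are assumptions for illustration.
import numpy as np

def undulation(t, n_seg=12, freq=1.0, phase_lag=2.0 * np.pi / 12.0, amp=np.deg2rad(30.0)):
    """Joint angles (rad) of each segment at time t, forming a head-to-tail travelling wave."""
    phases = 2.0 * np.pi * freq * t - phase_lag * np.arange(n_seg)
    return amp * np.sin(phases)

if __name__ == "__main__":
    for t in (0.0, 0.25, 0.5):
        print(f"t={t:.2f}s:", np.round(np.rad2deg(undulation(t)), 1))
```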
33 pages, 17395 KiB  
Article
Intelligent Smart Marine Autonomous Surface Ship Decision System Based on Improved PPO Algorithm
by Wei Guan, Zhewen Cui and Xianku Zhang
Sensors 2022, 22(15), 5732; https://0-doi-org.brum.beds.ac.uk/10.3390/s22155732 - 31 Jul 2022
Cited by 12 | Viewed by 2663
Abstract
With the development of artificial intelligence technology, the behavior decision-making of an intelligent smart marine autonomous surface ship (SMASS) has become particularly important. This research proposed local path planning and a behavior decision-making approach based on improved Proximal Policy Optimization (PPO), which could drive an unmanned SMASS to the target without requiring any human experiences. In addition, a generalized advantage estimation was added to the loss function of the PPO algorithm, which allowed baselines in PPO algorithms to be self-adjusted. At first, the SMASS was modeled with the Nomoto model in a simulation waterway. Then, distances, obstacles, and prohibited areas were regularized as rewards or punishments, which were used to judge the performance and manipulation decisions of the vessel. Subsequently, improved PPO was introduced to learn the action–reward model, and the neural network model after training was used to manipulate the SMASS’s movement. To achieve higher reward values, the SMASS could find an appropriate path or navigation strategy by itself. After a sufficient number of rounds of training, a convincing path and manipulation strategies would likely be produced. Compared with existing methods, the proposed approach is more effective in self-learning and continuous optimization and is thus closer to human manipulation. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
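The generalized advantage estimation added to the PPO loss in this work is a standard estimator that exponentially blends n-step temporal-difference errors. The sketch below computes it for a toy episode; the reward and value numbers are placeholders, and the ship model, reward shaping, and PPO networks are not reproduced.

```python
# Minimal sketch (not the paper's training code): generalized advantage estimation (GAE)
# for a single episode without terminal-state handling; numbers are placeholders.
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Return (advantages, value targets); `values` carries one extra bootstrap entry at the end."""
    rewards, values = np.asarray(rewards, float), np.asarray(values, float)
    adv = np.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # one-step TD error
        running = delta + gamma * lam * running                  # exponentially weighted blend
        adv[t] = running
    return adv, adv + values[:-1]

if __name__ == "__main__":
    rewards = [0.0, 0.0, 1.0, -0.5]
    values = [0.1, 0.2, 0.4, 0.1, 0.0]     # V(s_0..s_3) plus the bootstrap V(s_4)
    adv, targets = gae(rewards, values)
    print(np.round(adv, 3), np.round(targets, 3))
```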
Review

Jump to: Research

37 pages, 8227 KiB  
Review
Multi-Agent Deep Reinforcement Learning for Multi-Robot Applications: A Survey
by James Orr and Ayan Dutta
Sensors 2023, 23(7), 3625; https://0-doi-org.brum.beds.ac.uk/10.3390/s23073625 - 30 Mar 2023
Cited by 25 | Viewed by 10569
Abstract
Deep reinforcement learning has produced many success stories in recent years. Some example fields in which these successes have taken place include mathematics, games, health care, and robotics. In this paper, we are especially interested in multi-agent deep reinforcement learning, where multiple agents present in the environment not only learn from their own experiences but also from each other, and in its applications in multi-robot systems. In many real-world scenarios, one robot might not be enough to complete the given task on its own, and, therefore, we might need to deploy multiple robots that work together towards a common global objective of finishing the task. Although multi-agent deep reinforcement learning and its applications in multi-robot systems are of tremendous significance from theoretical and applied standpoints, the latest survey in this domain dates to 2004, albeit for traditional learning applications, as deep reinforcement learning had not yet been invented. We classify the reviewed papers in our survey primarily based on their multi-robot applications. Our survey also discusses a few challenges that the current research in this domain faces and provides a potential list of future applications involving multi-robot systems that can benefit from advances in multi-agent deep reinforcement learning. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
15 pages, 301 KiB  
Review
Virtual Obstacles for Sensors Incapacitation in Robot Navigation: A Systematic Review of 2D Path Planning
by Thabang Ngwenya, Michael Ayomoh and Sarma Yadavalli
Sensors 2022, 22(18), 6943; https://0-doi-org.brum.beds.ac.uk/10.3390/s22186943 - 14 Sep 2022
Cited by 3 | Viewed by 2236
Abstract
The field of mobile robot (MR) navigation with obstacle avoidance has largely focused on real, physical obstacles as the sole external causative agent for navigation impediment. This paper has explored the possible option of virtual obstacles (VOs) dominance in robot navigation impediment in certain navigation environments as a MR move from one point in the workspace to a desired target point. The systematically explored literature presented reviews mostly between the years 2000 and 2021; however, some outlier reviews from earlier years were also covered. An exploratory review approach was deployed to itemise and discuss different navigation environments and how VOs can impact the efficacy of both algorithms and sensors on a robotic vehicle. The associated limitations and the specific problem types addressed in the different literature sources were highlighted including whether or not a VO was considered in the path planning simulation or experiment. The discussion and conclusive sections further recommended some solutions as a measure towards addressing sensor performance incapacitation in a robot vehicle navigation problem. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)