Article

A Method for Detecting Dynamic Objects Using 2D LiDAR Based on Scan Matching

1
Department of Control and Information Systems, Faculty of Electrical Engineering and Information Technology, University of Žilina, 010 26 Žilina, Slovakia
2
Department of Geotechnics, Faculty of Civil Engineering, University of Žilina, 010 26 Žilina, Slovakia
*
Author to whom correspondence should be addressed.
Submission received: 11 April 2022 / Revised: 16 May 2022 / Accepted: 30 May 2022 / Published: 1 June 2022
(This article belongs to the Special Issue Robots Dynamics: Application and Control)

Abstract

The autonomous movement of a mobile robotic system is a complex problem. If dynamic objects are present in the space while this task is performed, the complexity of the solution increases. To avoid collisions, it is necessary to implement a suitable detection algorithm and adjust the trajectory of the robotic system. This work deals with the design of a method for detecting dynamic objects; based on the outputs of this method, the trajectory of the robotic system is modified. The method builds on the SegMatch scan-matching algorithm, with a 2D LiDAR as the main environmental sensor. The method is successfully implemented in an autonomous mobile robotic system whose aim is to perform active simultaneous localization and mapping. The result is a collision-free transition through the mapped environment. Matlab is used as the main software tool.

1. Introduction

Robotics today comprises several closely related areas. The area of robotics that we focus on at our workplace is called Simultaneous Localization and Mapping (SLAM). In this problem, the robotic system is placed in an unknown environment, and only its initial position is known. The main goals of the SLAM solution are to obtain the most accurate model of the mapped environment and to track the exact position of the robotic system within it. This task does not seem very difficult when considering an ideal environment in which noise and measurement errors do not occur; unfortunately, such an environment exists only on a theoretical level. Various types of methods and algorithms are used to overcome these shortcomings. The sequence of individual steps is generally the same and independent of the type of algorithm used: initialization, data association, extraction of landmarks, estimation of the state of the mobile robotic system and the state of landmarks, updating the state of the mobile robotic system, and updating the state of landmarks. For each of these steps, different solution methods can be used, which are tied to the specific algorithm chosen. SLAM can be used in a variety of environments, whether on land, in water, or in the air. Currently, one of the most popular techniques in this area is called ASLAM. It is often referred to in the scientific literature as automatic SLAM [1], autonomous SLAM [2], or SPLAM (Simultaneous Planning, Localization and Mapping) [3]. All these names refer to the same goal: active motion selection. Using ASLAM, the autonomous movement of the mobile robotic system is solved in order to obtain a map. In the obtained map, various areas of interest can be defined, such as unexplored areas, areas of irregular shape, and areas with narrow alleys. The topic of autonomous vehicles is very wide.
If we wanted to focus on the field of road transport, we could proceed as the author in [4] did, where he chose to use LAS (Lane Assistant System) technology. If we wanted to transfer such an application to industry, the deployment of an Autonomous Ground Vehicle (AGV) in an industrial hall can be cited as an example. It is usually necessary to map the space before deployment (this depends on the technology used), and active SLAM can be used for this. Such mapping is often performed during working hours, and the customer cannot always afford to send the workforce away, so there may be dynamic objects in the mapped environment in the form of moving people. This can result in defects in the map and collisions of the robotic system with a dynamic object. We considered these complications crucial from our point of view, so we tried to find a solution. ASLAM combined with dynamic object detection to avoid collisions can significantly reduce the time required to map the environment.
When operating a mobile robotic system, the detection of dynamic objects in the environment is an important aspect that increases the safety of the entire system. However, detection alone is often not enough to provide reliable navigation in a complex environment. Ideally, the robotic system can predict the shape and trajectory of the movement in order to avoid collisions. The combination of dynamic object detection and motion prediction can be considered a particularly challenging task given the variability of dynamic object patterns. Taking two objects as an example, a person and a car, two different types of motion arise: the person moves nonlinearly and without significant restrictions when changing direction, while the car moves more smoothly and reaches a much higher speed, which can cause a problem when tracking it. Even from this consideration, it can be seen that this is a complex problem for which different solution approaches can be chosen. If an algorithm that does not consider the occurrence of dynamic objects is used to map a dynamic environment, the mapping result will be adversely affected. This has led to new methods and approaches to addressing SLAM. Detection and Tracking of Moving Objects (DATMO) deals with the detection and tracking of dynamic objects. This issue is relatively old; the author in [5] summarized its current state and possible solution methods. In [6], the author focused on a probabilistic technique for data clustering and estimation of the movement of individual clusters, which he called CRF-Clustering (Conditional Random Fields). In addition, he addressed the issue of overlapping points of dynamic objects in consecutive images. He performed the measurements by scanning the environment using a 2D lidar. The problem of creating a map in a dynamic environment is addressed by the author in [7].
In that article, the author designs a new method that inserts a probabilistic technique for identifying false measurements between the mapping and localization processes. The aim of his work was to capture dynamic objects, isolate them, and represent them in 3D space using lidar scanning; in this approach, he worked with both 2D and 3D lidar data. In the mentioned sources, the authors mainly focused on detecting a dynamic object, tracking it, or removing it from the map.
One interesting way of handling detected dynamic objects takes advantage of the autonomous movement of the robot: if the robot detects a dynamic object in its motion trajectory, it recalculates a new trajectory that takes the dynamic object on the map into account. This issue was an area of our interest. One of the first steps was to design an algorithm that would allow the detection of dynamic objects. During the design process, we decided to use a method based on scan matching. Scan matching is the process of obtaining the relative pose between two frames whose fields of view overlap. This method is often used when solving SLAM. A high-precision frame-to-frame scan matching algorithm is critical for improving the autonomous navigation capabilities of robots. Scan matching algorithms can be divided into two categories based on the type of sensor used: laser scan matching algorithms and visual scan matching algorithms. Compared to visual sensors, laser lidar has anti-interference capabilities, and its performance is not affected by light. Moreover, the research results of this paper were applied to indoor service robots, so 2D lidar sensors were adopted for scan matching. Due to the importance of the frame-to-frame scan matching algorithm, it has attracted much research attention, which has resulted in many milestone findings. The most classic frame-to-frame scan matching algorithm is the Iterative Closest Point (ICP) algorithm proposed in [8], and a large number of variants [9] have been proposed based on it. The authors in [10] proposed Point-to-Line ICP (PL-ICP), which overcomes the shortcomings of low-resolution lidar data and has quadratic convergence properties, achieving fewer iterations and higher accuracy; this algorithm performs well in a structured environment.
Similar to PL-ICP, [11] proposed a probabilistic version of ICP, GICP (Generalized ICP), which uses a Gaussian distribution to model the lidar sensor data and assigns weights according to the normal vector. This algorithm can be considered a variant of Point-to-Plane ICP [12]. The authors in [13] introduced a frame-to-frame scan matching algorithm for 2D lidar based on attention. This method takes into account only a part of the images in the comparison, which guarantees less error accumulation and higher accuracy. We were most interested in the SegMatch method described in [14]. The SegMatch algorithm was designed for online real-time loop closure detection and localization, and we decided to use part of this method in our design. The result of such methods should be an estimate of the position of individual images based on matching areas, with nonmatching areas becoming irrelevant. In the approach proposed in this article, we focus on the nonmatching areas and try to verify whether they are dynamic objects.
Section 2 discusses the used hardware and software. Section 3 deals with the design and implementation of an algorithm for detecting dynamic objects during SLAM execution.

2. Hardware and Software Solutions

For practical validation of the mentioned issue, it is necessary to have a mobile robotic system (MRS) to which the proposed solution can be applied. From our point of view, the MRS used is considered a black box that can be controlled using an API. Communication with the robot takes place using LoRa technology. LoRa (Long Range) is a spread spectrum modulation technique derived from chirp spread spectrum technology. It is a long-range, low-power wireless platform that has become the de facto wireless platform of the Internet of Things. A Lidar LD-OEM 1000 is mounted on the MRS and performs the measurements. LoRa does not allow sending data at the required volume and speed, so a Wi-Fi router is added, which communicates with the lidar and an external computing unit using the TCP/IP protocol. The concept is summarized in Figure 1. The main element of this robotic system is its omnidirectional wheels; if omnidirectional wheels and a relevant control algorithm are used, the robotic system is holonomic. The design and execution of this MRS are beyond the scope of this article and will therefore be published in our future publications.
There are currently many open-source solutions for SLAM, so we do not consider it necessary to create new ones. We decided to use the Matlab platform, which offers its own SLAM algorithm that we adapted to our needs. Matlab combines several libraries for the SLAM solution; the Robotics System Toolbox, Computer Vision Toolbox, Lidar Toolbox, and Navigation Toolbox already include specific solutions. It is possible to map the environment using a lidar sensor, an RGB camera, or an IMU. How the algorithm works is discussed in [15,16]. Testing the SLAM algorithm on the MRS is only the first step in creating the entire ASLAM algorithm. The second step is to choose a suitable target for the next move of the MRS; the topic of ASLAM is beyond the scope of this article. Once the destination to which the MRS is to move is selected, a path planning algorithm can be used. The next section focuses on path planning algorithms.

2.1. Path Planning

Path planning is a term known in robotics as part of the navigation of an autonomous robot. The result of planning is the process by which the robot goes from the initial position to the target position. The main condition is that the path does not intersect with objects in the area. There are two approaches to path planning: local and global. Global path planning is based on a known environment and is suitable for finding the optimal route. In local planning, the robot responds appropriately to unknown or dynamic obstacles; in this case, the robot makes decisions while moving. By combining both methods, better results can be achieved. The planning can be divided into two parts, and each part can be solved separately. The first deals with the pre-processing of the workspace, where this space is discretized and then converted into a configuration space. The configuration space consists of all robot configurations in the workspace where it performs the given tasks. The robot configuration is determined by a vector that contains the position and orientation of the robot. After discretizing the workspace, a configuration space is created, represented by a graph, grid, or function. The second part deals with the use of a specific algorithm to find the path between the start and end positions of the robot in the created configuration space. If we wanted to categorize the individual approaches, we would again come across the different points of view of the authors in their divisions; however, the opinions do not differ radically. In the article [17], the author divided the issue into three basic approaches, namely roadmaps, cell decomposition, and potential fields. He considered A*, probabilistic road maps (from Probabilistic Roadmap), and genetic algorithms (from Genetic Algorithm) to be the most frequently used algorithms. In [18], the author added a group of approaches, called heuristic approaches, compared to the previous author.
He included artificial intelligence methods in this subset, such as neural network planning, fuzzy logic, genetic algorithms, and particle swarm optimization. As a result, he concluded that heuristic methods are more suitable for dynamic environments. The author treated this area more comprehensively in [19], where his view of it is captured in Figure 2. Here, the author mainly focused on global path planning methods. His approach divided the solution into three steps: environment modeling, optimization criteria, and the path search method. The step he called the optimization criteria can be considered as supplying inputs to the third step, the optimal path search. He used methods such as the framework space approach, the free space approach, cellular decomposition, topological methods, and probabilistic road maps. He divided the search for a path into two areas: heuristics and artificial intelligence.
As we have noted, [18,19] conflict in their division of the methods designated as heuristic. Looking further into the categorization of this issue, we would find additional discrepancies. Based on these discrepancies, the author in [20] chose a different approach and divided all the methods used into four areas, namely Reactive Computing, Soft Computing, C-Space Search, and Optimal Control. This division can be seen in Figure 3. In addition, each subcategory may overlap with another subcategory.
We decided to use the A* algorithm to plan the route. The A* algorithm searches a discrete state space; its output should be the optimal trajectory from the initial state to the target state.
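As an illustration of the search itself, the following is a minimal sketch of A* on a binary occupancy grid. The 4-connected neighborhood and the Manhattan heuristic are our assumptions here; the paper's Matlab implementation is not shown.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* search on a 4-connected occupancy grid.

    grid[r][c] == 1 marks an occupied cell; start and goal are (row, col).
    Manhattan distance serves as the admissible heuristic. Returns the
    cell path from start to goal, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                       # tie-breaker for the heap
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from = {}
    g_best = {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                      # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                           # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, next(tie), nxt, cur))
    return None
```

Because the heuristic is admissible, the first time the goal is expanded the returned path is optimal on the grid.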

2.2. Representation of Space

The described space can be represented as continuous or discrete. In the real world, the environment is continuous, which complicates finding the optimal route, as a continuous space is described by a set of infinitely many possible states. For simplicity, the continuous environment is divided into a finite set of states; this process is called discretization of the environment. Finding a path then means searching for a sequence of states leading to the destination state, where a state represents a point in space. The basic types of space representation can be divided, according to [21], into metric, topological, and hybrid; see Figure 4. In each variant, localization in space is a basic problem, but of a different nature. The represented environment is often called the environment map. Based on what part of the environment the map represents, it is divided into a local or global map: local maps focus on the robot's close surroundings, while global maps cover the entire area. Metric maps represent objects in the environment based on their geometric properties. Two variants are most often used. The first is based on the calculation of Cartesian coordinates of the environment; Kalman filtering is often used here, fusing odometric data with data from environmental sensors. The second variant takes the form of an occupancy grid map. These maps are made of cells, each representing an estimate of the occupancy of the corresponding part of the environment [21]. The way cells are divided and formed is dealt with by cellular decomposition of the environment [22]. The advantage of the metric map is the ability to create a detailed representation of the environment; however, this brings higher demands on storage and computing capacity. The biggest disadvantage of this approach is that it does not provide a system for representing symbolic entities such as tables and doors.
In a topological map, the environment is represented on the basis of its significant features. This type of map takes the form of a graph, where nodes represent significant features and edges represent relationships between them. Uncertainty in determining the position of the mobile robotic system does not affect the creation of the map. One of the problems with this approach is identifying the significant features and nodes in the environment that are used to create the map. The advantage of this map is its fault tolerance in determining the position of the system in Cartesian coordinates, as this is not a priority when creating the map. A hybrid map is a combination of the metric and topological forms, consisting of nodes and edges similar to a topological map; however, both edges and nodes can be described in the metric form of a map. This way of representing space uses the advantages of both forms: the metric description provides greater map detail, and the topological description increases resilience to localization errors. A problem arises when switching from one form of representation to the other [21].
The standard that deals with the data format for the needs of robotic system navigation, i.e., the map format, is called IEEE 1873-2015 [23]. This standard defines the concept of a global map, which consists of local maps. A local map can consist of a metric map, a topological map, or a combination of both. In the context of the standard, grid maps and geometric maps are included among metric maps. A geometric map contains a list of contiguous geometric elements (such as points and lines). Topological maps consist of the mentioned nodes and edges, where edges represent a direct (single-hop) connection between adjacent nodes. In metric maps, the distance between two elements must be calculated, while in topological maps, this applies only to some elements. The output of a map defined by the IEEE 1873-2015 standard is in XML format. In our case, we decided to use a grid map.
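As a minimal illustration of the grid map form chosen here, the following sketch rasterizes global 2D lidar points into a binary occupancy grid. The resolution, grid size, and origin are illustrative values, not parameters from the paper.

```python
import numpy as np

def build_occupancy_grid(points, resolution=0.05, size=(200, 200), origin=(-5.0, -5.0)):
    """Rasterize global 2D lidar points (N x 2 array, metres) into a
    binary occupancy grid.

    resolution -- cell size in metres (illustrative value)
    size       -- grid shape as (rows, cols)
    origin     -- world coordinate of cell (0, 0)
    """
    grid = np.zeros(size, dtype=np.uint8)
    ix = ((points[:, 0] - origin[0]) / resolution).astype(int)  # column index
    iy = ((points[:, 1] - origin[1]) / resolution).astype(int)  # row index
    ok = (ix >= 0) & (ix < size[1]) & (iy >= 0) & (iy < size[0])
    grid[iy[ok], ix[ok]] = 1   # mark cells containing at least one return
    return grid
```

A production grid would typically accumulate log-odds per cell rather than a hard 0/1 value, but the binary form is sufficient for the A* planner used here.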

3. Design and Implementation

In this section, we focus in more detail on the procedure by which we try to identify dynamic objects in the mapped space. The whole procedure is summarized in Algorithm 1. The prerequisite for the functionality of the algorithm is available data in the form of 2D measurements and information about the position at which each measurement is performed. These input data are provided by the SLAM algorithm. The position estimate is written as a vector [x y θ], where x, y are the global coordinates of the measuring point and θ is the angle of rotation of the scanner. The first measurement is performed at coordinates [0 0 0]. The whole procedure is based on a comparison of three measurements, where j is the measurement index.
Algorithm 1 Dynamic Object Detection
Input:
Cell 2D environmental measurements, estimation of the position of each measurement
Output:
Clouds of points of dynamic objects, matrix of centers of dynamic objects
  • Transformation of measurements into a global coordinate system.
  • Creation of three consecutive point cloud measurements.
  • Point cloud segmentation of the j + 1 and j + 2 measurements.
  • Description of segment properties.
  • Search for matching segments based on their properties.
  • Creation of sets of matching and nonmatching segments.
  • Creation of an area of interest based on the boundaries of each mismatched segment.
  • Search for measurement points in each area of interest.
The used SLAM algorithm cannot directly generate global measurement coordinates; therefore, it is necessary to calculate them from the estimated measurement position. To recalculate the coordinates into the global system, we use Equation (1), where x′ and y′ are the coordinates of the measurement position. A third coordinate is also needed to create a point cloud; this coordinate has a value of 0 for each measuring point.
$$\begin{bmatrix} x_g \\ y_g \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} x' \\ y' \end{bmatrix}. \qquad (1)$$
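A sketch of this transformation, assuming the standard counter-clockwise rotation convention (the helper name `to_global` is ours):

```python
import numpy as np

def to_global(scan_xy, pose):
    """Rotate local scan points by the estimated heading and translate
    them by the estimated position, as in Equation (1).

    scan_xy -- N x 2 array of points in the scanner frame
    pose    -- (x', y', theta): position and heading of the measurement
    """
    x, y, theta = pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # row-vector form: p_global = R @ p_local + t
    return scan_xy @ R.T + np.array([x, y])
```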
After this step, we arrive at the data processing loop in which scan matching is performed. At the beginning of the loop, three independent point clouds are created from three consecutive measurements: ptCloudC(j), ptCloudA(j + 1), ptCloudB(j + 2). For ptCloudA and ptCloudB, segmentation and clustering of points based on Euclidean distance are applied. To compare individual clusters, it is necessary to define the properties on which they are compared. Here, a description based on eigenvalue-based features is used. In this method, a vector consisting of seven values, v = [linearity, planarity, scattering, omnivariance, anisotropy, eigenentropy, change of curvature], is created. These values are calculated using Equations (2)–(8).
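The Euclidean clustering step mentioned above can be sketched as a flood-fill over the distance graph of a single scan. The 0.3 m threshold is illustrative; in Matlab, a function such as `pcsegdist` performs a comparable operation.

```python
import numpy as np

def euclidean_cluster(points, min_dist=0.3):
    """Greedy Euclidean clustering of a 2D point cloud (N x 2 array):
    two points belong to the same cluster if they are connected through
    neighbours closer than min_dist (metres; illustrative threshold).
    Returns a label array of length N. O(N^2), fine for single scans.
    """
    n = len(points)
    labels = np.full(n, -1, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = current
        stack = [i]
        while stack:                       # flood-fill from the seed point
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            near = np.where((d < min_dist) & (labels == -1))[0]
            labels[near] = current
            stack.extend(near.tolist())
        current += 1
    return labels
```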
$$L_\lambda = \frac{e_1 - e_2}{e_1}, \qquad (2)$$

$$P_\lambda = \frac{e_2 - e_3}{e_1}, \qquad (3)$$

$$S_\lambda = \frac{e_3}{e_1}, \qquad (4)$$

$$O_\lambda = \sqrt[3]{e_1 e_2 e_3}, \qquad (5)$$

$$A_\lambda = \frac{e_1 - e_3}{e_1}, \qquad (6)$$

$$E_\lambda = -\sum_{i=1}^{3} e_i \ln e_i, \qquad (7)$$

$$C_\lambda = \frac{e_3}{e_1 + e_2 + e_3}. \qquad (8)$$
The numbers e1, e2, e3 are normalized eigenvalues defined by ei = λi/∑λ, where i ∈ {1, 2, 3}. The variables λ1, λ2, λ3 are nonnegative eigenvalues that correspond to an orthogonal system of eigenvectors. ∑λ is given by Equation (9).

$$\textstyle\sum_\lambda = \sum_{i=1}^{3} \lambda_i. \qquad (9)$$
Assuming a point cloud formed by a total number of N 3D points and a given neighborhood size k ∈ ℕ, we may consider each individual 3D point X = (X, Y, Z)ᵀ ∈ ℝ³ and its respective k nearest neighbors, which define its scale. To describe the local 3D structure around X, the respective 3D covariance matrix, also known as the 3D structure tensor S ∈ ℝ³ˣ³, is derived; it is a symmetric positive semidefinite matrix. Thus, its three eigenvalues λ1, λ2, λ3 ∈ ℝ exist, are nonnegative, and correspond to an orthogonal system of eigenvectors. The author dealt with this issue in more detail in [24].
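Under these definitions, Equations (2)–(8) can be computed directly from the eigenvalues of the structure tensor. A sketch in Python follows; the small epsilon guard and the clipping of tiny negative eigenvalues are numerical details we add, not part of the formulas.

```python
import numpy as np

def eigen_features(neighborhood):
    """Compute the seven eigenvalue-based features, Equations (2)-(8),
    from a k x 3 array of neighbouring 3D points. Eigenvalues of the
    structure tensor are sorted lambda1 >= lambda2 >= lambda3 and
    normalized so that e1 + e2 + e3 = 1.
    """
    centered = neighborhood - neighborhood.mean(axis=0)
    S = centered.T @ centered / len(neighborhood)       # 3x3 structure tensor
    # clip tiny negative eigenvalues caused by floating-point error
    lam = np.clip(np.sort(np.linalg.eigvalsh(S))[::-1], 0.0, None)
    e1, e2, e3 = lam / lam.sum()
    eps = 1e-12                                         # guard against log(0)
    return {
        "linearity":    (e1 - e2) / e1,
        "planarity":    (e2 - e3) / e1,
        "scattering":   e3 / e1,
        "omnivariance": np.cbrt(e1 * e2 * e3),
        "anisotropy":   (e1 - e3) / e1,
        "eigenentropy": -sum(e * np.log(e + eps) for e in (e1, e2, e3)),
        "curvature":    e3 / (e1 + e2 + e3),
    }
```

For a perfectly linear neighborhood, e2 and e3 vanish, so linearity approaches 1 while scattering approaches 0, matching the intended meaning of the features.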
This whole process can be called feature extraction. After assigning properties to each segment, one can move on to the segment comparison technique. The segment recognition algorithm SegMatch is used for the comparison. The comparison is performed between two point clouds (source and target), which are divided into segments and have extracted segment features. The SegMatch algorithm was designed for online real-time loop closure detection and localization. The principle of the whole algorithm is shown in Figure 5; of the whole algorithm, we use only the part marked in red in the figure. A more detailed description of the algorithm is given in [14,25].
The properties from ptCloudA and ptCloudB are compared with each other, which results in a set of matching and a set of mismatched elements. Matching elements are assumed to be static elements of the map and are not processed further. The set of mismatched elements from ptCloudB is processed as follows. Each cluster of points in this set is defined as a new point cloud, and its boundary x and y coordinates are extracted. Based on these coordinates, we obtain the area in which a potential dynamic object is located. In the last identification step, the occurrence of points in this area in ptCloudC is examined. If no match with ptCloudC is found, the cluster of points is considered a dynamic object. Each identified dynamic object is stored in a cell in the form of a point cloud, and its geometric center is then calculated using Equation (10). The whole procedure is summarized in Figure 6.
$$x_{center} = \frac{x_{min} + x_{max}}{2}, \qquad y_{center} = \frac{y_{min} + y_{max}}{2}. \qquad (10)$$
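The mismatch test and the center calculation of Equation (10) can be sketched as follows. The bounding-box margin is an illustrative tolerance we add; clusters and clouds are assumed to be N x 2 arrays of global coordinates.

```python
import numpy as np

def dynamic_objects(mismatched_clusters, ptCloudC, margin=0.05):
    """For each mismatched cluster from ptCloudB (list of N x 2 arrays),
    take its bounding box and test whether ptCloudC (M x 2 array)
    contains any point inside that area. Clusters with no support in
    ptCloudC are reported as dynamic, together with their geometric
    centres per Equation (10). margin (metres) slightly inflates the
    box; it is an illustrative tolerance, not a value from the paper.
    """
    objects, centers = [], []
    for cluster in mismatched_clusters:
        xmin, ymin = cluster.min(axis=0) - margin
        xmax, ymax = cluster.max(axis=0) + margin
        inside = ((ptCloudC[:, 0] >= xmin) & (ptCloudC[:, 0] <= xmax) &
                  (ptCloudC[:, 1] >= ymin) & (ptCloudC[:, 1] <= ymax))
        if not inside.any():               # no support in scan j -> dynamic
            objects.append(cluster)
            centers.append(((xmin + xmax) / 2, (ymin + ymax) / 2))
    return objects, np.array(centers)
```

Note that the symmetric margin cancels out in the center calculation, so the reported centres still satisfy Equation (10).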

3.1. Detection of Dynamic Objects during Stationary Measurement

As a first step, we decided to test the proposed algorithm on a stationary measurement in which a dynamic object was present in the mapped environment. The main goal was to determine how successfully the proposed algorithm would work. The measurement took place in the corridors of our department; photos of the environment can be seen in Figure 7. The environment also included objects that could cause noise in the measurement, such as glass doors and flowers, which we considered desirable, as the measurement did not take place in an ideal environment.
In the first step, data were collected from the lidar scanner. These data were fed into the SLAM algorithm, in which the deviations were calculated and a 2D map was generated. This step was not strictly necessary, as it was a stationary measurement and the position of the scanner did not change, but the output served as a visual check for the detection of dynamic objects. This map can be seen in Figure 8. The dynamic object that we wanted to identify using the proposed method is highlighted in yellow. The whole map consisted of 16 measurements, and the object of our interest appeared in 12 of them. The measurement was performed at point [0 0].
After generating the map, we processed the data into a form suitable for creating a point cloud. After editing the data, we applied the proposed algorithm to identify dynamic objects. As a result, the dynamic objects were displayed on a graph, which can be seen in Figure 9. The number of identified dynamic objects was 11, one fewer than in the original map. The problem was that in one case the object was too close to the wall, which caused the edge of the wall and the object to be united when the segments were created. This problem can be seen in Figure 10, where each segment is distinguished by a different color; the incorrectly segmented object is indicated by an arrow.
Once the dynamic object data were obtained, they were edited and added to the data used to create a navigation map. After creating this map, it was possible to apply navigation algorithms. The dynamic object was added to the navigation map so that the trajectory of its motion could be taken into account when finding a suitable path. The application of the navigation algorithm can be seen in Figure 11: the blue line represents the trajectory of the dynamic object, and the red line symbolizes the route calculated using the A* algorithm. Time was recorded during the measurement, which allowed us to determine the average speed of the object; in this case, it was 3.48 km/h.

3.2. Tests Performed during SLAM

After testing on static measurements, we decided to use the proposed algorithm while performing SLAM. This was the last step before real-time testing. For the first test, 16 measurements were used with a sampling period averaging 1.2 s. During this measurement, the mapping device traveled approximately 3.3 m. A total of 15 objects were found during the identification of dynamic objects, 3 of which were falsely identified (walls), and 3 objects were not recognized, as they were assigned to a wall during segmentation and clustering. Dynamic objects are drawn in green on the map in Figure 12, and blue indicates the trajectory of the measuring system. The total distance traveled by the dynamic object was approximately 11.7 m, and its average speed was 3.19 km/h. In Figure 13, the identified objects are plotted. Another measurement examined the case in which the object moves at a low speed while the sampling frequency is very high, so that overlapping measurements become a problem. The result can be seen in Figure 14 and Figure 15. A total of 25 measurements were used in this test. In this case, 29 objects were identified, of which 11 were falsely identified. There was a problem with light reflectance, as the side wall of the corridor where the measurements took place was made of glass. In six images, the object could not be recognized due to the overlap of the object area.

3.3. Real-Time Measurement

After the functionality of the proposed method was verified, we decided to perform mapping and identification of dynamic objects simultaneously. In this case, the whole process was run in a loop, alternating between the SLAM part and dynamic object identification; the procedure can be seen in Figure 16. In the first step, it was necessary to process the measured data from the 2D lidar and localize the system on the map. The accuracy of this step determined the further course of the calculations. In the second step, the individual measurements were transformed into a point cloud. The point cloud was segmented and point clusters were formed; these clusters were then compared, and the presence of a dynamic object was evaluated.
As already mentioned, the detection of dynamic objects is based on a comparison of three measurements; therefore, at least three measurements are needed before the first comparison can be made. After performing three initialization measurements, the identification of objects alternates regularly with mapping.
During the first measurement, we realized that the main limiting element would be the computing power of the evaluation unit. A laptop with 8 GB of RAM and a 2.2 GHz dual-core processor was used for our measurements. A total of 13 measurements were made in this test. The execution of one cycle, which included SLAM and identification, took an average of 4.53 s. The identified dynamic object traveled a 19 m track at a speed of 1.5 km/h, while our mapping device traveled a 4.5 m track at a speed of 0.35 km/h. Again, the problems of glass reflectance and incorrect segmentation occurred during this measurement. The results can be seen in Figure 17.

3.4. Applying Dynamic Object Detection While Moving to the Target Position

One of our further goals was to apply Algorithm 1 during the execution of active SLAM in order to avoid dynamic objects while moving to the target position. As the total detection time was about 4 s, optimization was necessary. During the optimization, some auxiliary processes used only to visually check the functionality of the entire process were removed. In addition, a more powerful computing unit was provided, which sped up the process. As a result, the response time averaged 0.3 s, with values in the interval ⟨0.20; 0.65⟩ s. Given these results, the time response of the whole system when detecting a dynamic object could be considered adequate. It was then necessary to place Algorithm 1 appropriately within the ongoing active SLAM. Detection of dynamic objects was implemented in the part where the robot moves and reads its odometric data. The result of this implementation can be seen in Figure 18, where a combination of ASLAM and object detection was used. Detected objects are marked in green on the map. The measurement was performed at a time when the corridor was quite busy.
After the successful implementation of Algorithm 1 during ASLAM execution, it was necessary to determine the extent to which the detection of dynamic objects would interfere with the activity performed during ASLAM. In this step, the conditions under which the robot would stop and calculate a new path had to be determined. As mentioned above, for each identified object, we obtained its geometric center in global coordinates using Equation (11):
$$ x_{center} = \frac{x_{min} + x_{max}}{2}, \quad (11) $$
where x_min and x_max are the boundary coordinates of the object; the y coordinate is calculated analogously. Based on these coordinates, the distance of the object from the planned trajectory is calculated. If the object is located within 0.5 m of the planned trajectory, the robot stops and calculates a new trajectory. When determining the new trajectory, the identified object may be in a place the robot has already passed, so the path still to be traveled must be updated based on the latest measurement. The whole implementation process is shown in Figure 19. It should be added that this procedure was performed only while the robot was moving. One situation to consider was the possibility of the object stopping at the robot's target position. In this case, a clear area around the original destination was found and a new navigation target point was selected.
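The decision rule above can be sketched as follows: the object's center follows Equation (11), and the replanning trigger compares its shortest distance to the planned trajectory, treated here as a polyline of waypoints, against the 0.5 m threshold. This Python sketch is illustrative only; the polyline representation of the trajectory is our assumption, not a detail specified in the paper.

```python
import math

def object_center(xmin, xmax, ymin, ymax):
    # Equation (11): geometric center of the object's bounding box.
    return ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)

def dist_point_to_segment(p, a, b):
    """Shortest distance from point p to the line segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter to stay within the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def must_replan(center, trajectory, safety=0.5):
    """True when the object center lies within `safety` metres of any
    segment of the planned trajectory (0.5 m threshold from the text)."""
    return min(dist_point_to_segment(center, a, b)
               for a, b in zip(trajectory, trajectory[1:])) <= safety

# Usage: an object 0.3 m off the path triggers replanning; one 1.5 m away does not.
traj = [(0.0, 0.0), (3.0, 0.0), (3.0, 3.0)]
```

When `must_replan` fires, the remaining waypoints would be recomputed on a map updated with the detected object, as described above.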

4. Conclusions

We designed an algorithm that allows the detection of dynamic objects based on 2D measurements of the environment. The algorithm was developed to detect people present in the mapped environment during SLAM execution. It is not bound to the SLAM method used; it only requires the measured data and the position from which the measurement was taken. The proposed approach is based on the SegMatch algorithm, which we used to find matching and non-matching segments. We subsequently searched for the mismatched segments in the previous measurement. Based on this procedure, we were able to identify dynamic objects in the mapped environment. One of the main problems caused by dynamic objects is the appearance of defects in the resulting map. This was solved by a map-building function implemented directly in Matlab. Another problem is a possible collision of the MRS with a dynamic object. We proposed to solve this by calculating the distance of the dynamic object from the planned trajectory; when the object approaches within a predetermined critical distance, a new trajectory is calculated, taking the dynamic objects in the map into account. The advantages of this algorithm are its time response and its applicability in relatively busy areas. The average response of the algorithm was 0.3 s. Human walking speed is around 4.8 km/h, and the speed of the MRS used was 2 km/h; based on these data, we can state that the algorithm theoretically works in real time and can react to a dynamic object in advance. If the algorithm fails to respond, the MRS contains built-in safety functions that work independently of our calculations. The time response can be further improved by using a more powerful computing unit. The main disadvantage of the algorithm is its inability to detect an object located near a wall or near a stationary object that is large compared to a human.
From the point of view of ASLAM, this was not a major concern, as when performing ASLAM we usually tried to move through the middle of the room or at a safe distance from obstacles. This, of course, depends on the path-planning algorithm used and its settings. The second problem mentioned was the random reflection of light rays from glass surfaces. However, this problem occurs with most LiDARs and can be mitigated by tuning the echo reflectance. Because of these disadvantages, we did not report a percentage success rate for the algorithm, as it could be misleading. We consider the proposed method for the detection of dynamic objects to be the main scientific contribution of this work. The method can be used when conducting autonomous surveys or for AGVs deployed in warehouses. A further direction of this research will likely be the deployment of several MRSs. These MRSs will be represented as a multi-agent system using ITS (Intelligent Transport System) architectures, as described in [26]. The goal will be to obtain the most comprehensive view of the environment possible.

Author Contributions

Conceptualization, M.M.; methodology, M.M.; software, M.M.; validation, P.V.; formal analysis, D.N.; investigation, P.H.; resources, B.M.; data curation, M.H.; writing—original draft preparation, M.H.; writing—review and editing, J.M.; visualization, M.H.; supervision, P.H.; project administration, D.N.; funding acquisition, P.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by VEGA 1/0241/22 Mobile robotic systems for support during crisis situations.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All necessary data are given in this article.

Acknowledgments

The contribution is sponsored by VEGA 1/0241/22 Mobile robotic systems for support during crisis situations.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schleicher, D.; Bergasa, L.M.; Ocana, M.; Lopez, R.M.E. Real-Time Hierarchical Outdoor SLAM Based on Stereovision and GPS Fusion. IEEE Trans. Intell. Transp. Syst. 2009, 10, 440–452. [Google Scholar] [CrossRef]
  2. Cheein, F.A.A.; Toibero, J.M.; di Sciascio, F.; Carelli, R.; Pereira, F.L. Monte Carlo uncertainty maps-based for mobile robot autonomous SLAM navigation. In Proceedings of the 2010 IEEE International Conference on Industrial Technology, Via del Mar, Chile, 14–17 March 2010; pp. 1433–1438. [Google Scholar] [CrossRef]
  3. Stachniss, C. Robotic Mapping and Exploration; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  4. Matowicki, M.; Přibyl, O.; Přibyl, P. Analysis of possibility to utilize road marking for the needs of autonomous vehicles. In Proceedings of the 2016 Smart Cities Symposium Prague (SCSP), Prague, Czech Republic, 26–27 May 2016; pp. 1–6. [Google Scholar] [CrossRef]
  5. Llamazares, Á.; Molinos, E.; Ocaña, M. Detection and Tracking of Moving Obstacles (DATMO): A Review. Robotica 2020, 385, 761–774. [Google Scholar] [CrossRef]
  6. Tipaldi, G.D.; Ramos, F. Motion clustering and estimation with conditional random fields. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 872–877. [Google Scholar] [CrossRef] [Green Version]
  7. Hahnel, D.; Triebel, R.; Burgard, W.; Thrun, S. Map Building with Mobile Robots in Dynamic Environments. In Proceedings of the IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003. [Google Scholar] [CrossRef]
  8. Besl, P.J.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  9. Rusinkiewicz, S.; Levoy, M. Efficient Variants of the ICP Algorithm. In Proceedings of the International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 1 June 2001; pp. 593–598. [Google Scholar]
  10. Censi, A. An ICP Variant Using a Point-to-Line Metric. In Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 19–25. [Google Scholar]
  11. Segal, A.V.; Haehnel, D.; Thrun, S. Generalized-ICP. In Proceedings of the Robotics: Science and Systems, Seattle, WA, USA, 28 June–1 July 2009; p. 435. [Google Scholar]
  12. Zhang, Z. Iterative Point Matching for Registration of Free-Form Curves and Surfaces. Int. J. Comput. Vis. 1994, 13, 119–152. [Google Scholar] [CrossRef]
  13. Huang, S.; Huang, H.-Z. A Frame-to-Frame Scan Matching Algorithm for 2D Lidar Based on Attention. Appl. Sci. 2022, 12, 4341. [Google Scholar] [CrossRef]
  14. Dubé, R.; Dugas, D.; Stumm, E.; Nieto, J.; Siegwart, R.; Cadena, C. SegMatch: Segment based place recognition in 3D point clouds. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Marina Bay Sands, Singapore, 29 May–3 June 2017; pp. 5266–5272. [Google Scholar] [CrossRef] [Green Version]
  15. Mihálik, M.; Hruboš, M.; Janota, A. Testing of SLAM methods in the Matlab environment. In Proceedings of the 2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herl’any, Slovakia, 21–23 January 2021; pp. 55–58. [Google Scholar] [CrossRef]
  16. Mihálik, M.; Malobický, B.; Peniak, P.; Vestenický, P. The New Method of Active SLAM for Mapping Using LiDAR. Electronics 2022, 11, 1082. [Google Scholar] [CrossRef]
  17. Hernández, B.; Giraldo, E. A Review of Path Planning and Control for Autonomous Robots. In Proceedings of the 2018 IEEE 2nd Colombian Conference on Robotics and Automation (CCRA), Barranquilla, CO, USA, 1–3 November 2018; pp. 1–6. [Google Scholar] [CrossRef]
  18. Campbell, S.; O’Mahony, N.; Carvalho, A.; Krpalkova, L.; Riordan, D.; Walsh, J. Path Planning Techniques for Mobile Robots A Review. In Proceedings of the 2020 6th International Conference on Mechatronics and Robotics Engineering (ICMRE), Barcelona, Spain, 12–15 February 2020; pp. 12–16. [Google Scholar] [CrossRef]
  19. Zhang, H.-Y.; Lin, W.-M.; Chen, A.-X. Path Planning for the Mobile Robot: A Review. Symmetry 2018, 10, 450. [Google Scholar] [CrossRef] [Green Version]
  20. Sánchez-Ibáñez, J.R.; Pérez-del-Pulgar, C.J.; García-Cerezo, A. Path Planning for Autonomous Mobile Robots: A Review. Sensors 2021, 21, 7898. [Google Scholar] [CrossRef] [PubMed]
  21. Jurišica, L.; Duchoň, F. Pravdepodobnostné metódy lokalizácie mobilného robota v prostredí. AT&P J. 2010, 6, 1–5. Available online: http://www.atpjournal.sk/ (accessed on 10 April 2022).
  22. Duchoň, F.; Jurišica, L. Bunková dekompozícia prostredia v mobilnej robotike. AT&P J. 2010, 12, 1–6. [Google Scholar]
  23. 1873–2015; IEEE Standard for Robot Map Data Representation for Navigation. IEEE: Piscataway, NJ, USA, 2015; pp. 1–54. [CrossRef]
  24. Weinmann, M.; Jutzi, B.; Mallet, C. Semantic 3D scene interpretation: A framework combining optimal neighborhood size selection with relevant features. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, II-3, 181–188. [Google Scholar] [CrossRef] [Green Version]
  25. Liu, X.; Zhang, L.; Qin, S.; Tian, D.; Ouyang, S.; Chen, C. Optimized LOAM Using Ground Plane Constraints and SegMatch-Based Loop Detection. Sensors 2019, 19, 5419. [Google Scholar] [CrossRef] [PubMed]
  26. Pribyl, O.; Pribyl, P.; Lom, M.; Svitek, M. Modeling of Smart Cities Based on ITS Architecture. IEEE Intell. Transp. Syst. Mag. 2018, 11, 28–36. [Google Scholar] [CrossRef]
Figure 1. A robotic system with omnidirectional chassis [16].
Figure 2. Methods of path planning.
Figure 3. Division of path-planning methods.
Figure 4. Representation of space.
Figure 5. SegMatch algorithm flowchart.
Figure 6. Flowchart for Algorithm 1.
Figure 7. Corridors in the department.
Figure 8. Map from SLAM algorithm for static measurement.
Figure 9. Dynamic objects found in a static environment.
Figure 10. Dynamic object united with a wall in one segment.
Figure 11. Navigation map with applied navigation algorithm.
Figure 12. Map with identified dynamic objects.
Figure 13. Dynamic objects in global coordinates.
Figure 14. Identification at low object speed.
Figure 15. Identified objects at low speed values.
Figure 16. Process of dynamic object detection.
Figure 17. Simultaneous mapping and identification of dynamic objects.
Figure 18. ASLAM execution and detection of dynamic objects in space.
Figure 19. Flowchart for detecting dynamic objects during ASLAM execution.
Mihálik, M.; Hruboš, M.; Vestenický, P.; Holečko, P.; Nemec, D.; Malobický, B.; Mihálik, J. A Method for Detecting Dynamic Objects Using 2D LiDAR Based on Scan Matching. Appl. Sci. 2022, 12, 5641. https://0-doi-org.brum.beds.ac.uk/10.3390/app12115641