Article

Usability of Perception Sensors to Determine the Obstacles of Unmanned Ground Vehicles Operating in Off-Road Environments

Military Institute of Armoured and Automotive Technology, 05-070 Sulejówek, Poland
*
Author to whom correspondence should be addressed.
Submission received: 27 February 2023 / Revised: 6 April 2023 / Accepted: 11 April 2023 / Published: 13 April 2023
(This article belongs to the Special Issue Advances in Navigation and Control of Autonomous Vehicles)

Abstract

This article presents the essential abilities and limitations of various sensors used for object recognition in the operation environment of unmanned ground vehicles (UGVs). The use of autonomous and unmanned vehicles for reconnaissance and logistics purposes has attracted attention in many countries. There are many different applications of mobile platforms in both civilian and military fields. Herein, we introduce a newly developed manned–unmanned high-mobility vehicle called TAERO that was designed for public roads and off-road operation. Detection for unmanned mode is required in both on-road and off-road environments, but the approach to identifying the drivable pathway and obstacles around a mobile platform is different in each environment. Dense vegetation and trees can affect the perception system of the vehicle, causing safety risks or even collisions. The main aim was to define the limitations of the perception system in off-road environments, as well as associated challenges and possible future directions for practical applications, to improve the performance of the UGV in all-terrain conditions. Recorded datasets were used to verify vision and laser-based sensors in practical application. The future directions of work to overcome or minimize the indicated challenges are also discussed.

1. Introduction

Unmanned vehicles are equipped with advanced technologies that integrate environmental perception, navigation, path planning, decision making, and control, combining many scientific fields, such as computer science, data fusion, machine vision, and deep learning [1,2,3,4,5]. The main advantage of UGVs is operation without the presence of a driver. Such mobile platforms are able to replace humans for a variety of applications, such as agricultural irrigation, logistics, or express delivery in civilian applications. In the field of military application, remote or autonomous operation modes allow for the completion of tasks such as monitoring and reconnaissance, transportation and logistics, detection of improvised explosive devices, provision of fire support, and rescue and evacuation on the battlefield [6,7,8,9].
Autonomous platforms represent the future direction in the development of vehicles, but the greatest challenge is to create control systems that ensure accurate detection of objects in the operational environment [10,11,12].
Perception technology is related to simultaneous localization and mapping, as well as object recognition and classification. Commercial applications require proper detection of typical objects in traffic, such as pedestrians, cars, and bikes, to reduce accidents. The military field is more complicated: special-purpose vehicles carry out missions, typically in unknown environments, where objects can be hidden in vegetation and obstacles can have irregular shapes or structures [13]. The effective operation of UGVs in such conditions enables accurate attacks in modern warfare but also requires the collection of environmental information from high-precision, high-reliability sensors.
Perception sensors can be divided into two categories: passive and active. Passive sensors only receive the energy emitted by the environment and generate output signals on this basis; examples include monocular cameras, stereo cameras, omnidirectional cameras, event cameras, and infrared cameras. Active sensors emit energy and measure the environmental feedback represented by the returned portion of this energy; examples include LiDAR, radar, and ultrasonic sensors [14].
The main goal of this paper is to present the abilities and limitations of sensors used for object recognition in off-road environments. In this article, we introduce a newly developed unmanned wheeled platform equipped with perception sensors that can carry out special missions. Typical operational conditions, focusing on terrain and vegetation, are described. We highlight the types of objects that should be recognized by the vehicle in off-road conditions, as well as the sensors’ performance in such detection tasks.

2. Description of the Used UGV Platform

TAERO is basically designed for all-terrain operation in manned mode (with a driver). Depending on the combat situation, as well as military and non-military threats, the vehicle can be reconfigured into unmanned mode in a very short time (Figure 1).
The vehicle is equipped with a central processing unit containing the necessary IT infrastructure, a precise GPS receiver coupled with an inertial measurement unit (IMU), situational awareness sensors, and mechatronic drives adapted to manage the factory-fitted mechanisms of the platform (Figure 2). The system has a security module that authorizes access, allowing for management of the vehicle and its resources, as well as a set of high-data-rate radio systems that enable the transmission of control and vision signals with minimal delays. The architecture enables the integration of additional modules, e.g., an observation head, weapon systems, and a threat detection system.
The vehicle is equipped with a hybrid drive: diesel (main) and electric (auxiliary) for silent operation. Durable and reliable four-wheel drive proven in the Polish Armed Forces, as well as rigid axles, differential locks, and mud-terrain tires with run-flat inserts, guarantee high off-road capability in manned and unmanned modes (Figure 3).
Unmanned mode allows the following tasks to be carried out [15]:
  • Remote-controlled driving of a vehicle by an operator;
  • Following a route saved during the manned or remote control mode;
  • Autonomous waypoint navigation;
  • Driving in “follow me” mode behind a soldier as a “mule on wheels” or behind another vehicle;
  • Shuttle driving between designated points;
  • Operation in front of a convoy;
  • Driving in a tandem with an armored vehicle as a mobile control station;
  • Reconnaissance and observation missions using additional equipment such as an observation head;
  • Use of electric drive for silent operation.

3. Perception Sensors for the UGV Detection System

The architecture of the UGV is based on two main subsystems: sensor perception and control planning [16]. The first module is responsible for acquiring and processing data from all installed sensors (cameras, LiDAR, and other scanners) to create a representation of the surrounding environment.
Based on the perception data, the control system generates signals such as steering angle, speed, and degree of braking, which are subsequently sent to the actuators. In this work, we examined the performance of sensors in an operational environment using the unmanned TAERO platform.
A critical component of the perception system in autonomous vehicles is a camera that plays a key role in safe and effective navigation. Cameras provide a visual representation of the environment to detect and track objects, follow roads, and avoid obstacles.
There are several types of vision systems commonly used in unmanned vehicles [17]:
  • Stereo cameras that use two cameras to capture images of the same scene from slightly different perspectives, generating a 3D map of the environment, including the location and shape of objects and obstacles;
  • RGB cameras that capture color images of the environment. These images can be used for a variety of purposes, including object recognition, lane detection, and semantic segmentation;
  • Thermal cameras that detect infrared radiation emitted by objects, even in complete darkness.
The developed UGV was equipped (for evaluation purposes) with two types of vision systems: daylight–thermal and stereo camera.

3.1. Vision System

The daylight–thermal observation system is a vehicle camera composed of a thermal camera and a color daylight camera with a 60° horizontal field of view. The device is designed for tactical operation during day or night under clear or obscured atmospheric conditions such as fog or smoke. The thermal module incorporates an uncooled bolometric detector with a resolution of 640 × 480 pixels (Figure 4). The system is controlled via an RS-485 link and has two independent video outputs (Table 1) in PAL standard [18].
Camera windows are equipped with a deicing function. Sensors can be mounted on vehicles or other objects to provide driver vision enhancement, situational awareness, or observation during day and night-time.
For testing purposes, the vehicle was also equipped with a ZED 2 stereo camera. This vision system is designed for use in unmanned vehicles and augmented reality applications and provides high-resolution 3D imaging and depth-sensing capabilities. The camera, produced by Stereolabs, captures stereo video and depth data using two ultra-wide-angle sensors and a proprietary algorithm, allowing for object and surface detection in real time, even under low-light conditions. This information can be used to create 3D models of the environment, track the movements of objects and people, and support various other applications in the field of autonomous vehicles.
In addition to its stereo imaging capabilities, the ZED 2 camera also offers advanced computer vision algorithms, such as depth perception, object detection, and semantic segmentation.
The main parameters of the ZED 2 camera are listed below [19]:
  • Resolution up to 2K (2048 × 1024 pixels);
  • Field of view of over 200 degrees;
  • Depth resolution of 720p (1280 × 720 pixels) and depth accuracy of up to 2 cm;
  • Frame rate of up to 60 frames per second (fps);
  • Latency of less than 10 milliseconds between capturing an image and processing the data;
  • Image format: the stereo video output is in YUV format, while the depth data are output as 16-bit grayscale.
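The depth sensing described above relies on stereo triangulation: depth is roughly the focal length times the stereo baseline divided by the disparity between the two views. The ZED 2 computes this with its own proprietary pipeline; the sketch below only illustrates the general principle using OpenCV block matching, and the image paths, focal length, and baseline are placeholder values, not ZED 2 calibration data.

```python
# Illustration of the stereo-depth principle only; the ZED 2 uses its own
# proprietary pipeline, and the calibration values below are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point (x16)

focal_px = 700.0    # placeholder focal length in pixels
baseline_m = 0.12   # placeholder stereo baseline in meters

valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]   # depth = f * B / d
print("median scene depth: %.2f m" % np.median(depth_m[valid]))
```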
There are several limitations of camera usage for unmanned ground vehicles related to lighting conditions (required for clear and accurate images), as well as weather conditions such as rain, snow, or fog, which can reduce accuracy and effectiveness. In relation to the operational environment of the TAERO vehicle, cameras may encounter challenges in detecting certain obstacles, especially if they are small or have low contrast with the background. Although in some applications only vision-based sensors can be used in autonomous mode, it is necessary to use additional technologies in order to increase the accuracy and reliability of the perception system for unmanned operation.
Integrating cameras with other sensors, such as radar or LiDAR, can provide a more complete and accurate understanding of the environment. These additional sensors can detect objects and obstacles that are difficult or impossible to see with cameras alone, improving the overall accuracy and reliability of the perception system.

3.2. Laser-Based Sensors

The most popular remote sensing technology that detects objects and their distances is LiDAR (light detection and ranging).
The LiDAR sensor typically rotates to scan its field of view, emitting laser pulses in all directions. The optical signal from the laser can be modulated in various ways to carry information about the environment. By transmitting a modulated optical signal, the LiDAR system can gather information about the distance, reflectivity, and other properties of objects and surfaces in the environment [20].
The system then uses the time-of-flight measurement to determine the distance to each object in the field of view [21]. The LiDAR system can build a 3D map of the environment in real time, combining the distance information from multiple laser pulses as shown in Figure 5.
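The time-of-flight relation is simply distance = c·t/2, where t is the round-trip time of the laser pulse. A minimal worked example (the pulse time is purely illustrative) is given below.

```python
# Time-of-flight ranging: distance = c * t / 2, where t is the round-trip time.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s):
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A pulse returning after about 333 ns corresponds to a target roughly 50 m away.
print(f"{tof_distance(333e-9):.1f} m")
```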
LiDAR systems are not affected by lighting conditions or weather and can operate effectively in a variety of conditions, allowing for object detection at a long range and providing early warnings to a path-planning system accordingly. However, there are also some limitations to the use of LiDAR in unmanned vehicles, including cost and size.
The TAERO platform is equipped with two different types of LiDAR: Ouster OS0-32 and Velodyne VLP-16 (Figure 6). Both sensors are very popular and widely used in autonomous vehicles.
Laser scanners offer compact size, high-resolution imaging, and a long range, which make them ideal for providing detailed and accurate data about the environment, even in challenging conditions, such as low light or adverse weather. Sensor data are made available via Ethernet in real time in the form of UDP packets.
Both the Velodyne VLP-16 [22] and Ouster OS0-32 [23] are high-performance 3D LiDAR sensors that provide real-time, high-resolution 3D imaging of the environment, which is crucial for autonomous applications (Table 2). The choice between them for off-road use depends on the specific requirements of the application.

4. Operational Environment

Special-purpose vehicles should be developed to carry out operations in a variety of environments, including rough terrain, forests, deserts, and mountainous areas [24,25]. Mobile platforms should be adapted to operate in several important applications, such as search and rescue, rural transport, and surveillance. An autonomous system that recognizes drivable areas in off-road conditions is more challenging than a system suitable for urban areas, where roads can be easily distinguished. Special-purpose vehicles need to be able to navigate narrow roads, cross rivers, and traverse steep terrain and especially high vegetation. In this environment, vehicles may face obstacles such as rocks, trees, and other vegetation that can block the UGV’s path. Additionally, weather and light conditions may also have an impact on the vehicle’s performance and ability to navigate.
In Central Europe, the drivability of rough terrain is determined by vegetation. The operational environment can include trees, bushes, grass, and other types of plants [26]. The presence of vegetation can make it more difficult for autonomous vehicles to perceive the environment and make decisions about the best path (Figure 7).
Dense vegetation can block the view of the vehicle’s sensors, making it difficult to determine the shape of the terrain or the stability of the soil. Trees can also pose challenges for unmanned vehicles, such as low-hanging branches or dense undergrowth that can hinder the vehicle’s passage or cause damage to installed sensors. Bushes and plants cast shadows that can make it difficult for the vehicle’s sensors to accurately perceive the environment. Forests in Europe often include natural obstacles such as fallen trees, boulders, and streams, which off-road vehicles must navigate around to maintain their path. The height of trees and the thickness of vegetation can also impact the vehicle’s perception in terms of identifying potential hazards or obstacles in the surroundings.
The movement of a vehicle across a forest is influenced by various vegetation factors, including the trunk diameter and tree spacing. The trunk diameter affects the clearance of the vehicle and the width of the paths that it can take. The tree spacing determines the accessibility of the vehicle to different parts of the forest and its maneuverability.
There are also seasonal factors, such as in summer, when warmer temperatures and longer days stimulate photosynthesis in trees, providing the energy they need to produce more leaves and grow taller. Additionally, in spring, constantly changing terrain conditions can lead to changes in terms of adhesion characteristics and rolling resistance, which can slow down vehicles and reduce their traction force and kinetic energy. All these factors can greatly impact the ability of a vehicle to traverse a forested area and must be taken into consideration when designing a perception system for special-purpose UGVs operating in off-road environments.
The forested environment in Europe is still an important issue because it provides a realistic simulation of the types of conditions that military vehicles may encounter in real-world operations. According to the European Forest Institute (EFI), forests cover approximately 40% of Europe’s area [27]. The exact percentage can vary depending on the country, with some countries having a higher proportion of forested land than others. For example, Finland has the highest proportion of forested land in Europe, with approximately 75% of its land area covered by forests. Other countries with high proportions of forested land are Sweden, Slovenia, and Portugal. This highlights the importance of considering the specific environment when evaluating the performance of UGVs.

5. Visualization of Experimental Data and Testing Conditions

All described sensors were connected to a computational unit running Ubuntu 20.04 LTS (a UNIX-like operating system) with the ROS 1 environment. During the configuration of the perception system, ROS node software packages were used.
In the ROS environment, each sensor is typically controlled by a dedicated program called an ROS node. The node is responsible for handling low-level communication with the sensor hardware and reading data based on its protocol. The node then publishes these data as an ROS topic using standardized message formats. Other nodes in the ROS system can subscribe to these topics to receive the sensor data and use it for their own purposes. A very useful tool built into the ROS environment is ROSBAG Record, which enables recording of data published by ROS nodes on specified topics, which can then be saved in a compressed format. ROSBAG files store large amounts of data with timestamps that can be synchronized for later analysis based on the system clock. This makes it easy to compare data from different sensors, which is important in environmental perception applications in which multiple sources are used to create situational awareness.
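As a minimal sketch of this pattern, a ROS 1 node in Python can subscribe to the LiDAR and camera topics and log basic information about each message; the topic names below are assumptions for illustration, not the exact TAERO configuration.

```python
#!/usr/bin/env python
# Minimal ROS 1 subscriber sketch; topic names are illustrative assumptions,
# not the exact TAERO configuration.
import rospy
from sensor_msgs.msg import PointCloud2, Image

def lidar_callback(msg):
    rospy.loginfo("point cloud: %d x %d points", msg.height, msg.width)

def camera_callback(msg):
    rospy.loginfo("image: %dx%d, encoding %s", msg.width, msg.height, msg.encoding)

if __name__ == "__main__":
    rospy.init_node("perception_logger")
    rospy.Subscriber("/velodyne_points", PointCloud2, lidar_callback)
    rospy.Subscriber("/camera/image_raw", Image, camera_callback)
    rospy.spin()
```

Data published on such topics can then be captured with, for example, `rosbag record -O run01.bag /velodyne_points /camera/image_raw`.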
We recorded data using ROSBAG for different driving scenarios that a UGV may encounter in an off-road environment. One field test drive consisted of 12 special stages capturing files with a total size of approx. 90 GB. It should be underlined that ROSBAG data can be analyzed offline to evaluate the performance of the perception system and its sensors. Foxglove Studio software (Figure 8) was used to visualize the collected data and to perform further analysis of the results [28].
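Besides interactive visualization in Foxglove Studio, the recorded bags can be processed programmatically with the ROS 1 rosbag Python API; the sketch below counts the messages and the average rate on one topic, with the bag file name and topic used as placeholders rather than the actual field-test recordings.

```python
# Offline analysis sketch for a recorded bag; the file name and topic are
# placeholders, not the actual field-test recordings.
import rosbag

with rosbag.Bag("field_test_stage_01.bag") as bag:
    count, t_first, t_last = 0, None, None
    for topic, msg, t in bag.read_messages(topics=["/velodyne_points"]):
        if t_first is None:
            t_first = t
        t_last = t
        count += 1

if count > 1:
    duration = (t_last - t_first).to_sec()
    print(f"{count} messages over {duration:.1f} s ({count / duration:.1f} Hz average)")
```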
During the performance tests, additional objects were placed in the off-road environment to be recognized against the background. For this purpose, barrels and a suitcase were used to evaluate the ability of the system to distinguish objects from surrounding vegetation and identify them under various lighting and environmental conditions.
Additionally, we verified recognition of people in rough terrain to simulate soldiers or other combatants in a battlefield scenario. Vegetation can make it difficult to detect and recognize a person, as the human body can be partially or fully occluded by leaves, branches, or other objects in the environment.

6. Obstacle Detection and Analysis Using Different Types of LiDAR

There are many types of LiDAR available on the market with different parameters, such as measuring range, field of view (azimuth and elevation), the number of measuring channels (the number of emitters and receivers of laser light), and the arrangement of sensor elements. We evaluated two types of 3D LiDAR to record the experimental data: Ouster OS0-32 and Velodyne VLP-16. The Velodyne LiDAR was mounted at the front of the vehicle to detect obstacles ahead, while the Ouster OS0-32 LiDAR was installed on the roof of the vehicle to record data for spatial mapping purposes.
The influence of the scanner location on the quality of individual object imaging is presented in Figure 9, with a human standing near the road marked on the 3D scan. The Velodyne LiDAR, despite a smaller field of view in the elevation range (22.5 degrees) and fewer measurement channels, performed better regarding spatial imaging of the human body compared to the Ouster LiDAR, which is characterized by a 90-degree elevation field of view with 32 measurement channels. Other objects, such as the trees behind the man, are also more clearly depicted. The point cloud from the Ouster OS0-32 scanner representing surrounding objects looks more chaotic, which makes it difficult to find the distribution of points characteristic of a vertical object with a cross section similar to a circle. The data received from the Velodyne LiDAR allow for more precise indication of such objects and can serve as excellent input for a training dataset for a neural network to support automatic detection of objects in the 3D point cloud.
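One simple way to search for such vertical, roughly cylindrical objects is to cluster the points in the ground plane and check the footprint radius and vertical extent of each cluster. The sketch below is a naive illustration of this idea under the assumption that the ground plane has already been removed; the thresholds are arbitrary and this is not the processing pipeline used on TAERO.

```python
# Naive sketch of finding vertical, trunk-like clusters in a LiDAR point cloud;
# thresholds are illustrative and this is not the pipeline used on TAERO.
import numpy as np
from sklearn.cluster import DBSCAN

def trunk_like_clusters(points, eps=0.4, max_radius=0.5, min_height=1.0):
    """points: (N, 3) array of x, y, z values with the ground plane already removed."""
    labels = DBSCAN(eps=eps, min_samples=10).fit_predict(points[:, :2])
    candidates = []
    for label in set(labels) - {-1}:                 # label -1 marks noise points
        cluster = points[labels == label]
        center = cluster[:, :2].mean(axis=0)
        radius = np.linalg.norm(cluster[:, :2] - center, axis=1).max()
        height = cluster[:, 2].max() - cluster[:, 2].min()
        if radius <= max_radius and height >= min_height:
            candidates.append({"center": center, "radius": radius, "height": height})
    return candidates
```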
During all tests in the off-road environment, additional objects were placed in the field of view of the validated sensors. The main aim was to evaluate the impact of vegetation on obstacle detection. A hidden barrel about 1 m high, located on the left side under the branches of a coniferous tree, can be seen in Figure 10. The laser scanner located at the front of the vehicle (Velodyne VLP-16) provided better imaging (similar to the previously described analysis of a human).
In the case of building a spatial map of the environment, data from the Ouster LiDAR (Figure 11) are more useful due to the scanner location and the larger field of view. The data from a scanner with a limited field of view may not provide as much detail, but they can still be used to complete the overall dataset due to the density of measurement points. By determining the translation matrix between the scanners, it is possible to obtain a very accurate spatial map covering the surroundings. For this purpose, it is necessary to use auxiliary information from an inertial system and to correlate both scans in the time domain. One of the LiDAR systems used in this research (Ouster) is equipped with a built-in IMU to provide the required data to the computational unit.
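Once the rigid transformation between the two scanner frames (rotation and translation) is known, the clouds can be expressed in a common frame and concatenated. The sketch below assumes the transform has already been calibrated and the scans are time-synchronized; the extrinsic values and the random point arrays are placeholders only.

```python
# Sketch of merging two point clouds once the rigid transform between the
# scanner frames is known; the extrinsic values below are placeholders.
import numpy as np

def transform_points(points, rotation, translation):
    """Apply a rigid transform (3x3 rotation, 3-vector translation) to (N, 3) points."""
    return points @ rotation.T + translation

R = np.eye(3)                       # placeholder rotation between scanner frames
t = np.array([2.1, 0.0, -1.2])      # placeholder translation in meters

roof_points = np.random.rand(1000, 3) * 20.0    # stand-ins for real scans
front_points = np.random.rand(1000, 3) * 20.0
merged = np.vstack([roof_points, transform_points(front_points, R, t)])
```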
The determination of the translation matrix and the construction of maps from 3D scans are tasks for further work and investigation regarding perception system improvement for unmanned vehicles operating in off-road environments. Additionally, machine learning algorithms can be trained on the combined dataset to improve the accuracy of object detection.

7. Vision-Based Object Recognition in a Vegetation Environment

Installed cameras allow the vehicle’s operating environment to be recorded from different perspectives. Additionally, a thermal camera can distinguish certain objects from the background due to infrared radiation.
In order for vehicle cameras to perceive the surroundings, it is important to define their location at the design stage of the vehicle. The main camera was installed close to the windscreen, imitating the driver’s perception. Unfortunately, the installed Velodyne LiDAR obscures the camera view, extending the blind zone and making it impossible to observe the road directly in front of the vehicle (Figure 12). Daylight cameras capture an image with more details of the surroundings, such as concrete fence posts (Figure 12d), vegetation, and a snow-covered road. The image from the thermal camera basically consists of three areas: the sky (Figure 12a), the environment of the vehicle with a similar temperature (Figure 12b), and the warm parts of the vehicle (Figure 12c).
Obstacle detection using a deterministic algorithm or an artificial neural network requires objects to be distinguished against the background from the data recorded by the sensor. Lighting and weather conditions can significantly affect the quality of images obtained from daylight cameras. In the winter season, which is characterized by white snow cover, image quality can be improved in certain scenarios, such as detecting the contrast between the snow and the surroundings. Obstacles that are located against a similarly colored background (or surrounded by dense vegetation) pose difficulties for the recognition process based on a daylight camera stream (Figure 13a,b).
During other seasons, such as summer, the background of the environment masks objects more effectively, so lighting conditions can directly affect the algorithms being used. A thermal camera is more resistant to changing illumination and environmental conditions.
During imaging of the operational environment, the vision sensor matches the shades of gray to the measured maximum and minimum temperatures. As a result, regardless of the ambient temperature, warmer or cooler objects are more clearly visible. In our test scenario, a lower ambient temperature and warmer objects are indicated (Figure 14a–c). A disadvantage is that objects with a temperature similar to the ambient temperature are more difficult to distinguish. Additionally, thermal cameras are sensitive to environmental factors such as wind, rain, and fog, which can reduce their effectiveness. Therefore, it is important to carefully consider the use of thermal cameras and take into account the specific environmental conditions in which they will be used.
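The gray-level mapping described above can be illustrated with a simple min-max normalization of the raw temperature readings. This is a sketch of the principle only; the actual camera firmware performs this scaling internally, and the frame below is a random placeholder.

```python
# Sketch of mapping measured temperatures to 8-bit gray levels, as a thermal
# camera does internally (min-max normalization of the current frame).
import numpy as np

def temperatures_to_gray(temps_c):
    t_min, t_max = temps_c.min(), temps_c.max()
    if t_max == t_min:                       # uniform scene: return mid-gray
        return np.full(temps_c.shape, 128, dtype=np.uint8)
    scaled = (temps_c - t_min) / (t_max - t_min)
    return (scaled * 255).astype(np.uint8)

frame = np.random.uniform(-5.0, 30.0, size=(480, 640))  # placeholder readings in degrees C
gray = temperatures_to_gray(frame)
```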
Distinguishing objects against the background in the operational environment is an important step in developing autonomous systems. Once the objects are detected and identified, deterministic algorithms or artificial neural networks can be implemented to perform specific tasks such as object tracking, obstacle avoidance, or path planning.
The application of artificial neural networks requires a large amount of data to be recorded in advance to train and validate the network architecture. It is also important to ensure that the data are labelled correctly so that the network learns accurate predictions or classifications. This requires images containing the typical objects in various environmental and lighting conditions.
We evaluated object detection using the YOLO v5 neural network based on data recorded from cameras [29]. This neural network has been trained on large datasets of images that can be found in an urban environment, such as cars, bicycles, traffic signs, and pedestrians. Considering the off-road environment recordings, there were also people who should be recognized with a fairly high probability, as shown in Figure 15 [30]. The usefulness of the data recorded by the daylight camera and thermal camera is shown in Figure 16.
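For reference, YOLO v5 inference on a recorded frame can be run through the PyTorch Hub interface documented in [29]; in the sketch below, the image path is a placeholder for a frame exported from the ROSBAG recordings, and the pretrained weights correspond to the standard COCO classes rather than off-road objects.

```python
# YOLO v5 inference via PyTorch Hub (see [29]); the image path is a placeholder,
# and the pretrained COCO weights do not cover typical off-road objects.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("frame_0001.jpg")       # frame exported from the recorded data
results.print()                         # class, confidence, and box per detection
detections = results.pandas().xyxy[0]   # detections as a pandas DataFrame
print(detections[["name", "confidence"]])
```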
It should be underlined that the YOLO network was able to recognize people in images from both the daylight and thermal cameras. However, since the network was trained on standard objects found in urban environments, it may not have been optimized to detect objects in off-road environments, such as those encountered in military applications (Figure 16). Unfortunately, the network tended to assign classes from its original training dataset to objects that do not fit the operational environment, generalizing beyond what it had learned.
Despite the results achieved using the YOLO network, the use of artificial intelligence for object detection and classification is currently the most promising method [30]. However, it requires preparation of the network and its architecture by teaching objects that can be found in the operational environment of special-purpose vehicles (off-road and urban environments).
The challenge of detecting objects in different environments can be addressed by developing more advanced algorithms that can be adapted to changes in the operational environment. For example, the system can use sensors and machine learning to automatically detect the type of environment and switch to the appropriate set of trained models for classification. Additionally, the system can incorporate information from multiple sensors and sources to improve accuracy and robustness.
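One possible, deliberately simplified realization of such switching is to keep separate sets of trained weights and select them based on the output of an environment classifier. The sketch below is purely illustrative: the weight files are hypothetical and this mechanism is not part of the TAERO software.

```python
# Purely illustrative sketch of switching detection models by environment type;
# the weight files are hypothetical and this is not part of the TAERO software.
import torch

MODEL_REGISTRY = {
    "urban": "weights/yolov5_urban.pt",       # hypothetical custom weights
    "off_road": "weights/yolov5_offroad.pt",
}
_loaded = {}

def select_model(environment_label):
    """Lazily load and cache the detector matching the classified environment."""
    if environment_label not in _loaded:
        _loaded[environment_label] = torch.hub.load(
            "ultralytics/yolov5", "custom", path=MODEL_REGISTRY[environment_label])
    return _loaded[environment_label]

# model = select_model("off_road")  # called once the environment classifier has run
```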

8. Discussion

Sensors such as cameras and LiDAR systems play a critical role in the perception system of an unmanned vehicle and have a significant impact on its overall performance and capabilities. Both the vision sensors and laser scanners installed on the TAERO vehicle are described in terms of parameters and functionality, considering the unmanned mode in an off-road environment.
Cameras are used to gather visual information about the vehicle’s environment. They can be used to detect and track objects, to identify and classify objects, and to support navigation and obstacle avoidance. Laser sensors measure the time between the emission of a laser pulse and the reception of its reflection to determine distance and position. LiDAR systems provide high-resolution data but are limited by their range and susceptibility to weather conditions, while radar sensors have a longer range and better weather resistance but lower resolution. Choosing the right type of sensors for a particular application can have a significant impact on the performance of the perception system in an operational environment.
According to performance tests carried out on a paved road, the characteristics of the average braking distance to the obstacle (located in front of the vehicle) were determined. Considering the TAERO vehicle introduced herein, the formula to calculate the braking distance for a vehicle is expressed by:
s_c = s_1 + v^2 / (2a),
where:
  • s_c—the braking distance in meters;
  • s_1—the brake reaction distance (the distance traveled during the system response time) in meters;
  • v—the initial velocity of the vehicle in meters per second;
  • a—the braking deceleration in meters per second squared, determined by the coefficient of friction between the tires and the road surface.
The braking distance is closely related to the dynamic characteristics of the vehicle and the response time of the system. Taking into consideration the dependence described above, the braking distance relative to the vehicle speed was measured, as shown in Figure 17.
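A minimal numerical sketch of this relation is given below; the reaction time and deceleration values are illustrative assumptions, not measured TAERO parameters, so the output does not reproduce the curve in Figure 17.

```python
# Minimal sketch of the braking-distance relation s_c = s_1 + v^2 / (2a).
# The reaction time and deceleration are illustrative assumptions, not
# measured TAERO parameters.

def braking_distance(v_mps, reaction_time_s=0.5, deceleration_mps2=5.0):
    """Total stopping distance in meters for an initial speed v (m/s)."""
    s1 = v_mps * reaction_time_s                      # distance covered before braking starts
    return s1 + v_mps ** 2 / (2.0 * deceleration_mps2)

if __name__ == "__main__":
    for v in (1.0, 2.8, 5.0, 10.0):                   # speeds in m/s
        print(f"v = {v:4.1f} m/s -> s_c = {braking_distance(v):5.2f} m")
```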
Object classification in the autonomous mode is based on real-time information received from the perception system. Therefore, it is essential to determine the usefulness of the installed laser scanners in terms of obstacle recognition and classification in the UGV’s operational environment. In our analysis, we selected an example of a typical obstacle in the off-road scenario—a tree trunk with a height of 2 m and a diameter of 0.6 m corresponding to the TAERO collision zones, as shown in Figure 18.
The number of measurement points per defined surface was investigated in terms of object classification using available algorithms. Several speed values with corresponding point densities are shown in Figure 19 to compare the two types of laser scanners used in this research (Ouster and Velodyne).
The presented study demonstrates the usefulness of the configured perception system up to a speed of approx. 2.8 m/s, taking into consideration the minimum number of points (100) required to classify an object such as a tree trunk [31,32]. At higher speeds, the laser scanners used provide only a few points per defined surface area, which makes it impossible to define spatial relationships in an off-road environment due to the presence of dense vegetation.
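As a rough geometric illustration of why the number of returns drops with range (and therefore with the speed-dependent detection distance), the sketch below estimates how many laser points fall on a 0.6 m × 2.0 m target seen face-on at a given range. The angular resolutions are assumed values loosely based on Table 2, and the result is only indicative; it does not reproduce the measured curves in Figure 19.

```python
# Rough geometric estimate of LiDAR returns on a flat 0.6 m x 2.0 m target;
# angular resolutions are assumed values, not a reproduction of Figure 19.
import math

def points_on_target(distance_m, width=0.6, height=2.0,
                     h_res_deg=0.2, v_res_deg=2.0):
    """Approximate count of beams intersecting the target at the given range."""
    h_angle = math.degrees(2 * math.atan(width / (2 * distance_m)))
    v_angle = math.degrees(2 * math.atan(height / (2 * distance_m)))
    return int(h_angle / h_res_deg) * int(v_angle / v_res_deg)

for d in (5, 10, 20, 40):
    print(f"range {d:3d} m -> ~{points_on_target(d)} points")
```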

9. Conclusions

This paper presents the results of the examination of typical perception sensors installed on a mobile platform to be implemented in autonomous mode or to support remote-control operation in an off-road environment. The manned–unmanned TAERO vehicle was introduced as a platform capable of operating in various terrain. This type of vehicle has the flexibility to be operated either by a driver or controlled in autonomous mode, depending on the mission requirements (evacuation of injured soldiers or civilians from a risky area, transport of goods and equipment to remote locations, or even reconnaissance).
Sensors used in off-road environments face unique challenges compared to urban environments. Operational areas can often be shrouded in dust, fog, or other conditions that limit visibility. We performed tests in a typical operating environment with trees and vegetation. Our analysis shows that recognizing obstacles in LiDAR point clouds can be challenging in dense vegetation, making it difficult to identify and classify objects using algorithms. Advanced data processing helps to extract meaningful information about objects in the environment, but accuracy is related to the resolution of the laser sensor. Higher-resolution scanners are capable of generating finer and more detailed point clouds, which can provide additional information about objects in the operational environment. Taking into account the braking properties of the vehicle and the obstacle detection zone, it is possible to effectively implement autonomous mode up to a speed of 2.8 m/s in an unstructured environment.
Vision sensors such as cameras can also face several challenges in recognizing objects in environments with dense vegetation, such as forests or agricultural fields. Vegetation can occlude objects from the view of the camera, making it difficult or impossible to recognize objects hidden behind the foliage. According to our investigations, typical artificial intelligence algorithms such as YOLO v5 are not suitable for recognizing objects in off-road environments with vegetation. More advanced and specially trained computer vision neural networks are required to accurately recognize objects in these types of environments.
Our research considering a real operational environment highlights the importance of extending perception system architecture to analyze material properties and density. In order to operate effectively in off-road environments with vegetation, it is necessary to install specially designed sensors that penetrate certain materials and detect the reflection of the signals. This will be the next step of our work, which will involve testing wideband radar systems in various scenarios and conditions to determine their ability to accurately detect and analyze material density in the off-road environment.

Author Contributions

Conceptualization, M.N.; Methodology, M.N.; Validation, M.N. and J.K.; Formal analysis, M.N.; Resources, M.N.; Supervision, M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by research work no. 55.2022489.PL at the Military Institute of Armoured and Automotive Technology.

Acknowledgments

The authors acknowledge the members of the consortium that developed the TAERO vehicle (WITPIS, STEKOP, AutoPodlasie, and AP Solutions) for enabling tests in an operational environment.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Sánchez-Montero, M.; Toscano-Moreno, M.; Bravo-Arrabal, J.; Barba, J.S.; Vera-Ortega, P.; Vázquez-Martín, R.; Fernandez-Lozano, J.J.; Mandow, A.; García-Cerezo, A. Remote Planning and Operation of a UGV Through ROS and Commercial Mobile Networks. In ROBOT2022: Fifth Iberian Robotics Conference; Tardioli, D., Matellán, V., Heredia, G., Silva, M.F., Marques, L., Eds.; ROBOT 2022; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2022; Volume 589. [Google Scholar]
  2. Bishop, R. A survey of intelligent vehicle applications worldwide. In Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No. 00TH8511), Dearborn, MI, USA, 5 October 2000; IEEE: Piscataway, NJ, USA, 2000; pp. 25–30. [Google Scholar]
  3. Alaba, S.; Gurbuz, A.; Ball, J. A Comprehensive Survey of Deep Learning Multisensor Fusion-based 3D Object Detection for Autonomous Driving: Methods, Challenges, Open Issues, and Future Directions. TechRxiv 2022. [Google Scholar] [CrossRef]
  4. Islam, F.; Ball, J.E.; Goodin, C. Dynamic path planning for traversing autonomous vehicle in off-road environment using MAVS. Proc. SPIE 2022, 12115, 210–221. [Google Scholar]
  5. Hu, J.-W.; Zheng, B.-Y.; Wang, C.; Zhao, C.-H.; Hou, X.-L.; Pan, Q.; Xu, Z. A survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments. Front. Inf. Technol. Electron. Eng. 2020, 21, 675–692. [Google Scholar] [CrossRef]
  6. Sanaullah, M.; Akhtaruzzaman, M.; Hossain, M.A. Land-robot technologies: The integration of cognitive systems in military and defense. NDC E-J. 2022, 2, 123–156. [Google Scholar]
  7. Chen, Y.; Zhang, Y. An overview of research on military unmanned ground vehicles. Binggong Xuebao/Acta Armamentarii 2014, 35, 1696–1706. [Google Scholar]
  8. Hajdu, A.; Krecht, R.; Suta, A.; Árpád, T.; Friedler, F. The Resilience Barriers of Automated Ground Vehicles from Military Perspectives. Chem. Eng. Trans. 2022, 94, 1195–1200. [Google Scholar]
  9. Janos, R.; Sukop, M.; Semjon, J.; Vagas, M.; Galajdova, A.; Tuleja, P.; Koukolová, L.; Marcinko, P. Conceptual design of a leg-wheel chassis for rescue operations. Int. J. Adv. Robot. Syst. 2017, 14, 1729881417743556. [Google Scholar] [CrossRef] [Green Version]
  10. Rosique, F.; Lorente, P.N.; Fernandez, C.; Padilla, A. A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research. Sensors 2019, 19, 648. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Foroutan, M.; Tian, W.; Goodin, C.T. Assessing Impact of Understory Vegetation Density on Solid Obstacle Detection for Off-Road Autonomous Ground Vehicles. ASME Lett. Dyn. Syst. Control 2020, 1, 021008. [Google Scholar] [CrossRef]
  12. Guastella, D.C.; Muscato, G. Learning-Based Methods of Perception and Navigation for Ground Vehicles in Unstructured Environments: A Review. Sensors 2020, 21, 73. [Google Scholar] [CrossRef] [PubMed]
  13. Chen, D.; Zhuang, M.; Zhong, X.; Wu, W.; Liu, Q. RSPMP: Real-time semantic perception and motion planning for autonomous navigation of unmanned ground vehicle in off-road environments. Appl. Intell. 2023, 53, 4979–4995. [Google Scholar] [CrossRef]
  14. Krishnan, P. Design of Collision Detection System for Smart Car Using Li-Fi and Ultrasonic Sensor. IEEE Trans. Veh. Technol. 2018, 67, 11420–11426. [Google Scholar] [CrossRef]
  15. MSPO 2022: Taero optionally Unmanned Ground Vehicle Developed for Polish Airborne Troops. Available online: https://www.janes.com/defence-news/news-detail/mspo-2022-taero-optionally-unmanned-ground-vehicle-developed-for-polish-airborne-troops (accessed on 24 March 2023).
  16. Galar, D.; Kumar, U.; Seneviratne, D. Robots, Drones, UAVs and UGVs for Operation and Maintenance; CRC Press: Boca Raton, FL, USA, 2020. [Google Scholar]
  17. Luo, D.; Wang, J.; Liang, H.-N.; Luo, S.; Lim, E. Monoscopic vs. Stereoscopic Views and Display Types in the Teleoperation of Unmanned Ground Vehicles for Object Avoidance. In Proceedings of the 2021 30th IEEE International Conference on Robot Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 8–12 August 2021; pp. 418–425. [Google Scholar] [CrossRef]
  18. Available online: https://www.etronika.pl/products/cameras/ktd-60/ (accessed on 24 March 2023).
  19. Available online: https://www.stereolabs.com/zed-2/ (accessed on 24 March 2023).
  20. Li, Y.; Ibanez-Guzman, J. Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  21. Lopac, N.; Jurdana, I.; Brnelić, A.; Krljan, T. Application of Laser Systems for Detection and Ranging in the Modern Road Transportation and Maritime Sector. Sensors 2022, 22, 5946. [Google Scholar] [CrossRef] [PubMed]
  22. Available online: https://ouster.com/products/scanning-lidar/os0-sensor/ (accessed on 24 March 2023).
  23. Available online: https://velodynelidar.com/ (accessed on 24 March 2023).
  24. Islam, F.; Nabi, M.M.; Ball, J.E. Off-Road Detection Analysis for Autonomous Ground Vehicles: A Review. Sensors 2022, 22, 8463. [Google Scholar] [CrossRef] [PubMed]
  25. Liu, T.; Liu, D.; Yang, Y.; Chen, Z. Lidar-based Traversable Region Detection in Off-road Environment. In Proceedings of the 38th Chinese Control Conference (CCC2019), Guangzhou, China, 27–30 July 2019; pp. 4548–4553. [Google Scholar]
  26. Rada, J.; Rybansky, M.; Dohnal, F. Influence of Quality of Remote Sensing Data on Vegetation Passability by Terrain Vehicles. ISPRS Int. J. Geo-Inf. 2020, 9, 684. [Google Scholar] [CrossRef]
  27. Muys, B.; Angelstam, P.; Bauhus, J.; Bouriaud, L.; Jactel, H.; Kraigher, H.; Müller, J.; Pettorelli, N.; Pötzelsberger, E.; Primmer, E.; et al. Forest Biodiversity in Europe; European Forest Institute: Joensuu, Finland, 2022. [Google Scholar]
  28. Available online: https://foxglove.dev/ (accessed on 24 March 2023).
  29. YOLOv5. Available online: https://github.com/ultralytics/yolov5 (accessed on 24 March 2023).
  30. Shaban, A.; Meng, X.; Lee, J.; Boots, B.; Fox, D. Semantic terrain classification for off-road autonomous driving. In Proceedings of the Machine Learning Research (PMLR), Almería, Spain, 5–7 October 2022; pp. 619–629. [Google Scholar]
  31. Vandendaele, B.; Martin-Ducup, O.; Fournier, R.A.; Pelletier, G.; Lejeune, P. Mobile Laser Scanning for Estimating Tree Structural Attributes in a Temperate Hardwood Forest. Remote Sens. 2022, 14, 4522. [Google Scholar] [CrossRef]
  32. Liu, B.; Huang, H.; Su, Y.; Chen, S.; Li, Z.; Chen, E.; Tian, X. Tree Species Classification Using Ground-Based LiDAR Data by Various Point Cloud Deep Learning Methods. Remote Sens. 2022, 14, 5733. [Google Scholar] [CrossRef]
Figure 1. Model of the UGV platform. (a) Front view; (b) back view.
Figure 2. User interfaces: service (a) and operator (b).
Figure 3. View of the developed TAERO vehicle: manned (a) and unmanned (b) configuration.
Figure 4. View of the daylight–thermal camera.
Figure 5. Operational principles of LiDAR.
Figure 6. View of Ouster (a) and Velodyne (b) LiDAR with the indicated field of view (FOV).
Figure 7. Overview of typical vegetation in an operational environment: (a) plants; (b) grass.
Figure 8. Main view of the Foxglove Studio application showing the following sensor data: (a) point cloud data (Ouster/Velodyne); (b) daylight camera view; (c) thermal camera view; (d,e) ZED 2i camera RAW data from the left and right sensors, respectively.
Figure 9. Spatial imaging of a man (a) and trees (b) using the Ouster OS0-32 and Velodyne LiDAR systems with a reference image from the ZED 2i camera.
Figure 10. Observation of the barrel (a) using Velodyne VLP-16 and Ouster OS0-32 LiDAR with a reference image from the ZED 2i camera.
Figure 11. Comparison of 3D point-cloud maps generated by LiDAR: Ouster OS0-32 (a) and Velodyne VLP-16 (b).
Figure 12. Views from the daylight–thermal camera with marked objects: sky (a), trees (b), vehicle hood (c), and fence post (d).
Figure 13. Location of a suitcase (a), people (b), and barrel (c) in the vehicle’s operational environment.
Figure 14. Location of hidden objects in the form of barrels (a,b) in the vegetation against the background.
Figure 15. Typical object classification from the daylight–thermal camera using the YOLO v5 network (the barrel marked (a) was not recognized).
Figure 16. Object classification from the daylight–thermal camera using the YOLO v5 network.
Figure 17. Anticollision system: obstacle detection (a) and braking distance as a function of speed (b).
Figure 18. Detection of a tree trunk (a) as an obstacle and measurements from the installed LiDAR system (b).
Figure 19. Number of points from LiDAR considering object dimensions (W × H): 0.6 m × 2.0 m.
Table 1. Main parameters of the daylight–thermal observation camera.

Parameter                  | Thermal Camera (IR)      | Daylight Camera (TV)
Field of view (horizontal) | 60°                      | 60°
Detector                   | Uncooled bolometric FPA  | 1/3″ CMOS
Resolution                 | 640 × 480 px             | 1920 × 1080 px
Spectral response          | 8 μm to 12 μm            | -
Table 2. Main parameters of Ouster OS0-32 and Velodyne VLP-16.

Parameter                       | Ouster OS0-32                      | Velodyne VLP-16
Channels                        | 32                                 | 16
Range                           | Up to 50 m                         | Up to 100 m
Range accuracy                  | ±3 cm                              | ±3 cm
Field of view (vertical)        | 90° (−45° to +45°)                 | 30° (−15° to +15°)
Angular resolution (vertical)   | 2.125°                             |
Field of view (horizontal)      | 360°                               | 360°
Angular resolution (horizontal) | 0.18°–0.7° (configurable)          | 0.18°–0.7° (configurable)
Rotation rate                   | 10 or 20 Hz (configurable)         | 5–20 Hz (configurable)
Points per second               | 655,360                            | Up to 300,000
Internal IMU sensor             | 3-axis gyro, 3-axis accelerometer  | -
Operating temperature           | −20 °C to +55 °C (with mount)      | −10 °C to +60 °C